Technology Regulation in the 21st Century: Part 1
Charting New Territory in Switzerland, the U.S., and Around the World: How should new and emerging technologies be regulated? Is it realistic to think that technology can be regulated in a meaningful way? While regulation is largely intended to protect consumers and the public, do the regulators understand the technologies they seek to regulate? These questions, and many more, are at the forefront of discussions among business leaders and government officials. Here is some insight into the current state of play in Switzerland and in the U.S.
This briefing is the first in a series on technology policy and the emerging opportunities and risks associated with the unrelenting pace of technological change. Part 1 also serves as a primer for the participants of the ASF’s inaugural “Building Bridges Tech Tour” in Silicon Valley (September 25 – 28, 2019).
By Karina Rollins
With the rise of the Internet over the past 25 years, technology is now a part of daily life in Switzerland, the United States, and much of the rest of the world.
With so much of daily life dependent on technology, significant changes are happening in the business world and in society as a whole. The increasing interconnection of everything—including the so-called Internet of Things—has increased convenience and choices for consumers, has put a virtual world of knowledge “at our fingertips,” and even allows easier and faster tracking of, for instance, the spread of contagious diseases. From everyday conveniences to life-saving measures, new technologies have made it possible.
But, as with everything, there is a downside. GPS trackers built into phones; social media that allow spying and stalking; trackable online purchases and political opinions; “Alexa” in private homes essentially functioning as an open microphone to the Web; and “big data” stored by companies and government agencies all raise legitimate concerns about identity theft, data security, and the erosion of privacy rights. Is this new technology turning the United States, Switzerland, and the rest of the developed world into what ACLU senior technologist Daniel Kahn Gillmor calls the “surveillance economy,” or what Harvard Business School professor Shoshana Zuboff calls “surveillance capitalism”?
Ideally, regulation would offer privacy and security protections, while continuing to allow the new technologies to change the world. But, the entire world is entering uncharted territory here. Few may know how to reliably regulate this revolutionary and constantly evolving technology, but some regulations are taking shape.
The processing of personal data in Switzerland is largely regulated by the 1992 Federal Act on Data Protection (FADP). That is about to change: With the European Union’s General Data Protection Regulation (GDPR, in effect since May 2018), and with the EU’s ePrivacy Regulation in draft form (implementation planned for 2020), Switzerland has undertaken a complete overhaul and revision of the FADP.
According to an analysis by PricewaterhouseCoopers (PwC), the “new FADP is very broad and will affect almost every company in Switzerland.” PwC identifies the five most important changes:
Sanctions. The new FADP stipulates that individuals who intentionally breach the new law can be fined up to $250,000. (Under the old FADP, fines reached only $10,000.)
Reporting data security breaches. If data security is breached, data controllers must immediately report to the Federal Data Protection and Information Commissioner (FDPIC) any resulting increase in risk to the safety of the affected individuals. They may also have to inform the affected individuals themselves.
Sensitive personal data. The new FADP expands the list of data that fall under the category of sensitive personal data to include genetic data and biometric data (such as fingerprints) that unequivocally identify a person.
Technical design and default settings conducive to data protection. Following the “privacy by design” principle, data controllers and processors will have to operate under more precise due diligence obligations—they will have to include privacy protection measures as early as the planning stage of processing. Default settings will have to be set under the standard of “privacy by default,” to ensure that any personal data is processed solely for its relevant purpose.
Data protection impact assessment.If any planned data processing will involve an increased risk to individuals, data controllers and processors will have to undertake a data protection impact assessment that includes risks as well as suitable mitigating measures.
The FDPIC is the Swiss authority that supervises data processing by federal bodies and private persons, including enterprises. Personal data may be transferred outside of Switzerland if the destination country is deemed to offer an adequate level of data protection. The FDPIC publishes a list of these countries, which includes most countries in the EU and the European Economic Area.
Since 2017, this list also includes the United States. As of April 12, 2017, U.S. companies can be certified under the Swiss-U.S. Privacy Shield, thereby making themselves subject to its rules. To be certified, U.S. companies must register online with the U.S. Department of Commerce’s International Trade Administration Privacy Shield.
Then there are the moral considerations: The Swiss Digital Initiative, a public-private initiative that intends to promote ethical behavior in the digital world, was announced in early September 2019. It will formally launch on the occasion of the World Economic Forum Annual Meeting in Davos in January 2020.
Adapting Existing Regulations. Can existing regulations be applied to new technologies and concepts? Possibly, but they must be adapted. While Switzerland has a long-standing tradition of small, family-owned businesses, for instance, the regulations and institutions that manage them do not necessarily apply to, or may even hinder, startup companies. When it comes to tax regulation, the EPFL College of Management of Technology explains in its 2016 analysis “Switzerland’s Digital Future” that current taxation policies, in which startups are taxed based on their external valuation, are potentially detrimental to the Swiss startup ecosystem. Currently, when external investors make an investment that increases the value of the company, the founders often incur a large tax bill that cannot be deferred. Thus the most successful companies (the ones whose valuations are growing the most quickly) have incentives to set up operations outside Switzerland while they are raising funds.
While traditional, family-owned businesses and startup companies are both small businesses, startups operate under a different process, offer non-traditional goods or services, or are entering a new and emerging market—or all three. Unlike a traditional small business, such as a dry cleaner or a café, a startup aims to be big—growth and scalability are crucial factors.
What is important for any innovative company in a quickly evolving technology environment, especially in the digital space, is flexible regulation. As the EPFL College of Management of Technology puts it:
[M]any of the attractive IT-driven markets are developing extremely quickly, while traditional regulatory approaches take a long time and are intended to last for decades. Some experimentation with regulatory “sandboxes” (don’t regulate until the market is of a certain size, or until consumers claim to be harmed) along with the ability to rapidly revisit digital-oriented regulations, could be highly beneficial to growing digital businesses and keeping them based in Switzerland while they grow.
Furthermore, regulations need not be restrictive, and instead of creating new ones, existing regulations can be loosened, and can be pro-innovation. For instance, financial technology (Fintech), as the EPFL College states, can represent an occasion for a Swiss-style financial innovation, particularly in regard to blockchain infrastructure governance and its potential application to areas such as smart cities and energy efficiency. Indeed, the Federal Council has already begun thinking about making changes in banking and finance regulations, specifically the Banking Act (BankA) to make the Swiss environment friendlier to Fintech. The Federal Council also indicated that some Fintech firms could currently claim an exception to the BankA…
The Swiss Financial Market Supervisory Authority (FINMA), for its part, wants to bring blockchain companies into compliance with existing anti-money-laundering and anti-terror-financing regulations. On the other hand, FINMA last year proposed loosening anti-money-laundering rules for smaller Fintech companies.
Dual Role of Regulation. While some oppose any regulation in general, the EPFL College sees the need for regulation and specifies a dual role:
[G]iven the volume, variety, and velocity of big data, it is crucial to have regulations and policies in place that on the one hand enable those who will be manipulating this data to do so effectively and on the other hand protect those people whose data will be manipulated. Additionally, as when any type of information becomes massively available and to a certain extent decontextualized, it becomes important to have policies and regulations/checks that are able to verify the veracity of the data collected and used.
Digital Literacy. With steadily growing online offerings (increasingly in place of the traditional ways), from buying plane tickets to accessing government services and voting, digital literacy among the public becomes ever more important. Mere access to digital devices, which Switzerland has aplenty, does not necessarily translate into digital literacy. The Swiss achieved a low overall score of 46 out of 100 for basic computer skills in the European Computer Driving License (ECDL) Foundation’s 2016 evaluation. The ECDL Foundation points out that in Switzerland, as in other digitally advanced countries such as Germany, Austria, Denmark, and Finland, the public, even young people, overestimates its digital skills. Digital skills will matter more and more in the “no turning back” digital age: Since no regulation, no matter how carefully thought out, will be able to offer reliable protection at all times, every user of the new technologies will have to play a part in minimizing his or her risk of data breaches and invasion of privacy.
There is no single principal data-protection law in the United States. What exists is “a jumble of hundreds of laws” on both the federal and state levels that aim to protect the personal data of U.S. residents (such as the 1974 Privacy Act, safe harbor regulations, and the 1996 Health Insurance Portability and Accountability Act (HIPAA)).
The regulation of new technologies in the United States is similar to that in Switzerland—there is very little of it. The U.S. faces the same concerns as Switzerland about data breaches and loss of privacy. Massive data breaches have already happened in the United States. In 2015, the computers of the U.S. government’s Office of Personnel Management were infiltrated by Chinese hackers, exposing the private data of between 4 million and 21.5 million people.
One of the ways to safeguard privacy, of course, is to use encryption. The question then arises—should modern encryption technology be regulated? Just how far should privacy protection go—is it reasonable, for instance, to bar the FBI from unlocking the smart phones of known, or even suspected, terrorists or active pedophiles? Should U.S. companies that produce the encryption technology have the legal right to refuse assistance to the FBI in such cases? Should the FBI have the legal right to force U.S. companies to comply?
There are also more philosophical issues to be considered, such as whether an individual has the “right to be forgotten” online—a controversial concept that has gained traction in the EU, and is criticized, largely by free-market thinkers in the United States, as Internet censorship.
Some states have already taken action to attempt to protect privacy: The California Consumer Privacy Act is a sweeping data-privacy law that is scheduled to go into effect on January 1, 2020. (Google and other tech companies have started “a late bid to water down” the law before then.)
In an American University Business Law Review article titled “Regulation Tomorrow: What Happens When Technology Is Faster than the Law?” Mark Fenwick, Wulf Kaal, and Erik Vermeulen explore the reasons for the relative lack of regulation of new technology:
In an age of constant, complex and disruptive technological innovation, knowing what, when, and how to structure regulatory interventions has become more difficult. Regulators find themselves in a situation where they believe they must opt for either reckless action (regulation without sufficient facts) or paralysis (doing nothing). [C]aution tends to trump risk. But such caution merely functions to reinforce the status quo and makes it harder for new technologies to reach the market in a timely or efficient manner.
Technology reporter Kashmir Hill (YL 2019) sees a different reason for the lack of regulation in the U.S.:
Europe is on the case, its regulators fining Google and saying Facebook can’t combine users’ data from Facebook, WhatsApp, and Instagram without their consent. But antitrust regulators in the U.S. have stayed away from these companies because their services are cheap or free, so they’re perceived as pro-consumer, which is ultimately what regulators want to encourage. But how does that work when the “consumer” is what the company is selling?
Her solution? “[I]f we want to get away from monopolies and surveillance economies, we might need to rethink the assumption that everything on the internet should be free.”
Fenwick, Kaal, and Vermeulen focus on a different path: “[L]awmaking and regulatory design needs to become more proactive, dynamic and responsive.” Such dynamic and responsive action will require a delicate balancing act: “This entails ensuring that regulation is not adopted too soon and stifles or distorts technological development, but not so late that problems arise as a result of the absence of effect[ive] regulation.”
The three authors point out that the “time frame for rulemaking in the existing regulatory infrastructure is largely inadequate to address regulatory challenges associated with disruptive innovation.” Of course, in addition to the consideration of when to regulate, there are the other considerations of what to regulate, and how. And, given the speed of product innovation, “[n]ew regulations pertaining to an innovative product could be obsolete before they are finalized.”
Furthermore, regulators—and most other people—“simply lack the experience or imagination to predict what negative possibilities may be associated with a piece of new technology.” By definition, the “regulation of any disruptive new technology is always going to be reactive and based on an uncertain and politicized factual basis.”
Then again, regulation of disruptive technologies is not a new concept—after all, the industrial revolution was most disruptive to society, and regulators eventually figured out imperfect yet ultimately workable ways to regulate the new industries. On the other hand, today’s new and emerging technologies seem to be in a different category altogether, with even farther-reaching and less foreseeable effects than the industrial revolution had.
Artificial Intelligence (AI). AI is not merely a highly complex computer algorithm. As Fenwick, Kaal, and Vermeulen explain it, “AI tries to emulate human thought processes and rational human behavior through self-learning and storage of experiences. Because it emulates human behavior, AI can act differently in the same situations, depending on the actions previously performed.” While the pace of legislation with regard to AI has been slow, this may be changing. A number of city, state, and federal regulations have been proposed or recently enacted in the United States:
City ban on facial-recognition software. In May 2019, San Francisco became the first major city to legislatively ban facial-recognition software. Yet the ban applies only to use by police and government agencies, not to private entities—raising the question of why mega-corporations should be trusted with this powerful tool for identifying private citizens, while the police, working to protect those same citizens, are barred from using it to track criminals. Similar bans are under consideration in Massachusetts and elsewhere.
For years, Facebook had been automatically employing facial-recognition technology to “tag” people in photos posted on the social media platform. In early September 2019, Facebook announced that this feature will now be off by default, and users will have to specifically opt to turn it on—an apparent sign of Facebook’s awareness of increasing wariness among the public.
State regulations for using AI in hiring decisions. Also in May 2019, the Illinois General Assembly passed the first-of-its-kind Artificial Intelligence Video Interview Act, which imposes restrictions on employers who use “interview bots”—software that uses AI to evaluate a candidate’s facial expressions, body language, word choice, and tone of voice.
Federal Algorithmic Accountability Act. In April 2019, congressional Democrats introduced the Algorithmic Accountability Act, with the aim of enhancing federal oversight of AI and data privacy (not unlike the EU’s GDPR). The law would give oversight responsibility to the Federal Trade Commission.
While data might be better protected by such proposals and laws, when it comes to protecting human life, things may look darker. As former U.S. Secretary of State Henry Kissinger warns, arms control of AI weapons, for instance, may not be possible. He is not the only one who is worried: 4,500 AI experts, 21 Nobel Peace Prize laureates, more than 110 NGOs, 28 countries, the European Parliament, the United Nations, and many private citizens support a ban on fully autonomous weapons.
Digital Literacy. For many years, the “digital divide” was defined as the gap between those who have access to digital technology and those who do not. But, as is the case in Switzerland, mere access to technology does not necessarily equate to effective and safe use of that technology. A Pew Research Center survey finds that “Americans fall along a spectrum of preparedness…and many are not eager or ready to take the plunge.” Digital literacy relates directly to online safety: As described in U.S. News & World Report, “The elderly, those with low incomes and people who speak limited English are especially at risk for targeting and personal information breaches online because they may not understand cybersecurity safeguards as well as the tech savvy.” Of course, there are also a good number of fully plugged-in young people who leave themselves vulnerable to online data breaches.
The Uncertain Future of Regulation
The information presented here barely scratches the surface of this sprawling, complex, at times confusing, quickly changing, and all-encompassing issue. So many new technologies, uses, benefits, and dangers. So many new areas for regulation, or lack thereof. So many varied and opposing opinions.
How should the exciting innovations in biotechnology be regulated? While some regulations of cryptocurrencies are beginning to take shape, how should regulators approach the broader blockchain technology?
In 2018, the European Court of Justice ruled that the relatively new “genome editing”—used to produce GMO food crops, for instance—must be subject to the EU’s strict GMO regulations. Is this overregulation or necessary consumer protection?
Should Twitter ban “hateful” speech? Who defines which speech qualifies as hateful? Is free speech in danger?
What about the fifth-generation (5G) wireless networks and their implications for national security, as well as possible health concerns?
And, to add to the difficulties—how does one country’s regulatory approach work in the international arena, where different countries have different polices as well as varying degrees of enforcement?
Regulators everywhere have their work cut out for them.
Further Reading

Switzerland’s Digital Future: Facts, Challenges, Recommendations, EPFL College of Management of Technology, 2016
I’m a Tech CEO and I Don’t Think Tech CEOs Should Be Making Policy, Alex Karp, The Washington Post, 2019
Will We Ban “Hate Speech”? Lessons from Europe and the Threat of Big Tech, Paul Coleman et al., Heritage Foundation panel discussion (video), 2019
The Tragedy of Tech Companies: Getting the Regulation They Want, Tom Wheeler, Brookings Institution, 2019
Why Government Shouldn’t Tell Facebook, Google, and Twitter What to Do, Diane Katz, The Heritage Foundation, 2018
How the Enlightenment Ends, Henry Kissinger, The Atlantic, June 2018
How Can We Regulate the Digital Revolution? Gary Coleman, World Economic Forum, 2017
Legal Education in the Blockchain Revolution, Mark Fenwick, Wulf A. Kaal, and Erik P.M. Vermeulen, Vanderbilt Journal of Entertainment & Technology Law, 2017
A Year After San Bernardino and Apple–FBI, Where Are We On Encryption? Alina Selyukh, NPR, 2016
Inside the Cyberattack that Shocked the US Government, Brendan I. Koerner, Wired, 2016
I Cut the Big Five Out of My Life—It Was Hell! Kashmir Hill, Gizmodo, 2019
Der perfekte Albtraum – wenn Überwachungskapitalismus und Überwachungsstaat zusammenwachsen, Eric Gujer, Neue Zürcher Zeitung, 2019