Archives For GDPR

Policy discussions about the use of personal data often have “less is more” as a background assumption: that data is overconsumed relative to some hypothetical optimal baseline. This overriding skepticism has been the backdrop for sweeping new privacy regulations, such as the California Consumer Privacy Act (CCPA) and the EU’s General Data Protection Regulation (GDPR).

More recently, as part of the broad pushback against data collection by online firms, some have begun to call for creating property rights in consumers’ personal data or for data to be treated as labor. Prominent backers of the idea include New York City mayoral candidate Andrew Yang and computer scientist Jaron Lanier.

The discussion has escaped the halls of academia and made its way into popular media. During a recent discussion with Tesla founder Elon Musk, comedian and podcast host Joe Rogan argued that Facebook is “one gigantic information-gathering business that’s decided to take all of the data that people didn’t know was valuable and sell it and make f***ing billions of dollars.” Musk appeared to agree.

The animosity exhibited toward data collection might come as a surprise to anyone who has taken Econ 101. Goods ideally end up with those who value them most. A firm finding profitable ways to repurpose unwanted scraps is just the efficient reallocation of resources. This applies as much to personal data as to literal trash.

Unfortunately, in the policy sphere, few are willing to recognize the inherent trade-off between the value of privacy, on the one hand, and the value of various goods and services that rely on consumer data, on the other. Ideally, policymakers would look to markets to find the right balance, which markets often can strike. When the transfer of data is hardwired into an underlying transaction, parties have ample room to bargain.

But this is not always possible. In some cases, transaction costs will prevent parties from bargaining over the use of data. The question is whether such situations are so widespread as to justify the creation of data property rights, with all of the allocative inefficiencies they entail. Critics wrongly assume the solution is both to create data property rights and to allocate them to consumers. But there is no evidence to suggest that, at the margin, heightened user privacy necessarily outweighs the social benefits that new data-reliant goods and services would generate. Recent experience in the worlds of personalized medicine and the fight against COVID-19 help to illustrate this point.

Data Property Rights and Personalized Medicine

The world is on the cusp of a revolution in personalized medicine. Advances such as the improved identification of biomarkers, CRISPR genome editing, and machine learning could usher in a new wave of treatments that markedly improve health outcomes.

Personalized medicine uses information about a person’s own genes or proteins to prevent, diagnose, or treat disease. Genetic-testing companies like 23andMe or Family Tree DNA, with the large troves of genetic information they collect, could play a significant role in helping the scientific community to further medical progress in this area.

However, despite the obvious potential of personalized medicine, many of its real-world applications are still very much hypothetical. While governments could act in any number of ways to accelerate progress in the field, recent policy debates have instead focused on whether to create a system of property rights covering personal genetic data.

Some raise concerns that it is pharmaceutical companies, not consumers, who will reap the monetary benefits of the personalized medicine revolution, and that advances are achieved at the expense of consumers’ and patients’ privacy. They contend that data property rights would ensure that patients earn their “fair” share of personalized medicine’s future profits.

But it’s worth examining the other side of the coin. There are few things people value more than their health. U.S. governmental agencies place the value of a single life at somewhere between $1 million and $10 million. The commonly used quality-adjusted life year metric offers valuations that range from $50,000 to upward of $300,000 per incremental year of life.

It therefore follows that the trivial sums users of genetic-testing kits might derive from a system of data property rights would likely be dwarfed by the value they would enjoy from improved medical treatments. A strong case can be made that policymakers should prioritize advancing the emergence of new treatments, rather than attempting to ensure that consumers share in the profits generated by those potential advances.
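
To make this comparison concrete, consider a rough back-of-the-envelope calculation. The QALY valuations are those cited above; the expected health gain and the per-user data payment are purely hypothetical assumptions chosen for illustration, not estimates from any of the sources discussed here:

```python
# Illustrative back-of-the-envelope comparison. The QALY valuations come from
# the range cited above; the health gain and data payment are hypothetical.

QALY_VALUE_LOW = 50_000    # $ per quality-adjusted life year (low end)
QALY_VALUE_HIGH = 300_000  # $ per quality-adjusted life year (high end)

# Suppose personalized-medicine advances yield a small expected health gain
# for a given consumer, e.g. 0.05 extra quality-adjusted life years.
expected_qaly_gain = 0.05

# Suppose a data property-rights scheme would pay each genetic-testing
# customer a modest annual royalty for the use of their data.
hypothetical_data_payment = 20  # $ per year, purely illustrative

benefit_low = expected_qaly_gain * QALY_VALUE_LOW    # $2,500
benefit_high = expected_qaly_gain * QALY_VALUE_HIGH  # $15,000

print(f"Expected value of health gain: ${benefit_low:,.0f} to ${benefit_high:,.0f}")
print(f"Hypothetical annual data payment: ${hypothetical_data_payment:,.0f}")
print(f"Ratio (low end): {benefit_low / hypothetical_data_payment:.0f}x")
```

Even under these deliberately modest assumptions, the expected value of better treatments exceeds the hypothetical data royalty by two orders of magnitude.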

These debates drew increased attention last year, when 23andMe signed a strategic agreement licensing to the pharmaceutical company Almirall the rights to an antibody that 23andMe had developed. Critics pointed out that 23andMe’s customers, whose data had presumably been used to discover the potential treatment, received no monetary benefits from the deal. Journalist Laura Spinney wrote in The Guardian newspaper:

23andMe, for example, asks its customers to waive all claims to a share of the profits arising from such research. But given those profits could be substantial—as evidenced by the interest of big pharma—shouldn’t the company be paying us for our data, rather than charging us to be tested?

In the deal’s wake, some argued that personal health data should be covered by property rights. A cardiologist quoted in Fortune magazine opined: “I strongly believe that everyone should own their medical data—and they have a right to that.” But this strong belief, however widely shared, ignores important lessons that law and economics has to teach about property rights and the role of contractual freedom.

Why Do We Have Property Rights?

Among the many important features of property rights is that they create “excludability,” the ability of economic agents to prevent third parties from using a given item. In the words of law professor Richard Epstein:

[P]roperty is not an individual conception, but is at root a social conception. The social conception is fairly and accurately portrayed, not by what it is I can do with the thing in question, but by who it is that I am entitled to exclude by virtue of my right. Possession becomes exclusive possession against the rest of the world…

Excludability helps to facilitate the trade of goods, offers incentives to create those goods in the first place, and promotes specialization throughout the economy. In short, property rights create a system of exclusion that supports creating and maintaining valuable goods, services, and ideas.

But property rights are not without drawbacks. Physical or intellectual property can lead to a suboptimal allocation of resources, notably by conferring market power (though this effect is often outweighed by increased ex ante incentives to create and innovate). Similarly, property rights can give rise to thickets that significantly increase the cost of amassing complementary pieces of property. Often-cited examples are the historic (but contested) cases of tolling on the Rhine River and the airplane patent thicket of the early 20th century. Finally, strong property rights might also lead to holdout behavior, which can be addressed through top-down tools, like eminent domain, or private mechanisms, like contingent contracts.

In short, though property rights—whether they cover physical or information goods—can offer vast benefits, there are cases where they might be counterproductive. This is probably why, throughout history, property laws have evolved to strike a reasonable balance between providing incentives to create goods and ensuring their efficient allocation and use.

Personal Health Data: What Are We Trying to Incentivize?

There are at least three critical questions we should ask about proposals to create property rights over personal health data.

  1. What goods or behaviors would these rights incentivize or disincentivize that are currently over- or undersupplied by the market?
  2. Are goods over- or undersupplied because of insufficient excludability?
  3. Could these rights undermine the efficient use of personal health data?

Much of the current debate centers on data obtained from direct-to-consumer genetic-testing kits. In this context, almost by definition, firms only obtain consumers’ genetic data with their consent. In western democracies, the rights to bodily integrity and to privacy generally make it illegal to administer genetic tests against a consumer or patient’s will. This makes genetic information naturally excludable, so consumers already benefit from what is effectively a property right.

When consumers decide to use a genetic-testing kit, the terms set by the testing firm generally stipulate how their personal data will be used. 23andMe has a detailed policy to this effect, as does Family Tree DNA. In the case of 23andMe, consumers can decide whether their personal information can be used for the purpose of scientific research:

You have the choice to participate in 23andMe Research by providing your consent. … 23andMe Research may study a specific group or population, identify potential areas or targets for therapeutics development, conduct or support the development of drugs, diagnostics or devices to diagnose, predict or treat medical or other health conditions, work with public, private and/or nonprofit entities on genetic research initiatives, or otherwise create, commercialize, and apply this new knowledge to improve health care.

Because this transfer of personal information is hardwired into the provision of genetic-testing services, there is space for contractual bargaining over the allocation of this information. The right to use personal health data will go toward the party that values it most, especially if information asymmetries are weeded out by existing regulations or business practices.

Regardless of data property rights, consumers have a choice: they can purchase genetic-testing services and agree to the provider’s data policy, or they can forgo the services. The service provider cannot obtain the data without entering into an agreement with the consumer. While competition between providers will affect parties’ bargaining positions, and thus the price and terms on which these services are provided, data property rights likely will not.

So, why do consumers transfer control over their genetic data? The main reason is that genetic information is inaccessible and worthless without the addition of genetic-testing services. Consumers must pass through the bottleneck of genetic testing for their genetic data to be revealed and transformed into usable information. It therefore makes sense to transfer the information to the service provider, who is in a much stronger position to draw insights from it. From the consumer’s perspective, the data is not even truly “transferred,” as the consumer had no access to it before the genetic-testing service revealed it. The value of this genetic information is then netted out in the price consumers pay for testing kits.

If personal health data were undersupplied by consumers and patients, testing firms could sweeten the deal and offer them more in return for their data. U.S. copyright law covers original compilations of data, while EU law gives 15 years of exclusive protection to the creators of original databases. Legal protections for trade secrets could also play some role. Thus, firms have some incentives to amass valuable health datasets.

But some critics argue that health data is, in fact, oversupplied. Generally, such arguments assert that agents do not account for the negative privacy externalities suffered by third parties, such as adverse-selection problems in insurance markets. For example, Jay Pil Choi, Doh Shin Jeon, and Byung Cheol Kim argue:

Genetic tests are another example of privacy concerns due to informational externalities. Researchers have found that some subjects’ genetic information can be used to make predictions of others’ genetic disposition among the same racial or ethnic category.  … Because of practical concerns about privacy and/or invidious discrimination based on genetic information, the U.S. federal government has prohibited insurance companies and employers from any misuse of information from genetic tests under the Genetic Information Nondiscrimination Act (GINA).

But if these externalities exist (most of the examples cited by scholars are hypothetical), they are likely dwarfed by the tremendous benefits that could flow from the use of personal health data. Put differently, the assertion that “excessive” data collection may create privacy harms should be weighed against the possibility that the same collection may also lead to socially valuable goods and services that produce positive externalities.

In any case, data property rights would do little to limit these potential negative externalities. Consumers and patients are already free to agree to terms that allow or prevent their data from being resold to insurers. It is not clear how data property rights would alter the picture.

Proponents of data property rights often claim they should be associated with some form of collective bargaining. The idea is that consumers might otherwise fail to receive their “fair share” of genetic-testing firms’ revenue. But what critics portray as asymmetric bargaining power might simply be the market signaling that genetic-testing services are in high demand, with room for competitors to enter the market. Shifting rents from genetic-testing services to consumers would undermine this valuable price signal and, ultimately, diminish the quality of the services.

Perhaps more importantly, to the extent that they limit the supply of genetic information—for example, because firms are forced to pay higher prices for data and thus acquire less of it—data property rights might hinder the emergence of new treatments. If genetic data is a key input to develop personalized medicines, adopting policies that, in effect, ration the supply of that data is likely misguided.

Even if policymakers do not directly put their thumb on the scale, data property rights could still harm pharmaceutical innovation. If existing privacy regulations are any guide—notably, the previously mentioned GDPR and CCPA, as well as the federal Health Insurance Portability and Accountability Act (HIPAA)—such rights might increase red tape for pharmaceutical innovators. Privacy regulations routinely limit firms’ ability to put collected data to new and previously unforeseen uses. They also limit parties’ contractual freedom when it comes to gathering consumers’ consent.

At the margin, data property rights would make it more costly for firms to amass socially valuable datasets. This would effectively move the personalized medicine space further away from a world of permissionless innovation, thus slowing down medical progress.

In short, there is little reason to believe health-care data is misallocated. Proposals to reallocate rights to such data based on idiosyncratic distributional preferences threaten to stifle innovation in the name of privacy harms that remain mostly hypothetical.

Data Property Rights and COVID-19

The trade-off between users’ privacy and the efficient use of data also has important implications for the fight against COVID-19. Since the beginning of the pandemic, several promising initiatives have been thwarted by privacy regulations and concerns about the use of personal data. This has potentially prevented policymakers, firms, and consumers from putting information to its optimal social use. High-profile issues have included the deployment of contact-tracing apps and “green pass” schemes.

Each of these cases may involve genuine privacy risks. But to the extent that they do, those risks must be balanced against the potential benefits to society. If privacy concerns prevent us from deploying contact tracing or green passes at scale, we should question whether the privacy benefits are worth the cost. The same is true for rules that prohibit amassing more data than is strictly necessary, as is required by data-minimization obligations included in regulations such as the GDPR.

If our initial question were instead whether the benefits of a given data-collection scheme outweigh its potential costs to privacy, incentives could be set such that competition between firms would reduce the amount of data collected—at least where minimized data collection is, indeed, valuable to users. Yet these considerations are almost completely absent from the COVID-19-related privacy debates, as they are from the broader privacy debate. Against this backdrop, the case for personal data property rights is dubious.

Conclusion

The key question is whether policymakers should make it easier or harder for firms and public bodies to amass large sets of personal data. This requires asking whether personal data is currently under- or over-provided, and whether the additional excludability that would be created by data property rights would offset their detrimental effect on innovation.

Swaths of personal data currently lie untapped. With the proper incentive mechanisms in place, this idle data could be mobilized to develop personalized medicines and to fight the COVID-19 outbreak, among many other valuable uses. By making such data more onerous to acquire, property rights in personal data might stifle the assembly of novel datasets that could be used to build innovative products and services.

On the other hand, when dealing with diffuse and complementary data sources, transaction costs become a real issue and the initial allocation of rights can matter a great deal. In such cases, unlike the genetic-testing kits example, it is not certain that users will be able to bargain with firms, especially where their personal information is exchanged by third parties.

If optimal reallocation is unlikely, should property rights go to the person covered by the data or to the collectors (potentially subject to user opt-outs)? Proponents of data property rights assume the first option is superior. But if the goal is to produce groundbreaking new goods and services, granting rights to data collectors might be a superior solution. Ultimately, this is an empirical question.

As Richard Epstein puts it, the goal is to “minimize the sum of errors that arise from expropriation and undercompensation, where the two are inversely related.” Rather than approach the problem with the preconceived notion that initial rights should go to users, policymakers should ensure that data flows to those economic agents who can best extract information and knowledge from it.

As things stand, there is little to suggest that the trade-offs favor creating data property rights. This is not an argument for requisitioning personal information or preventing parties from transferring data as they see fit, but simply for letting markets function, unfettered by misguided public policies.

The European Commission has unveiled draft legislation (the Digital Services Act, or “DSA”) that would overhaul the rules governing the online lives of its citizens. The draft rules are something of a mixed bag. While online markets present important challenges for law enforcement, the DSA would significantly increase the cost of doing business in Europe and harm the very freedoms European lawmakers seek to protect. The draft’s newly proposed “Know Your Business Customer” (KYBC) obligations, however, will enable smoother operation of the liability regimes that currently apply to online intermediaries. 

These reforms come amid a rash of headlines about election meddling, misinformation, terrorist propaganda, child pornography, and other illegal and abhorrent content spread on digital platforms. These developments have galvanized debate about online liability rules.

Existing rules, codified in the e-Commerce Directive, largely absolve “passive” intermediaries that “play a neutral, merely technical and passive role” from liability for content posted by their users so long as they remove it once notified. “Active” intermediaries have more legal exposure. This regime isn’t perfect, but it seems to have served the EU well in many ways.

With its draft regulation, the European Commission is effectively arguing that those rules fail to address the legal challenges posed by the emergence of digital platforms. As the EC’s press release puts it:

The landscape of digital services is significantly different today from 20 years ago, when the eCommerce Directive was adopted. […]  Online intermediaries […] can be used as a vehicle for disseminating illegal content, or selling illegal goods or services online. Some very large players have emerged as quasi-public spaces for information sharing and online trade. They have become systemic in nature and pose particular risks for users’ rights, information flows and public participation.

Online platforms initially hoped lawmakers would agree to some form of self-regulation, but those hopes were quickly dashed. Facebook released a white paper this Spring proposing a more moderate path that would expand regulatory oversight to “ensure companies are making decisions about online speech in a way that minimizes harm but also respects the fundamental right to free expression.” The proposed regime would not impose additional liability for harmful content posted by users, a position that Facebook and other internet platforms reiterated during congressional hearings in the United States.

European lawmakers were not moved by these arguments. EU Commissioner for Internal Market and Services Thierry Breton, among other European officials, dismissed Facebook’s proposal within hours of its publication, saying:

It’s not enough. It’s too slow, it’s too low in terms of responsibility and regulation.

Against this backdrop, the draft DSA includes many far-reaching measures: transparency requirements for recommender systems, content moderation decisions, and online advertising; mandated sharing of data with authorities and researchers; and numerous compliance measures that include internal audits and regular communication with authorities. Moreover, the largest online platforms—so-called “gatekeepers”—will have to comply with a separate regulation that gives European authorities new tools to “protect competition” in digital markets (the Digital Markets Act, or “DMA”).

The upshot is that, if passed into law, the draft rules will place tremendous burdens upon online intermediaries. This would be self-defeating. 

Excessive regulation or liability would significantly increase intermediaries’ cost of doing business, leading to smaller networks and higher barriers to access for many users. Stronger liability rules would also encourage platforms to play it safe, such as by quickly de-platforming and refusing access to anyone who plausibly engaged in illegal activity. Such an outcome would harm the very freedoms European lawmakers seek to protect.

This could prove particularly troublesome for small businesses that find it harder to compete against large platforms due to rising compliance costs. In effect, the new rules will increase barriers to entry, as has already been seen with the GDPR.

In the commission’s defense, some of the proposed reforms are more appealing. This is notably the case with the KYBC requirements, as well as the decision to leave most enforcement to member states, where service providers have their main establishments. The latter is likely to preserve regulatory competition among EU members to attract large tech firms, potentially limiting regulatory overreach.

Indeed, while the existing regime does, to some extent, curb the spread of online crime, it does little for the victims of cybercrime, who ultimately pay the price. Removing illegal content doesn’t prevent it from reappearing in the future, sometimes on the same platform. Importantly, hosts have no obligation to provide the identity of violators to authorities, or even to know their identity in the first place. The result is an endless game of “whack-a-mole”: illegal content is taken down, but immediately reappears elsewhere. This status quo enables malicious users to upload illegal content, such as the material that recently led card networks to cut all ties with Pornhub.

Victims arguably need additional tools. This is what the Commission seeks to achieve with the DSA’s “traceability of traders” requirement, a form of KYBC:

Where an online platform allows consumers to conclude distance contracts with traders, it shall ensure that traders can only use its services to promote messages on or to offer products or services to consumers located in the Union if, prior to the use of its services, the online platform has obtained the following information: […]

Instead of rewriting the underlying liability regime—with the harmful unintended consequences that would likely entail—the draft DSA creates parallel rules that require platforms to better protect victims.

Under the proposed rules, intermediaries would be required to obtain the true identity of commercial clients (as opposed to consumers) and to sever ties with businesses that refuse to comply (rather than just take down their content). Such obligations would be, in effect, a version of the “Know Your Customer” regulations that exist in other industries. Banks, for example, are required to conduct due diligence to ensure scofflaws can’t use legitimate financial services to further criminal enterprises. It seems reasonable to expect analogous due diligence from the Internet firms that power so much of today’s online economy.

Obligations requiring platforms to vet their commercial relationships may seem modest, but they’re likely to enable more effective law enforcement against the actual perpetrators of online harms without diminishing platforms’ innovation and the economic opportunity they provide (and that everyone agrees is worth preserving).

There is no silver bullet. Illegal activity will never disappear entirely from the online world, just as it has declined, but not vanished, from other walks of life. But small regulatory changes that offer marginal improvements can have a substantial effect. Modest informational requirements would weed out the most blatant crimes without overly burdening online intermediaries. In short, they would make the Internet a safer place for European citizens.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Christine S. Wilson (Commissioner of the U.S. Federal Trade Commission).[1] The views expressed here are the author’s and do not necessarily reflect those of the Federal Trade Commission or any other Commissioner.]  

I type these words while subject to a stay-at-home order issued by West Virginia Governor James C. Justice II. “To preserve public health and safety, and to ensure the healthcare system in West Virginia is capable of serving all citizens in need,” I am permitted to leave my home only for a limited and precisely enumerated set of reasons. Billions of citizens around the globe are now operating under similar shelter-in-place directives as governments grapple with how to stem the tide of infection, illness and death inflicted by the global Covid-19 pandemic. Indeed, the first response of many governments has been to impose severe limitations on physical movement to contain the spread of the novel coronavirus. The second response contemplated by many, and the one on which this blog post focuses, involves the extensive collection and analysis of data in connection with people’s movements and health. Some governments are using that data to conduct sophisticated contact tracing, while others are using the power of the state to enforce orders for quarantines and against gatherings.

The desire to use modern technology on a broad scale for the sake of public safety is not unique to this moment. Technology is intended to improve the quality of our lives, in part by enabling us to help ourselves and one another. For example, cell towers broadcast wireless emergency alerts to all mobile devices in the area to warn us of extreme weather and other threats to safety in our vicinity. One well-known type of broadcast is the Amber Alert, which enables community members to assist in recovering an abducted child by providing descriptions of the abductor, the abductee and the abductor’s vehicle. Citizens who spot individuals and vehicles that meet these descriptions can then provide leads to law enforcement authorities. A private nonprofit organization, the National Center for Missing and Exploited Children, coordinates with state and local public safety officials to send out Amber Alerts through privately owned wireless carriers.

The robust civil society and free market in the U.S. make partnerships between the private sector and government agencies commonplace. But some of these arrangements involve a much more extensive sharing of Americans’ personal information with law enforcement than the emergency alert system does.

For example, Amazon’s home security product Ring advertises itself not only as a way to see when a package has been left at your door, but also as a way to make communities safer by turning over video footage to local police departments. In 2018, the company’s pilot program in Newark, New Jersey, donated more than 500 devices to homeowners to install at their homes in two neighborhoods, with a big caveat. Ring recipients were encouraged to share video with police. According to Ring, home burglaries in those neighborhoods fell by more than 50% from April through July 2018 relative to the same time period a year earlier.

Yet members of Congress and privacy experts have raised concerns about these partnerships, which now number in the hundreds. After receiving Amazon’s response to his inquiry, Senator Edward Markey highlighted Ring’s failure to prevent police from sharing video footage with third parties and from keeping the video permanently, and Ring’s lack of precautions to ensure that users collect footage only of adults and of users’ own property. The House of Representatives Subcommittee on Economic and Consumer Policy continues to investigate Ring’s police partnerships and data policies. The Electronic Frontier Foundation has called Ring “a perfect storm of privacy threats,” while the UK surveillance camera commissioner has warned against “a very real power to understand, to surveil you in a way you’ve never been surveilled before.”

Ring demonstrates clearly that it is not new for potential breaches of privacy to be encouraged in the name of public safety; police departments urge citizens to use Ring and share the videos with police to fight crime. But emerging developments indicate that, in the fight against Covid-19, we can expect to see more and more private companies placed in the difficult position of becoming complicit in government overreach.

At least mobile phone users can opt out of receiving Amber Alerts, and residents can refuse to put Ring surveillance systems on their property. The Covid-19 pandemic has made some other technological intrusions effectively impossible to refuse. For example, online proctors who monitor students over webcams to ensure they do not cheat on exams taken at home were once something that students could choose to accept if they did not want to take an exam where and when they could be proctored face to face. With public schools and universities across the U.S. closed for the rest of the semester, students who refuse to give private online proctors access to their webcams – and, consequently, the ability to view their surroundings – cannot take exams at all.

Existing technology and data practices already have made the Federal Trade Commission sensitive to potential consumer privacy and data security abuses. For decades, this independent, bipartisan agency has been enforcing companies’ privacy policies through its authority to police unfair and deceptive trade practices. It brought its first privacy and data security cases nearly 20 years ago, while I was Chief of Staff to then-Chairman Timothy J. Muris. The FTC took on Eli Lilly for disclosing the e-mail addresses of 669 subscribers to its Prozac reminder service – many of whom were government officials, and at a time of greater stigma for mental health issues – and Microsoft for (among other things) falsely claiming that its Passport website sign-in service did not collect any personally identifiable information other than that described in its privacy policy.

The privacy and data security practices of healthcare and software companies are likely to impact billions of people during the current coronavirus pandemic. The U.S. already has many laws on the books that are relevant to practices in these areas. One notable example is the Health Insurance Portability and Accountability Act, which set national standards for the protection of individually identifiable health information by health plans, health care clearinghouses and health care providers who accept non-cash payments. While the FTC does not enforce HIPAA, it does enforce the Health Breach Notification Rule, as well as the provisions in the FTC Act used to challenge the privacy missteps of Eli Lilly and many other companies.

But technological developments have created gaps in HIPAA enforcement. For example, HIPAA applies to doctors’ offices, hospitals and insurance companies, but it may not apply to wearables, smartphone apps or websites. Yet sensitive medical information is now commonly stored in places other than health care practitioners’ offices.  Your phone and watch now collect information about your blood sugar, exercise habits, fertility and heart health. 

Observers have pointed to these emerging gaps in coverage as evidence of the growing need for federal privacy legislation. I, too, have called on the U.S. Congress to enact comprehensive federal privacy legislation – not only to address these emerging gaps, but for two other reasons.  First, consumers need clarity regarding the types of data collected from them, and how those data are used and shared. I believe consumers can make informed decisions about which goods and services to patronize when they have the information they need to evaluate the costs and benefits of using those goods. Second, businesses need predictability and certainty regarding the rules of the road, given the emerging patchwork of regimes both at home and abroad.

Rules of the road regarding privacy practices will prove particularly instructive during this global pandemic, as governments lean on the private sector for data on the grounds that the collection and analysis of data can help avert (or at least diminish to some extent) a public health catastrophe. With legal lines in place, companies would be better equipped to determine when they are being asked to cross the line for the public good, and whether they should require a subpoena or inform customers before turning over data. It is regrettable that Congress has been unable to enact federal privacy legislation to guide this discussion.

Understandably, Congress does not have privacy at the top of its agenda at the moment, as the U.S. faces a public health crisis. As I write, more than 579,000 Americans have been diagnosed with Covid-19, and more than 22,000 have perished. Sadly, those numbers will only increase. And the U.S. is not alone in confronting this crisis: governments globally have confronted more than 1.77 million cases and more than 111,000 deaths. For a short time, health and safety issues may take precedence over privacy protections. But some of the initiatives to combat the coronavirus pandemic are worrisome. We are learning more every day about how governments are responding in a rapidly developing situation; what I describe in the next section constitutes merely the tip of the iceberg. These initiatives are worth highlighting here, as are potential safeguards for privacy and civil liberties that societies around the world would be wise to embrace.

Some observers view public/private partnerships based on an extensive use of technology and data as key to fighting the spread of Covid-19. For example, Professor Jane Bambauer calls for contact tracing and alerts “to be done in an automated way with the help of mobile service providers’ geolocation data.” She argues that privacy is merely “an instrumental right” that “is meant to achieve certain social goals in fairness, safety and autonomy. It is not an end in itself.” Given the “more vital” interests in health and the liberty to leave one’s house, Bambauer sees “a moral imperative” for the private sector “to ignore even express lack of consent” by an individual to the sharing of information about him.

This proposition troubles me because the extensive data sharing that has been proposed in some countries, and that is already occurring in many others, is not mundane. In the name of advertising and product improvements, private companies have been hoovering up personal data for years. What this pandemic lays bare, though, is that while this trove of information was collected under the guise of cataloguing your coffee preferences and transportation habits, it can be reprocessed in an instant to restrict your movements, impinge on your freedom of association, and silence your freedom of speech. Bambauer is calling for detailed information about an individual’s every movement to be shared with the government when, in the United States under normal circumstances, a warrant would be required to access this information.

Indeed, with our mobile devices acting as the “invisible policeman” described by Justice William O. Douglas in Berger v. New York, we may face “a bald invasion of privacy, far worse than the general warrants prohibited by the Fourth Amendment.” Backward-looking searches and data hoards pose new questions of what constitutes a “reasonable” search. The stakes are high – both here and abroad, citizens are being asked to allow warrantless searches by the government on an astronomical scale, all in the name of public health.  

Abroad

The first country to confront the coronavirus was China. The World Health Organization has touted the measures taken by China as “the only measures that are currently proven to interrupt or minimize transmission chains in humans.” Among these measures are the “rigorous tracking and quarantine of close contacts,” as well as “the use of big data and artificial intelligence (AI) to strengthen contact tracing and the management of priority populations.” An ambassador for China has said his government “optimized the protocol of case discovery and management in multiple ways like backtracking the cell phone positioning.” Much as the Communist Party’s control over China enabled it to suppress early reports of a novel coronavirus, this regime vigorously ensured its people’s compliance with the “stark” containment measures described by the World Health Organization.

Before the Covid-19 pandemic, Hong Kong already had been testing the use of “smart wristbands” to track the movements of prisoners. The Special Administrative Region now monitors people quarantined inside their homes by requiring them to wear wristbands that send information to the quarantined individuals’ smartphones and alert the Department of Health and Police if people leave their homes, break their wristbands or disconnect them from their smartphones. When first announced in early February, the wristbands were required only for people who had been to Wuhan in the past 14 days, but the program rapidly expanded to encompass every person entering Hong Kong. The government denied any privacy concerns about the electronic wristbands, saying the Privacy Commissioner for Personal Data had been consulted about the technology and agreed it could be used to ensure that quarantined individuals remain at home.

Elsewhere in Asia, Taiwan’s Chunghwa Telecom has developed a system that the local CDC calls an “electronic fence.” Specifically, the government obtains the SIM card identifiers for the mobile devices of quarantined individuals and passes those identifiers to mobile network operators, which use phone signals to their cell towers to alert public health and law enforcement agencies when the phone of a quarantined individual leaves a certain geographic range. In response to privacy concerns, the National Communications Commission said the system was authorized by special laws to prevent the coronavirus, and that it “does not violate personal data or privacy protection.” In Singapore, travelers and others issued Stay-Home Notices to remain in their residency 24 hours a day for 14 days must respond within an hour if contacted by government agencies by phone, text message or WhatsApp. And to assist with contact tracing, the government has encouraged everyone in the country to download TraceTogether, an app that uses Bluetooth to identify other nearby phones with the app and tracks when phones are in close proximity.
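
A minimal sketch can illustrate the kind of logic an “electronic fence” of this sort implies, assuming the mobile operator can map each quarantined SIM to the cell tower currently serving it. The SIM identifiers, tower IDs, and alerting hook below are hypothetical placeholders, not details of Taiwan’s actual system:

```python
# Hypothetical sketch of an "electronic fence" check run by a mobile operator.
# SIM identifiers, tower IDs, and the alerting hook are illustrative only.

# Towers whose coverage overlaps each quarantined person's registered address.
ALLOWED_TOWERS = {
    "SIM-0001": {"tower-17", "tower-18"},
    "SIM-0002": {"tower-42"},
}

def within_allowed_area(sim_id: str, serving_tower: str) -> bool:
    """Return True if the phone appears to be within its allowed area."""
    allowed = ALLOWED_TOWERS.get(sim_id)
    if allowed is None:
        return True  # not under quarantine; nothing to check
    return serving_tower in allowed

def on_location_update(sim_id: str, serving_tower: str) -> None:
    if not within_allowed_area(sim_id, serving_tower):
        # In the system described above, this would notify public health and
        # law-enforcement agencies; here it simply prints a message.
        print(f"ALERT: {sim_id} observed outside allowed area ({serving_tower})")

on_location_update("SIM-0001", "tower-99")  # triggers an alert
on_location_update("SIM-0002", "tower-42")  # within allowed area; no alert
```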

Israel’s Ministry of Health has launched an app for mobile devices called HaMagen (the shield) to prevent the spread of coronavirus by identifying contacts between diagnosed patients and people who came into contact with them in the 14 days prior to diagnosis. In March, the prime minister’s cabinet initially bypassed the legislative body to approve emergency regulations for obtaining without a warrant the cellphone location data and additional personal information of those diagnosed with or suspected of coronavirus infection. The government will send text messages to people who came into contact with potentially infected individuals, and will monitor the potentially infected person’s compliance with quarantine. The Ministry of Health will not hold this information; instead, it can make data requests to the police and Shin Bet, the Israel Security Agency. The police will enforce quarantine measures and Shin Bet will track down those who came into contact with the potentially infected.

Multiple Eastern European nations with constitutional protections for citizens’ rights of movement and privacy have superseded them by declaring a state of emergency. For example, in Hungary the declaration of a “state of danger” has enabled Prime Minister Viktor Orbán’s government to engage in “extraordinary emergency measures” without parliamentary consent.  His ministers have cited the possibility that coronavirus will prevent a gathering of a sufficient quorum of members of Parliament as making it necessary for the government to be able to act in the absence of legislative approval.

Member States of the European Union must protect personal data pursuant to the General Data Protection Regulation, and communications data, such as mobile location, pursuant to the ePrivacy Directive. The chair of the European Data Protection Board has observed that the ePrivacy Directive enables Member States to introduce legislative measures to safeguard public security. But if those measures allow for the processing of non-anonymized location data from mobile devices, individuals must have safeguards such as a right to a judicial remedy. “Invasive measures, such as the ‘tracking’ of individuals (i.e. processing of historical non-anonymized location data) could be considered proportional under exceptional circumstances and depending on the concrete modalities of the processing.” The EDPB has announced it will prioritize guidance on these issues.

EU Member States are already implementing such public security measures. For example, the government of Poland has by statute required everyone under a quarantine order due to suspected infection to download the “Home Quarantine” smartphone app. Those who do not install and use the app are subject to a fine. The app verifies users’ compliance with quarantine through selfies and GPS data. Users’ personal data will be administered by the Minister of Digitization, who has appointed a data protection officer. Each user’s identification, name, telephone number, quarantine location and quarantine end date can be shared with police and other government agencies. After two weeks, if the user does not report symptoms of Covid-19, the account will be deactivated — but the data will be stored for six years. The Ministry of Digitization claims that it must store the data for six years in case users pursue claims against the government. However, local privacy expert and Panoptykon Foundation cofounder Katarzyna Szymielewicz has questioned this rationale.

Even other countries that are part of the Anglo-American legal tradition are ramping up their use of data and working with the private sector to do so. The UK’s National Health Service is developing a data store that will include online/call center data from NHS Digital and Covid-19 test result data from the public health agency. While the NHS is working with private partner organizations and companies including Microsoft, Palantir Technologies, Amazon Web Services and Google, it has promised to keep all the data under its control, and to require those partners to destroy or return the data “once the public health emergency situation has ended.” The NHS also has committed to meet the requirements of data protection legislation by ensuring that individuals cannot be re-identified from the data in the data store.

Notably, each of the companies partnering with the NHS at one time or another has been subjected to scrutiny for its privacy practices. Some observers have noted that tech companies, which have been roundly criticized for a variety of reasons in recent years, may seek to use this pandemic for “reputation laundering.” As one observer cautioned: “Reputations matter, and there’s no reason the government or citizens should cast bad reputations aside when choosing who to work with or what to share” during this public health crisis.

At Home

In the U.S., the federal government last enforced large-scale isolation and quarantine measures during the influenza (“Spanish Flu”) pandemic a century ago. But the Centers for Disease Control and Prevention track diseases on a daily basis by receiving case notifications from every state. The states mandate that healthcare providers and laboratories report certain diseases to the local public health authorities using personal identifiers. In other words, if you test positive for coronavirus, the government will know. Every state has laws authorizing quarantine and isolation, usually through the state’s health authority, while the CDC has authority through the federal Public Health Service Act and a series of presidential executive orders to exercise quarantine and isolation powers for specific diseases, including severe acute respiratory syndromes (a category into which the novel coronavirus falls).

Now local governments are issuing orders that empower law enforcement to fine and jail Americans for failing to practice social distancing. State and local governments have begun arresting and charging people who violate orders against congregating in groups. Rhode Island is requiring every non-resident who enters the state to be quarantined for two weeks, with police checks at the state’s transportation hubs and borders.

How governments discover violations of quarantine and social distancing orders will raise privacy concerns. Police have long been able to enforce based on direct observation of violations. But if law enforcement authorities identify violations of such orders based on data collection rather than direct observation, the Fourth Amendment may be implicated. In Jones and Carpenter, the Supreme Court has limited the warrantless tracking of Americans through GPS devices placed on their cars and through cellphone data. But building on the longstanding practice of contact tracing in fighting infectious diseases such as tuberculosis, GPS data has proven helpful in fighting the spread of Covid-19. This same data, though, also could be used to piece together evidence of violations of stay-at-home orders. As Chief Justice John Roberts wrote in Carpenter, “With access to [cell-site location information], the government can now travel back in time to retrace a person’s whereabouts… Whoever the suspect turns out to be, he has effectively been tailed every moment of every day for five years.”

The Fourth Amendment protects American citizens from government action, but the “reasonable expectation of privacy” test applied in Fourth Amendment cases connects the arenas of government action and commercial data collection. As Professor Paul Ohm of the Georgetown University Law Center notes, “the dramatic expansion of technologically-fueled corporate surveillance of our private lives automatically expands police surveillance too, thanks to the way the Supreme Court has construed the reasonable expectation of privacy test and the third-party doctrine.”

For example, the COVID-19 Mobility Data Network – infectious disease epidemiologists working with Facebook, Camber Systems and Cubiq – uses mobile device data to inform state and local governments about whether social distancing orders are effective. The tech companies give the researchers aggregated data sets; the researchers give daily situation reports to departments of health, but say they do not share the underlying data sets with governments. The researchers have justified this model based on users of the private companies’ apps having consented to the collection and sharing of data.

However, the assumption that consumers have given informed consent to the collection of their data (particularly for the purpose of monitoring their compliance with social isolation measures during a pandemic) is undermined by studies showing the average consumer does not understand all the different types of data that are collected and how their information is analyzed and shared with third parties – including governments. Technology and telecommunications companies have neither asked me to opt into tracking for public health nor made clear how they are partnering with federal, state and local governments. This practice highlights that data will be divulged in ways consumers cannot imagine – because no one assumed a pandemic when agreeing to a company’s privacy policy. This information asymmetry is part of why we need federal privacy legislation.

On Friday afternoon, Apple and Google announced their opt-in Covid-19 contact tracing technology. The owners of the two most common mobile phone operating systems in the U.S. said that in May they would release application programming interfaces that enable interoperability between iOS and Android devices using official contact tracing apps from public health authorities. At an unspecified date, Bluetooth-based contact tracing will be built directly into the operating systems. “Privacy, transparency, and consent are of utmost importance in this effort,” the companies said in their press release.  

At this early stage, we do not yet know exactly how the proposed Google/Apple contact tracing system will operate. It sounds similar to Singapore’s TraceTogether, which is already available in the iOS and Android mobile app stores (it has a 3.3 out of 5 average rating in the former and a 4.0 out of 5 in the latter). TraceTogether is also described as a voluntary, Bluetooth-based system that avoids GPS location data, does not upload information without the user’s consent, and uses changing, encrypted identifiers to maintain user anonymity. Perhaps the most striking difference, at least to a non-technical observer, is that TraceTogether was developed and is run by the Singaporean government, which has been a point of concern for some observers. The U.S. version – like finding abducted children through Amber Alerts and fighting crime via Amazon Ring – will be a partnership between the public and private sectors.     
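
To illustrate the privacy-preserving design these systems describe, here is a minimal sketch of Bluetooth-style contact logging with rotating pseudonymous identifiers. It is not the actual Apple/Google or TraceTogether protocol; the key handling, broadcast intervals, and exposure check are simplified assumptions meant only to show the general idea:

```python
# Simplified illustration of rotating-identifier contact tracing. This is NOT
# the Apple/Google or TraceTogether protocol; it only sketches the concept:
# broadcast short-lived pseudonyms derived from a secret key, log the
# pseudonyms you hear, and check them later against voluntarily revealed keys.

import hmac
import hashlib
import os

def rolling_id(daily_key: bytes, interval: int) -> bytes:
    """Derive a short-lived identifier for a given time interval."""
    return hmac.new(daily_key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

# Each phone keeps its own secret daily key and broadcasts only derived IDs.
alice_key = os.urandom(32)
bob_contact_log = set()

# Alice and Bob are near each other during intervals 100-102; Bob's phone
# records only the identifiers it hears, never Alice's key or identity.
for interval in (100, 101, 102):
    bob_contact_log.add(rolling_id(alice_key, interval))

# If Alice later tests positive and consents, she uploads her daily key.
# Bob's phone re-derives her identifiers locally to check for exposure.
def exposed(contact_log: set, revealed_key: bytes, intervals: range) -> bool:
    return any(rolling_id(revealed_key, i) in contact_log for i in intervals)

print(exposed(bob_contact_log, alice_key, range(0, 144)))  # True: contact occurred
```

No GPS coordinates appear anywhere in this exchange, and nothing leaves Bob’s phone unless he chooses to act on a local match—the design property the companies’ announcement emphasizes.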

Recommendations

The global pandemic we now face is driving data usage in ways not contemplated by consumers. Entities in the private and public sector are confronting new and complex choices about data collection, usage and sharing. Organizations with Chief Privacy Officers, Chief Information Security Officers, and other personnel tasked with managing privacy programs are, relatively speaking, well-equipped to address these issues. Despite the extraordinary circumstances, senior management should continue to rely on the expertise and sound counsel of their CPOs and CISOs, who should continue to make decisions based on their established privacy and data security programs. Although developments are unfolding at warp speed, it is important – arguably now, more than ever – to be intentional about privacy decisions.

For organizations that lack experience with privacy and data security programs (and individuals tasked with oversight for these areas), now is a great time to pause, do some research and exercise care. It is essential to think about the longer-term ramifications of choices made about data collection, use and sharing during the pandemic. The FTC offers easily accessible resources, including Protecting Personal Information: A Guide for Business, Start with Security: A Guide for Business, and Stick with Security: A Business Blog Series. While the Gramm-Leach-Bliley Act (GLB) applies only to financial institutions, the FTC’s GLB compliance blog outlines some data security best practices that apply more broadly. The National Institute for Standards and Technology (NIST) also offers security and privacy resources, including a privacy framework to help organizations identify and manage privacy risks. Private organizations such as the Center for Information Policy Leadership, the International Association of Privacy Professionals and the App Association also offer helpful resources, as do trade associations. While it may seem like a suboptimal time to take a step back and focus on these strategic issues, remember that privacy and data security missteps can cause irrevocable harm. Counterintuitively, now is actually the best time to be intentional about choices in these areas.

Best practices like accountability, risk assessment and risk management will be key to navigating today’s challenges. Companies should take the time to assess and document the new and/or expanded risks from the data collection, use and sharing of personal information. It is appropriate for these risk assessments to incorporate potential benefits and harms not only to the individual and the company, but for society as a whole. Upfront assessments can help companies establish controls and incentives to facilitate responsible behavior, as well as help organizations demonstrate that they are fully aware of the impact of their choices (risk assessment) and in control of their impact on people and programs (risk mitigation). Written assessments can also facilitate transparency with stakeholders, raise awareness internally about policy choices and assist companies with ongoing monitoring and enforcement. Moreover, these assessments will facilitate a return to “normal” data practices when the crisis has passed.  

In a similar vein, companies must engage in comprehensive vendor management with respect to the entities that are proposing to use and analyze their data. In addition to vetting proposed data recipients thoroughly, companies must be selective concerning the categories of information shared. The benefits of the proposed research must be balanced against individual protections, and companies should share only those data necessary to achieve the stated goals. To the extent feasible, data should be shared in de-identified and aggregated formats and data recipients should be subject to contractual obligations prohibiting them from re-identification. Moreover, companies must have policies in place to ensure compliance with research contracts, including data deletion obligations and prohibitions on data re-identification, where appropriate. Finally, companies must implement mechanisms to monitor third party compliance with contractual obligations.

Similar principles of necessity and proportionality should guide governments as they make demands or requests for information from the private sector. Governments must recognize the weight with which they speak during this crisis and carefully balance data collection and usage with civil liberties. In addition, governments also have special obligations to ensure that any data collection done by them or at their behest is driven by the science of Covid-19; to be transparent with citizens about the use of data; and to provide due process for those who wish to challenge limitations on their rights. Finally, government actors should apply good data hygiene, including regularly reassessing the breadth of their data collection initiatives and incorporating data retention and deletion policies. 

In theory, government’s role could be reduced as market-driven responses emerge. For example, assuming the existence of universally accessible daily coronavirus testing with accurate results even during the incubation period, Hal Singer’s proposal for self-certification of non-infection among private actors is intriguing. Thom Lambert identified the inability to know who is infected as a “lemon problem”; Singer seeks a way for strangers to verify each other’s “quality” in the form of non-infection.

Whatever solutions we may accept in a pandemic, it is imperative to monitor the coronavirus situation as it improves, to know when to lift the more dire measures. Former Food and Drug Administration Commissioner Scott Gottlieb and other observers have called for maintaining surveillance because of concerns about a resurgence of the virus later this year. For any measures that conflict with Americans’ constitutional rights to privacy and freedom of movement, there should be metrics set in advance for the conditions that will indicate when such measures are no longer justified. In the absence of pre-determined metrics, governments may feel the same temptation as Hungary’s prime minister to keep renewing a “state of danger” that overrides citizens’ rights. As Slovak lawmaker Tomas Valasek has said, “It doesn’t just take the despots and the illiberals of this world, like Orbán, to wreak damage.” But privacy is not merely instrumental to other interests, and we do not have to sacrifice our right to it indefinitely in exchange for safety.

I recognize that halting the spread of the virus will require extensive and sustained effort, and I credit many governments with good intentions in attempting to save the lives of their citizens. But I refuse to accept that we must sacrifice privacy to reopen the economy. It seems a false choice to say that I must sacrifice my Constitutional rights to privacy, freedom of association and free exercise of religion for another’s freedom of movement. Society should demand that equity, fairness and autonomy be respected in data uses, even in a pandemic. To quote Valasek again: “We need to make sure that we don’t go a single inch further than absolutely necessary in curtailing civil liberties in the name of fighting for public health.” History has taught us repeatedly that sweeping security powers granted to governments during an emergency persist long after the crisis has abated. To resist the gathering momentum toward this outcome, I will continue to emphasize the FTC’s learning on appropriate data collection and use. But my remit as an FTC Commissioner is even broader – when I was sworn in on Sept. 26, 2018, I took an oath to “support and defend the Constitution of the United States” – and so I shall.


[1] Many thanks to my Attorney Advisors Pallavi Guniganti and Nina Frant for their invaluable assistance in preparing this article.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Jane Bambauer (Professor of Law, University of Arizona James E. Rogers College of Law).]

The importance of testing and contact tracing to slow the spread of the novel coronavirus and resume normal life is now well established. The difference between the communities that do it and the ones that don’t is disturbingly grim (see, e.g., South Korea versus Italy). In a large population like the U.S., contact tracing and alerts will have to be done in an automated way with the help of mobile service providers’ geolocation data. The intensive use of data in South Korea has led many commenters to claim that the strategy that’s been so effective there cannot be replicated in western countries with strong privacy laws.

Descriptively, it’s probably true that privacy law and instincts in the US and EU will hinder virus surveillance.

The European Commission’s recent guidance on GDPR’s application to the COVID-19 crisis left a hurdle for member states. EU countries would have to introduce new legislation in order to use telecommunications data to do contact tracing, and the legislation would be reviewable by the European Court of Human Rights. No member states have done this, even though nearly all of them have instituted lock-down measures. 

Even Germany, which has announced the rollout of a cellphone tracking and alert app, has decided to make use of the app voluntary. This system will only be effective if enough people opt into it. (One study suggests the minimum participation rate would have to be “near universal,” so this does not bode well.)
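The pessimism about voluntary adoption has a simple arithmetic core: a contact is only captured if both parties to it are running the app, so coverage falls roughly with the square of the adoption rate. The figures below are a hypothetical illustration of that point, not numbers from the cited study.

```python
# If a fraction p of the population installs the app and contacts occur at random,
# the share of contact events where BOTH parties are traceable is roughly p**2.
for p in (0.9, 0.7, 0.5, 0.3):
    print(f"adoption {p:.0%} -> roughly {p**2:.0%} of contacts captured")
# adoption 90% -> roughly 81% of contacts captured
# adoption 70% -> roughly 49% of contacts captured
# adoption 50% -> roughly 25% of contacts captured
# adoption 30% -> roughly 9% of contacts captured
```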

And in the U.S., privacy advocacy groups like EPIC are already gearing up to challenge the collection of cellphone data by federal and state governments based on recent Fourth Amendment precedent finding that individuals have a reasonable expectation of privacy in cell phone location data.

And nearly every opinion piece I read from public health experts promoting contact tracing ends with some obligatory handwringing about the privacy and ethical implications. Research universities and units of government that are comfortable advocating for draconian measures of social distancing and isolation find it necessary to stall and consult their IRBs and privacy officers before pursuing options that involve data surveillance.

While ethicists and privacy scholars certainly have something to teach regulators during a pandemic, the coronavirus has something to teach us in return. It has thrown harsh light on the drawbacks and absurdities of rigid individual control over personal data.

Objections to surveillance lose their moral and logical bearings when the alternatives are out-of-control disease or mass lockdowns. Compared to those, mass surveillance is the most liberty-preserving option. Thus, instead of reflexively trotting out privacy and ethics arguments, we should take the opportunity to understand the order of operations—to know which rights and liberties are more vital than privacy so that we know when and why expectations of privacy need to bend. All but the most privacy-sensitive would count health and the liberty to leave one’s house among the most basic human interests, so the COVID-19 lockdowns are testing some of the practices and assumptions that are baked into our privacy laws.

At the highest level of abstraction, the pandemic should remind us that privacy is, ultimately, an instrumental right. It is meant to achieve certain social goals in fairness, safety, and autonomy. It is not an end in itself.  

When privacy is cloaked in the language of fundamental human rights, its instrumental function is obscured. As with other liberties in movement and commerce, conceiving of privacy as something under each individual’s control is a useful rule of thumb when it doesn’t conflict too much with other people’s interests. But the COVID-19 crisis shows that there are circumstances under which privacy as an individual right frustrates the very values in fairness, autonomy, and physical security that it is supposed to support. Privacy authorities and experts at every level need to be as clear and blunt as the experts supporting mass lockdowns: the government can do this, it will have to rely on industry, and we will work through the fallout and secondary problems when people stop dying.

At a minimum, epidemiologists and cellphone service providers should be able to rely on implied consent to data-sharing, just as the tort system allows doctors to presume consent for emergency surgery when a patient’s wishes cannot be observed in time. Geoffrey Manne suggested this in an earlier TOTM post about the allocation of information and medical resources:

But an individual’s idiosyncratic desire to constrain the sharing of personal data in this context seems manifestly less important than the benefits of, at the very least, a default rule that the relevant data be shared for these purposes.

Indeed, we should go further than this. There is a moral imperative to override even an express lack of consent when withholding important information would put others in danger. Many states affirmatively require doctors, therapists, teachers, and other fiduciaries to report certain risks even at the expense of their clients’ and wards’ privacy (e.g., New York’s requirement that doctors notify their patient’s partners about a positive HIV test if the patient fails to do so); the same logic applies at scale to the collection and analysis of data during a pandemic.

Another reason consent is inappropriate at this time is that it mars quantitative studies with selection bias. Medical reporting on the transmission and mortality of COVID-19 has had to rely much too heavily on data coming out of the Diamond Princess cruise ship because for a long time it was the only random sample—the only time that everybody was screened. 

The United States has done a particularly poor job tracking the spread of the virus because, faced with a shortage of tests, the CDC compounded the problem by denying those tests to anybody who didn’t meet specific criteria (a set of symptoms plus either recent travel or known exposure to a confirmed case). These criteria all but guaranteed that our data would suggest coughs and fevers are necessary conditions for coronavirus, and they delayed our recognition of community spread. If we are able to do antibody testing in the near future to understand who has had the virus in the past, that data will be most useful if it comes from broad swaths of people who have not self-selected into a testing facility.
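A toy simulation makes the selection-bias point concrete. The infection and symptom rates below are invented solely for illustration; the mechanism, not the numbers, is what matters: if testing is conditioned on symptoms, the resulting case data cannot tell us how common asymptomatic infection is.

```python
import random

random.seed(0)

# Hypothetical infected population: assume 40% of infections are asymptomatic.
infected = [{"symptomatic": random.random() < 0.6} for _ in range(100_000)]
true_symptomatic_share = sum(p["symptomatic"] for p in infected) / len(infected)

# Testing criterion that requires symptoms: only symptomatic people are tested,
# so every confirmed case has symptoms, by construction.
confirmed = [p for p in infected if p["symptomatic"]]
observed_share = sum(p["symptomatic"] for p in confirmed) / len(confirmed)

print(f"True share of infections with symptoms:     {true_symptomatic_share:.0%}")  # ~60%
print(f"Share with symptoms among confirmed cases:  {observed_share:.0%}")          # 100%
```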

If consent is not an appropriate concept for privacy during a pandemic, might there be a defect in its theory even outside of crisis time? I have argued in the past that privacy should be understood as a collective interest in risk management, like negligence law, rather than a property-style right. The public health response to COVID-19 helps illustrate why this is so. The right to privacy is different from other liberties because it directly conflicts with another fundamental right: namely, the right to access information and knowledge. One person’s objection to contact tracing (or any other collection and distribution of data) necessarily conflicts with another’s interest in knowing who was in that person’s proximity during a critical period.

This puts privacy on very different footing from other rights, like the right to free movement. Generally, my right to travel in public space does not have to interfere with other people’s rights. It may interfere if, for example, I drive on the wrong side of the street, but the conflict is not inevitable. With a few restrictions and rules of coordination, there is ample opportunity for people to enjoy public spaces the way they want without forcing policymakers to decide between competing uses. Thus, when we suspend the right to free movement in unusual times like today, when one person’s movement in public space does cause significant detriment to others, we can have confidence that the liberty can be restored when the threat has subsided.

Privacy, by contrast, is inevitably at odds with a demonstrable desire by another person or firm to access information that they find valuable. Perhaps this is the reason that ethicists and regulators find it difficult to overcome privacy objections: when public health experts insist that privacy is conflicting with valuable information flows, a privacy advocate can say “yes, exactly.”

We can improve on the theoretical underpinnings of privacy law by embracing the fact that privacy is instrumental—a means (sometimes an effective one) to achieve other ends. If we are trying to achieve certain goals through its use—goals in equity, fairness, and autonomy—we should increase our effort to understand what types of uses of data implicate those outcomes. Fortunately, that work is already advancing at a fast clip in debates about socially responsible AI. The next step would be to assess whether individual control tends to support the good uses and reduce the bad uses. If our policies can ensure that machine learning applications are sufficiently “fair,” and if we can agree on what fairness entails, lawmakers can begin the fruitful and necessary work of shifting privacy law away from prohibitions on data collection and sharing and toward limits on its use in the areas where individual control is counter-productive.


GDPR is officially one year old. How have the first 12 months gone? As you can see from the mix of data and anecdotes below, it appears that compliance costs have been astronomical; individual “data rights” have led to unintended consequences; “privacy protection” seems to have undermined market competition; and there have been large unseen — but not unmeasurable! — costs in forgone startup investment. So, all in all, about what we expected.

GDPR cases and fines

Here is the latest data on cases and fines released by the European Data Protection Board:

  • €55,955,871 in fines
    • €50 million of which was a single fine on Google
  • 281,088 total cases
    • 144,376 complaints
    • 89,271 data breach notifications
    • 47,441 other
  • 37.0% ongoing
  • 62.9% closed
  • 0.1% appealed

Unintended consequences of new data privacy rights

GDPR can be thought of as a privacy “bill of rights.” Many of these new rights have come with unintended consequences. If your account gets hacked, the hacker can use the right of access to get all of your data. The right to be forgotten is in conflict with the public’s right to know a bad actor’s history (and many bad actors are using the right to memory-hole their misdeeds). The right to data portability creates another attack vector for hackers to exploit. And the right to opt out of data collection creates a free-rider problem in which users who opt in subsidize the privacy of those who opt out.

Article 15: Right of access

  • “Amazon sent 1,700 Alexa voice recordings to the wrong user following data request” [The Verge / Nick Statt]
  • “Today I discovered an unfortunate consequence of GDPR: once someone hacks into your account, they can request—and potentially access—all of your data. Whoever hacked into my Spotify account got all of my streaming, song, etc. history simply by requesting it.” [Jean Yang]

Article 17: Right to be forgotten

  • “Since 2016, newspapers in Belgium and Italy have removed articles from their archives under [GDPR]. Google was also ordered last year to stop listing some search results, including information from 2014 about a Dutch doctor who The Guardian reported was suspended for poor care of a patient.” [NYT / Adam Satariano]
  • “French scam artist Michael Francois Bujaldon is using the GDPR to attempt to remove traces of his United States District Court case from the internet. He has already succeeded in compelling PacerMonitor to remove his case.” [PlainSite]
  • “In the last 5 days, we’ve had requests under GDPR to delete three separate articles … all about US lawsuits concerning scams committed by Europeans. That ‘right to be forgotten’ is working out just great, huh guys?” [Mike Masnick]

Article 20: Right to data portability

  • Data portability increases the attack surface for bad actors to exploit. In a sense, the Cambridge Analytica scandal was a case of too much data portability.
  • “The problem with data portability is that it goes both ways: if you can take your data out of Facebook to other applications, you can do the same thing in the other direction. The question, then, is which entity is likely to have the greater center of gravity with regards to data: Facebook, with its social network, or practically anything else?” [Stratechery / Ben Thompson]
  • “Presumably data portability would be imposed on Facebook’s competitors and potential competitors as well.  That would mean all future competing firms would have to slot their products into a Facebook-compatible template.  Let’s say that 17 years from now someone has a virtual reality social network innovation: does it have to be “exportable” into Facebook and other competitors?  It’s hard to think of any better way to stifle innovation.” [Marginal Revolution / Tyler Cowen]

Article 21: Right to opt out of data processing

  • “[B]y restricting companies from limiting services or increasing prices for consumers who opt-out of sharing personal data, these frameworks enable free riders—individuals that opt out but still expect the same services and price—and undercut access to free content and services.” [ITIF / Alan McQuinn and Daniel Castro]

Compliance costs are astronomical

  • Prior to GDPR going into effect, “PwC surveyed 200 companies with more than 500 employees and found that 68% planned on spending between $1 and $10 million to meet the regulation’s requirements. Another 9% planned to spend more than $10 million. With over 19,000 U.S. firms of this size, total GDPR compliance costs for this group could reach $150 billion.” [Fortune / Daniel Castro and Michael McLaughlin] (A rough reconstruction of this arithmetic appears after this list.)
  • “[T]he International Association of Privacy Professionals (IAPP) estimates 500,000 European organizations have registered data protection officers (DPOs) within the first year of the General Data Protection Regulation (GDPR). According to a recent IAPP salary survey, the average DPO’s salary in Europe is $88,000.” [IAPP]
  • As of March 20, 2019, 1,129 US news sites are still unavailable in the EU due to GDPR. [Joseph O’Connor]
  • Microsoft had 1,600 engineers working on GDPR compliance. [Microsoft]
  • During a Senate hearing, Keith Enright, Google’s chief privacy officer, estimated that the company spent “hundreds of years of human time” to comply with the new privacy rules. [Quartz / Ashley Rodriguez]
    • However, French authorities ultimately decided Google’s compliance efforts were insufficient: “France fines Google nearly $57 million for first major violation of new European privacy regime” [Washington Post / Tony Romm]
  • “About 220,000 name tags will be removed in Vienna by the end of [2018], the city’s housing authority said. Officials fear that they could otherwise be fined up to $23 million, or about $1,150 per name.” [Washington Post / Rick Noack]
    • UPDATE: Wolfie Christl pointed out on Twitter that the order to remove name tags was rescinded, after only 11,000 had been removed, due to public backlash and what Housing Councilor Kathrin Gaal said were “different legal opinions on the subject.”
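As a back-of-the-envelope check on the PwC-based estimate in the first bullet above: the source does not spell out its method, but a figure in the neighborhood of $150 billion falls out if most firms are assumed to spend near the top of their reported ranges. The per-firm amounts below are assumptions for illustration only.

```python
n_firms = 19_000        # U.S. firms with more than 500 employees (per the quote)
share_1_to_10m = 0.68   # share planning to spend $1M-$10M on compliance
share_over_10m = 0.09   # share planning to spend more than $10M

# Upper-bound reading: firms in both brackets spend about $10M each (assumption).
upper_bound = n_firms * (share_1_to_10m + share_over_10m) * 10e6

# More conservative reading: midpoint of $5.5M for the first bracket (assumption).
midpoint = n_firms * (share_1_to_10m * 5.5e6 + share_over_10m * 10e6)

print(f"Upper-bound estimate: ${upper_bound / 1e9:.0f} billion")  # ~$146 billion
print(f"Midpoint estimate:    ${midpoint / 1e9:.0f} billion")     # ~$88 billion
```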

Tradeoff between privacy regulations and market competition

“On the big guys increasing market share? I don’t believe [the law] will have such a consequence.” Věra Jourová, the European Commissioner for Justice, Consumers and Gender Equality [WSJ / Sam Schechner and Nick Kostov]

“Mentioned GDPR to the head of a European media company. ‘Gift to Google and Facebook, enormous regulatory own-goal.'” [Benedict Evans]

  • “Hundreds of companies compete to place ads on webpages or collect data on their users, led by Google, Facebook and their subsidiaries. The European Union’s General Data Protection Regulation, which took effect in May, imposes stiff requirements on such firms and the websites who use them. After the rule took effect in May, Google’s tracking software appeared on slightly more websites, Facebook’s on 7% fewer, while the smallest companies suffered a 32% drop, according to Ghostery, which develops privacy-enhancing web technology.” [WSJ / Greg Ip]
  • Havas SA, one of the world’s largest buyers of ads, says it observed a low double-digit percentage increase in advertisers’ spending through DBM on Google’s own ad exchange on the first day the law went into effect, according to Hossein Houssaini, Havas’s global head of programmatic solutions. On the selling side, companies that help publishers sell ad inventory have seen declines in bids coming through their platforms from Google. Paris-based Smart says it has seen a roughly 50% drop. [WSJ / Nick Kostov and Sam Schechner]
  • “The consequence was that just hours after the law’s enforcement, numerous independent ad exchanges and other vendors watched their ad demand volumes drop between 20 and 40 percent. But with agencies free to still buy demand on Google’s marketplace, demand on AdX spiked. The fact that Google’s compliance strategy has ended up hurting its competitors and redirecting higher demand back to its own marketplace, where it can guarantee it has user consent, has unsettled publishers and ad tech vendors.” [Digiday / Jessica Davies]

Unseen costs of forgone investment & research

  • Startups: One study estimated that venture capital invested in EU startups fell by as much as 50 percent due to GDPR implementation: “Specifically, our findings suggest a $3.38 million decrease in the aggregate dollars raised by EU ventures per state per crude industry category per week, a 17.6% reduction in the number of weekly venture deals, and a 39.6% decrease in the amount raised in an average deal following the rollout of GDPR … We use our results to provide a back-of-the-envelope calculation of a range of job losses that may be incurred by these ventures, which we estimate to be between 3,604 to 29,819 jobs.” [NBER / Jian Jia, Ginger Zhe Jin, and Liad Wagman]
  • Mergers and acquisitions: “55% of respondents said they had worked on deals that fell apart because of concerns about a target company’s data protection policies and compliance with GDPR” [WSJ / Nina Trentmann]
  • Scientific research: “[B]iomedical researchers fear that the EU’s new General Data Protection Regulation (GDPR) will make it harder to share information across borders or outside their original research context.” [Politico / Sarah Wheaton]

GDPR graveyard

Small and medium-sized businesses (SMBs) have left the EU market in droves (or shut down entirely). Here is a partial list:

Blockchain & P2P Services

  • CoinTouch, peer-to-peer cryptocurrency exchange
  • FamilyTreeDNA, free and public genetic tools
    • Mitosearch
    • Ysearch
  • Monal, XMPP chat app
  • Parity, know-your-customer service for initial coin offerings (ICOs)
  • Seznam, social network for students
  • StreetLend, tool sharing platform for neighbors

Marketing

  • Drawbridge, cross-device identity service
  • Klout, social reputation service by Lithium
  • Unroll.me, inbox management app
  • Verve, mobile programmatic advertising

Video Games

Other

The German Bundeskartellamt’s Facebook decision is unsound from either a competition or privacy policy perspective, and will only make the fraught privacy/antitrust relationship worse.
