
The Democratic leadership of the House Judiciary Committee has leaked the approach it plans to take to revise U.S. antitrust law and enforcement, with a particular focus on digital platforms.

Broadly speaking, the bills would: raise fees for larger mergers and increase appropriations to the FTC and DOJ; require data portability and interoperability; declare that large platforms can’t own businesses that compete with other businesses that use the platform; effectively ban large platforms from making any acquisitions; and generally declare that large platforms cannot preference their own products or services. 

All of these are ideas that have been discussed before. They are very much in line with the EU’s approach to competition, which places more regulation-like burdens on big businesses, and which is introducing a Digital Markets Act that mirrors the Democrats’ proposals. Some Republicans are reportedly supportive of the proposals, which is surprising since they mean giving broad, discretionary powers to antitrust authorities that are controlled by Democrats who take an expansive view of antitrust enforcement as a way to achieve their other social and political goals. The proposals may also be unpopular with consumers if, for example, they mean that popular features like integrating Maps into relevant Google Search results become prohibited.

The multi-bill approach here suggests that the committee is trying to throw as much at the wall as possible to see what sticks. It may reflect a lack of confidence among the proposers in their ability to get their proposals through wholesale, especially given that Amy Klobuchar’s CALERA bill in the Senate creates an alternative that, while still highly interventionist, does not create ex ante regulation of the Internet the same way these proposals do.

In general, the bills are misguided for three main reasons. 

One, they seek to make digital platforms into narrow conduits for other firms to operate on, ignoring the value created by platforms curating their own services by, for example, creating quality controls on entry (as Apple does on its App Store) or by integrating their services with related products (like, say, Google adding events from Gmail to users’ Google Calendars). 

Two, they ignore the procompetitive effects of digital platforms extending into each other’s markets and competing with each other there, in ways that often lead to far more intense competition—and better outcomes for consumers—than if the only firms that could compete with the incumbent platform were small startups.

Three, they ignore the importance of incentives for innovation. Platforms invest in new and better products when they can make money from doing so, and limiting their ability to do that means weakened incentives to innovate. Startups and their founders and investors are driven, in part, by the prospect of being acquired, often by the platforms themselves. Making those acquisitions more difficult, or even impossible, means removing one of the key ways startup founders can exit their firms, and hence one of the key rewards and incentives for starting an innovative new business. 

For more, our “Joint Submission of Antitrust Economists, Legal Scholars, and Practitioners” set out why many of the House Democrats’ assumptions about the state of the economy and antitrust enforcement were mistaken. And my post, “Buck’s “Third Way”: A Different Road to the Same Destination”, argued that House Republicans like Ken Buck were misguided in believing they could support some of the proposals and avoid the massive regulatory oversight that they said they rejected.

Platform Anti-Monopoly Act 

The flagship bill, introduced by Antitrust Subcommittee Chairman David Cicilline (D-R.I.), establishes a definition of “covered platform” used by several of the other bills. The measures would apply to platforms that have at least 500,000 U.S.-based users and a market capitalization of more than $600 billion, and that are deemed a “critical trading partner” with the ability to restrict or impede the access that a “dependent business” has to its users or customers.

Cicilline’s bill would bar these covered platforms from being able to promote their own products and services over the products and services of competitors who use the platform. It also defines a number of other practices that would be regarded as discriminatory, including: 

  • Restricting or impeding “dependent businesses” from being able to access the platform or its software on the same terms as the platform’s own lines of business;
  • Conditioning access or status on purchasing other products or services from the platform; 
  • Using user data to support the platform’s own products in ways not extended to competitors; 
  • Restricting the platform’s commercial users from using or accessing data generated on the platform from their own customers;
  • Restricting platform users from uninstalling software pre-installed on the platform;
  • Restricting platform users from providing links to facilitate business off of the platform;
  • Preferencing the platform’s own products or services in search results or rankings;
  • Interfering with how a dependent business prices its products; 
  • Impeding a dependent business’ users from connecting to services or products that compete with those offered by the platform; and
  • Retaliating against users who raise concerns with law enforcement about potential violations of the act.

On a basic level, these would prohibit lots of behavior that is benign and that can improve the quality of digital services for users. Apple pre-installing a Weather app on the iPhone would, for example, run afoul of these rules, and the rules as proposed could prohibit iPhones from coming with pre-installed apps at all. Instead, users would have to manually download each app themselves, if indeed Apple was allowed to include the App Store itself pre-installed on the iPhone, given that this competes with other would-be app stores.

Apart from the obvious reduction in the quality of services and convenience for users that this would involve, this kind of conduct (known as “self-preferencing”) is usually procompetitive. For example, self-preferencing allows platforms to compete with one another by using their strength in one market to enter a different one; Google’s Shopping results in the Search page increase the competition that Amazon faces, because it presents consumers with a convenient alternative when they’re shopping online for products. Similarly, Amazon’s purchase of the video-game streaming service Twitch, and the self-preferencing it does to encourage Amazon customers to use Twitch and support content creators on that platform, strengthens the competition that rivals like YouTube face. 

It also helps innovation, because it gives firms a reason to invest in services that would otherwise be unprofitable for them. Google invests in Android, and gives much of it away for free, because it can bundle Google Search into the OS, and make money from that. If Google could not self-preference Google Search on Android, the open source business model simply wouldn’t work—it wouldn’t be able to make money from Android, and would have to charge for it in other ways that may be less profitable and hence give it less reason to invest in the operating system. 

This behavior can also increase innovation by the competitors of these companies, both by prompting them to improve their products (as, for example, Google Android did with Microsoft’s mobile operating system offerings) and by growing the size of the customer base for products of this kind. For example, video games published by console manufacturers (like Nintendo’s Zelda and Mario games) are often blockbusters that grow the overall size of the user base for the consoles, increasing demand for third-party titles as well.

For more, check out Geoffrey Manne’s “Against the Vertical Discrimination Presumption” and Dirk Auer’s piece “On the Origin of Platforms: An Evolutionary Perspective”.

Ending Platform Monopolies Act 

Sponsored by Rep. Pramila Jayapal (D-Wash.), this bill would make it illegal for covered platforms to control lines of business that pose “irreconcilable conflicts of interest,” enforced through civil litigation powers granted to the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ).

Specifically, the bill targets lines of business that create “a substantial incentive” for the platform to advantage its own products or services over those of competitors that use the platform, or to exclude or disadvantage competing businesses from using the platform. The FTC and DOJ could potentially order that platforms divest lines of business that violate the act.

This targets similar conduct as the previous bill, but involves the forced separation of different lines of business. It also appears to go even further, seemingly implying that companies like Google could not even develop services like Google Maps or Chrome because their existence would create such “substantial incentives” to self-preference them over the products of their competitors. 

Apart from the straightforward loss of innovation and product developments this would involve, requiring every tech company to be narrowly focused on a single line of business would substantially entrench Big Tech incumbents, because it would make it impossible for them to extend into adjacent markets to compete with one another. For example, Apple could not develop a search engine to compete with Google under these rules, and Amazon would be forced to sell its video-streaming services that compete with Netflix and YouTube.

For more, check out Geoffrey Manne’s written testimony to the House Antitrust Subcommittee and “Platform Self-Preferencing Can Be Good for Consumers and Even Competitors” by Geoffrey and me. 

Platform Competition and Opportunity Act

Introduced by Rep. Hakeem Jeffries (D-N.Y.), this bill would bar covered platforms from making essentially any acquisitions at all. To be excluded from the ban on acquisitions, the platform would have to present “clear and convincing evidence” that the acquired business does not compete with the platform for any product or service, does not pose a potential competitive threat to the platform, and would not in any way enhance or help maintain the acquiring platform’s market position. 

The two main ways that founders and investors can make a return on a successful startup are to float the company at IPO or to be acquired by another business. The latter of these, acquisitions, is extremely important. Between 2008 and 2019, 90 percent of U.S. startup exits happened through acquisition. In a recent survey, half of current startup executives said they aimed to be acquired. One study found that countries that made it easier for firms to be taken over saw a 40-50 percent increase in VC activity, and that U.S. states that made acquisitions harder saw a 27 percent decrease in VC investment deals.

So this proposal would probably reduce investment in U.S. startups, since it makes it more difficult for them to be acquired. It would therefore reduce innovation. It would also reduce inter-platform competition by banning deals that allow firms to move into new markets, like the acquisition of Beats that helped Apple to build a Spotify competitor, or the deals that helped Google, Microsoft, and Amazon build cloud-computing services that all compete with each other. It could also reduce competition faced by old industries, by preventing tech companies from buying firms that enable them to move into new markets—like Amazon’s acquisitions of health-care companies that it has used to build a health-care offering. Even Walmart’s acquisition of Jet.com, which it has used to build an Amazon competitor, could have been banned under this law if Walmart had had a higher market cap at the time.

For more, check out Dirk Auer’s piece “Facebook and the Pros and Cons of Ex Post Merger Reviews” and my piece “Cracking down on mergers would leave us all worse off”. 

ACCESS Act

The Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, sponsored by Rep. Mary Gay Scanlon (D-Pa.), would establish data portability and interoperability requirements for platforms. 

Under terms of the legislation, covered platforms would be required to allow third parties to transfer data to their users or, with the user’s consent, to a competing business. It also would require platforms to facilitate compatible and interoperable communications with competing businesses. The law directs the FTC to establish technical committees to promulgate the standards for portability and interoperability. 

Data portability and interoperability involve trade-offs in terms of security and usability, and overseeing them can be extremely costly and difficult. In security terms, interoperability requirements prevent companies from using closed systems to protect users from hostile third parties. Mandatory openness means increasing—sometimes, substantially so—the risk of data breaches and leaks. In practice, that could mean users’ private messages or photos being leaked more frequently, or activity on a social media page that one user considers to be “their” private data, but that “belongs” to another user under the terms of use, being exported and publicized.

It can also make digital services more buggy and unreliable, by requiring that they are built in a more “open” way that may be more prone to unanticipated software mismatches. A good example is that of Windows vs iOS; Windows is far more interoperable with third-party software than iOS is, but tends to be less stable as a result, and users often prefer the closed, stable system. 

Interoperability requirements also entail ongoing regulatory oversight, to make sure data is being provided to third parties reliably. It’s difficult to build an app around another company’s data without assurance that the data will be available when users want it. For a requirement as broad as this bill’s, that could mean setting up quite a large new de facto regulator. 

In the UK, Open Banking (an interoperability requirement imposed on British retail banks) has suffered from significant service outages, and targets a level of uptime that many developers complain is too low for them to build products around. Nor has Open Banking yet led to any obvious competition benefits.

For more, check out Gus Hurwitz’s piece “Portable Social Media Aren’t Like Portable Phone Numbers” and my piece “Why Data Interoperability Is Harder Than It Looks: The Open Banking Experience”.

Merger Filing Fee Modernization Act

A bill that mirrors language in the Endless Frontier Act, recently passed by the U.S. Senate, would significantly raise filing fees for the largest mergers. Rather than the current cap of $280,000 for mergers valued at more than $500 million, the bill, sponsored by Rep. Joe Neguse (D-Colo.), would assess fees of $2.25 million for mergers valued at more than $5 billion; $800,000 for those valued at between $2 billion and $5 billion; and $400,000 for those between $1 billion and $2 billion.

Smaller mergers would actually see their filing fees cut: from $280,000 to $250,000 for those between $500 million and $1 billion; from $125,000 to $100,000 for those between $161.5 million and $500 million; and from $45,000 to $30,000 for those less than $161.5 million. 
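For illustration only, here is a minimal Python sketch of the proposed fee schedule described above. The constant names and the treatment of deals that fall exactly on a tier boundary are assumptions made for presentation; the bill’s text governs the actual cutoffs.

```python
# Illustrative sketch of the proposed merger filing-fee schedule described above.
# Boundary handling at exact cutoffs is an assumption, not the bill's text.

PROPOSED_FEE_TIERS = [
    (5_000_000_000, 2_250_000),  # more than $5 billion
    (2_000_000_000, 800_000),    # $2 billion to $5 billion
    (1_000_000_000, 400_000),    # $1 billion to $2 billion
    (500_000_000, 250_000),      # $500 million to $1 billion
    (161_500_000, 100_000),      # $161.5 million to $500 million
    (0, 30_000),                 # less than $161.5 million
]

def proposed_filing_fee(deal_value: float) -> int:
    """Return the proposed filing fee for a merger of the given dollar value."""
    for threshold, fee in PROPOSED_FEE_TIERS:
        if deal_value > threshold:
            return fee
    return PROPOSED_FEE_TIERS[-1][1]

# Example: a $6 billion deal would pay $2.25 million, up from today's $280,000 cap.
assert proposed_filing_fee(6_000_000_000) == 2_250_000
```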

In addition, the bill would appropriate $418 million to the FTC and $252 million to the DOJ’s Antitrust Division for Fiscal Year 2022. Most people in the antitrust world are generally supportive of more funding for the FTC and DOJ, although whether the extra money is actually beneficial depends on how it is spent by those agencies.

It’s hard to object if the money goes toward deepening the agencies’ capacities and knowledge: hiring and retaining higher-quality staff with salaries that are more competitive with those offered by the private sector, and funding greater efforts to study the effects of the antitrust laws and past cases on the economy. If it instead goes toward broadening the agencies’ activities, enabling them to pursue a more aggressive enforcement agenda and to support whatever of the above proposals make it into law, then it could be very harmful.

For more, check out my post “Buck’s “Third Way”: A Different Road to the Same Destination” and Thom Lambert’s post “Bad Blood at the FTC”.

Despite calls from some NGOs to mandate radical interoperability, the EU’s draft Digital Markets Act (DMA) adopted a more measured approach, requiring full interoperability only in “ancillary” services like identification or payment systems. There remains the possibility, however, that the DMA proposal will be amended to include stronger interoperability mandates, or that such amendments will be introduced in the Digital Services Act. Without the right checks and balances, this could pose grave threats to Europeans’ privacy and security.

At the most basic level, interoperability means a capacity to exchange information between computer systems. Email is an example of an interoperable standard that most of us use today. Expanded interoperability could offer promising solutions to some of today’s difficult problems. For example, it might allow third-party developers to offer different “flavors” of social media news feed, with varying approaches to content ranking and moderation (see Daphne Keller, Mike Masnick, and Stephen Wolfram for more on that idea). After all, in a pluralistic society, someone will always be unhappy with what some others consider appropriate content. Why not let smaller groups decide what they want to see? 
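As a purely illustrative sketch of that “flavors of news feed” idea (the names and interfaces below are invented, not any platform’s actual API), one could imagine a platform handing a user’s candidate posts to a third-party ranking module chosen by that user:

```python
# Hypothetical sketch only: a platform defers feed ordering to a third-party
# "ranker" chosen by the user. All names here are invented for illustration.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    timestamp: float  # Unix time

# A "feed flavor" is just a ranking function supplied by a third-party developer.
FeedRanker = Callable[[List[Post]], List[Post]]

def chronological_ranker(posts: List[Post]) -> List[Post]:
    """One possible third-party flavor: newest first, no engagement weighting."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def render_feed(candidate_posts: List[Post], ranker: FeedRanker) -> List[Post]:
    """The platform gathers candidate posts, then lets the user's chosen ranker order them."""
    return ranker(candidate_posts)
```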

But to achieve that goal using currently available technology, third-party developers would have to be able to access all of a platform’s content that is potentially available to a user. This would include not just content produced by users who explicitly agree for their data to be shared with third parties, but also content—e.g., posts, comments, likes—created by others who may have strong objections to such sharing. It doesn’t require much imagination to see how, without adequate safeguards, mandating this kind of information exchange would inevitably result in something akin to the 2018 Cambridge Analytica data scandal.

It is telling that supporters of this kind of interoperability use services like email as their model examples. Email (more precisely, the SMTP protocol) originally was designed in a notoriously insecure way. It is a perfect example of the opposite of privacy by design. A good analogy for the levels of privacy and security provided by email, as originally conceived, is that of a postcard message sent without an envelope that passes through many hands before reaching the addressee. Even today, email continues to be a source of security concerns due to its prioritization of interoperability.

It also is telling that supporters of interoperability tend to point to small-scale platforms (e.g., Mastodon) or to protocols with unacceptably poor usability for most of today’s Internet users (e.g., Usenet). When proposing solutions to potential privacy problems—e.g., that users will adequately monitor how various platforms use their data—they often assume unrealistic levels of user interest or technical acumen.

Interoperability in the DMA

The current draft of the DMA contains several provisions relevant to interoperability, all of which would apply only to “gatekeepers”—i.e., the largest online platforms:

  1. Mandated interoperability of “ancillary services” (Art 6(1)(f)); 
  2. Real-time data portability (Art 6(1)(h)); and
  3. Business-user access to their own and end-user data (Art 6(1)(i)). 

The first provision, Art 6(1)(f), is meant to force gatekeepers to allow third-party payment or identification services—for example, allowing people to create social media accounts without providing an email address, which is possible using services like “Sign in with Apple.” This kind of interoperability doesn’t pose as big of a privacy risk as mandated interoperability of “core” services (e.g., messaging on a platform like WhatsApp or Signal), partially due to a more limited scope of data that needs to be exchanged.

However, even here, there may be some risks. For example, users may choose poorly secured identification services and thus become victims of attacks. Therefore, it is important that gatekeepers not be prevented from protecting their users adequately. Of course, there are likely trade-offs between those protections and the interoperability that some want. Proponents of stronger interoperability want this provision amended to cover all “core” services, not just “ancillary” ones, which would constitute precisely the kind of radical interoperability that cannot be safely mandated today.

The other two provisions do not mandate full two-way interoperability, where a third party could both read data from a service like Facebook and modify content on that service. Instead, they provide for one-way “continuous and real-time” access to data—read-only.

The second provision (Art 6(1)(h)) mandates that gatekeepers give users effective “continuous and real-time” access to data “generated through” their activity. It’s not entirely clear whether this provision would be satisfied by, e.g., Facebook’s Graph API, but it likely would not be satisfied simply by being able to download one’s Facebook data, as that is not “continuous and real-time.”

Importantly, the proposed provision explicitly references the General Data Protection Regulation (GDPR), which suggests that—at least as regards personal data—the scope of this portability mandate is not meant to be broader than that from Article 20 GDPR. Given the GDPR reference and the qualification that it applies to data “generated through” the user’s activity, this mandate would not include data generated by other users—which is welcome, but likely will not satisfy the proponents of stronger interoperability.

The third provision from Art 6(1)(i) mandates only “continuous and real-time” data access and only as regards data “provided for or generated in the context of the use of the relevant core platform services” by business users and by “the end users engaging with the products or services provided by those business users.” This provision is also explicitly qualified with respect to personal data, which are to be shared after GDPR-like user consent and “only where directly connected with the use effectuated by the end user in respect of” the business user’s service. The provision should thus not be a tool for a new Cambridge Analytica to siphon data on users who interact with some Facebook page or app and their unwitting contacts. However, for the same reasons, it will also not be sufficient for the kinds of uses that proponents of stronger interoperability envisage.
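To make the distinction concrete, the sketch below is purely hypothetical (the interface and method names are invented; they come from neither the DMA nor any platform’s API). The first interface roughly corresponds to the one-way, read-only access contemplated by Art 6(1)(h) and 6(1)(i); the second adds the write operations and cross-user reads that stronger interoperability mandates would require.

```python
# Hypothetical sketch: the DMA's data-access provisions resemble the read-only
# interface; "radical interoperability" proposals would also require the write
# and cross-user read methods. All names are invented for illustration.
from typing import Iterator, Protocol

class ReadOnlyDataAccess(Protocol):
    """Roughly Art 6(1)(h)/(i): continuous, real-time, one-way access to data
    generated through the consenting user's own activity."""
    def stream_user_activity(self, user_id: str, access_token: str) -> Iterator[dict]:
        ...

class TwoWayInteroperability(ReadOnlyDataAccess, Protocol):
    """What stronger interoperability mandates would add on top."""
    def post_content(self, user_id: str, access_token: str, content: dict) -> str:
        """Write content back to the platform on the user's behalf."""
        ...
    def read_other_users_content(self, requesting_app: str, content_id: str) -> dict:
        """Read content created by users who never dealt with the third party."""
        ...
```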

Why can’t stronger interoperability be safely mandated today?

Let’s imagine that Art 6(1)(f) is amended to cover all “core” services, so gatekeepers like Facebook end up with a legal duty to allow third parties to read data from and write data to Facebook via APIs. This would go beyond what is currently possible using Facebook’s Graph API, and would lack the current safety valve of Facebook cutting off access because of the legal duty to deal created by the interoperability mandate. As Cory Doctorow and Bennett Cyphers note, there are at least three categories of privacy and security risks in this situation:

1. Data sharing and mining via new APIs;

2. New opportunities for phishing and sock puppetry in a federated ecosystem; and

3. More friction for platforms trying to maintain a secure system.

Unlike some other proponents of strong interoperability, Doctorow and Cyphers are open about the scale of the risk: “[w]ithout new legal safeguards to protect the privacy of user data, this kind of interoperable ecosystem could make Cambridge Analytica-style attacks more common.”

There are bound to be attempts to misuse interoperability through clearly criminal activity. But there also are likely to be more legally ambiguous attempts that are harder to proscribe ex ante. Proposals for strong interoperability mandates need to address this kind of problem.

So, what could be done to make strong interoperability reasonably safe? Doctorow and Cyphers argue that there is a “need for better privacy law,” but don’t say whether they think the GDPR’s rules fit the bill. This may be a matter of reasonable disagreement.

What isn’t up for serious debate is that the current framework and practice of privacy enforcement offers little confidence that misuses of strong interoperability would be detected and prosecuted, much less that they would be prevented (see here and here on GDPR enforcement). This is especially true for smaller and “judgment-proof” rule-breakers, including those from outside the European Union. Addressing the problems of privacy law enforcement is a herculean task, in and of itself.

The day may come when radical interoperability will, thanks to advances in technology and/or privacy enforcement, become acceptably safe. But it would be utterly irresponsible to mandate radical interoperability in the DMA and/or DSA, and simply hope the obvious privacy and security problems will somehow be solved before the law takes force. Instituting such a mandate would likely discredit the very idea of interoperability.

The European Commission this week published its proposed Artificial Intelligence Regulation, setting out new rules for “artificial intelligence systems” used within the European Union. The regulation—the commission’s attempt to limit pernicious uses of AI without discouraging its adoption in beneficial cases—casts a wide net in defining AI to include essentially any software developed using machine learning. As a result, a host of software may fall under the regulation’s purview.

The regulation categorizes AIs by the kind and extent of risk they may pose to health, safety, and fundamental rights, with the overarching goal to:

  • Prohibit “unacceptable risk” AIs outright;
  • Place strict restrictions on “high-risk” AIs;
  • Place minor restrictions on “limited-risk” AIs;
  • Create voluntary “codes of conduct” for “minimal-risk” AIs;
  • Establish a regulatory sandbox regime for AI systems; 
  • Set up a European Artificial Intelligence Board to oversee regulatory implementation; and
  • Set fines for noncompliance at up to 30 million euros, or 6% of worldwide turnover, whichever is greater.

AIs That Are Prohibited Outright

The regulation prohibits AIs that are used to exploit people’s vulnerabilities or that use subliminal techniques to distort behavior in a way likely to cause physical or psychological harm. Also prohibited are AIs used by public authorities to give people a trustworthiness score, if that score would then be used to treat a person unfavorably in a separate context or in a way that is disproportionate. The regulation also bans the use of “real-time” remote biometric identification (such as facial-recognition technology) in public spaces by law enforcement, with exceptions for specific and limited uses, such as searching for a missing child.

The first prohibition raises some interesting questions. The regulation says that an “exploited vulnerability” must relate to age or disability. In its announcement, the commission says this is targeted toward AIs such as toys that might induce a child to engage in dangerous behavior.

The ban on AIs using “subliminal techniques” is more opaque. The regulation doesn’t give a clear definition of what constitutes a “subliminal technique,” other than that it must be something “beyond a person’s consciousness.” Would this include TikTok’s algorithm, which imperceptibly adjusts the videos shown to the user to keep them engaged on the platform? The notion that this might cause harm is not fanciful, but it’s unclear whether the provision would be interpreted to be that expansive, whatever the commission’s intent might be. There is at least a risk that this provision would discourage innovative new uses of AI, causing businesses to err on the side of caution to avoid the huge penalties that breaking the rules would incur.

The prohibition on AIs used for social scoring is limited to public authorities. That leaves space for socially useful expansions of scoring systems, such as consumers using their Uber rating to show a record of previous good behavior to a potential Airbnb host. The ban is clearly oriented toward more expansive and dystopian uses of social credit systems, which some fear may be used to arbitrarily lock people out of society.

The ban on remote biometric identification AI is similarly limited to its use by law enforcement in public spaces. The limited exceptions (preventing an imminent terrorist attack, searching for a missing child, etc.) would be subject to judicial authorization except in cases of emergency, where ex-post authorization can be sought. The prohibition leaves room for private enterprises to innovate, but all non-prohibited uses of remote biometric identification would be subject to the requirements for high-risk AIs.

Restrictions on ‘High-Risk’ AIs

Some AI uses are not prohibited outright, but instead categorized as “high-risk” and subject to strict rules before they can be used or put to market. AI systems considered to be high-risk include those used for:

  • Safety components for certain types of products;
  • Remote biometric identification, except those uses that are banned outright;
  • Safety components in the management and operation of critical infrastructure, such as gas and electricity networks;
  • Dispatching emergency services;
  • Educational admissions and assessments;
  • Employment, workers management, and access to self-employment;
  • Evaluating credit-worthiness;
  • Assessing eligibility to receive social security benefits or services;
  • A range of law-enforcement purposes (e.g., detecting deepfakes or predicting the occurrence of criminal offenses);
  • Migration, asylum, and border-control management; and
  • Administration of justice.

While the commission considers these AIs to be those most likely to cause individual or social harm, it may not have appropriately balanced those perceived harms with the onerous regulatory burdens placed upon their use.

As Mikołaj Barczentewicz at the Surrey Law and Technology Hub has pointed out, the regulation would discourage even simple uses of logic or machine-learning systems in such settings as education or workplaces. This would mean that any workplace that develops machine-learning tools to enhance productivity—through, for example, monitoring or task allocation—would be subject to stringent requirements. These include requirements to have risk-management systems in place, to use only “high quality” datasets, and to allow human oversight of the AI, as well as other requirements around transparency and documentation.

The obligations would apply to any companies or government agencies that develop an AI (or for whom an AI is developed) with a view toward marketing it or putting it into service under their own name. The obligations could even attach to distributors, importers, users, or other third parties if they make a “substantial modification” to the high-risk AI, market it under their own name, or change its intended purpose—all of which could potentially discourage adaptive use.

Without going into unnecessary detail regarding each requirement, some are likely to have competition- and innovation-distorting effects that are worth discussing.

The rule that data used to train, validate, or test a high-risk AI has to be high quality (“relevant, representative, and free of errors”) assumes that perfect, error-free datasets exist, or at least can easily be identified. Not only is this not necessarily the case, but the requirement could impose an impossible standard on some activities. Given this high bar, high-risk AIs that use data of merely “good” quality could be precluded. It also would cut against the frontiers of research in artificial intelligence, where sometimes only small and lower-quality datasets are available to train AI. A predictable effect is that the rule would benefit large companies that are more likely to have access to large, high-quality datasets, while rules like the GDPR make it difficult for smaller companies to acquire that data.

Providers of high-risk AIs also must submit technical and user documentation that details voluminous information about the AI system, including descriptions of the AI’s elements, its development, monitoring, functioning, and control. The documentation must demonstrate that the AI complies with all the requirements for high-risk AIs, in addition to documenting its characteristics, capabilities, and limitations. The requirement to produce vast amounts of information represents another potentially significant compliance cost that will be particularly felt by startups and other small and medium-sized enterprises (SMEs). This could further discourage AI adoption within the EU, as European enterprises already consider liability for potential damages and regulatory obstacles to be impediments to AI adoption.

The requirement that the AI be subject to human oversight entails that the AI can be overseen and understood by a human being and that the AI can never override a human user. While it may be important that an AI used in, say, the criminal justice system must be understood by humans, this requirement could inhibit sophisticated uses beyond the reasoning of a human brain, such as safely operating a national electricity grid. Providers of high-risk AI systems also must establish a post-market monitoring system to evaluate continuous compliance with the regulation, representing another potentially significant ongoing cost for the use of high-risk AIs.

The regulation also places certain restrictions on “limited-risk” AIs, notably deepfakes and chatbots. Such AIs must be labeled to make a user aware they are looking at or listening to manipulated images, video, or audio. AIs must also be labeled to ensure humans are aware when they are speaking to an artificial intelligence, where this is not already obvious.

Taken together, these regulatory burdens may be greater than the benefits they generate, and could chill innovation and competition. The impact on smaller EU firms, which already are likely to struggle to compete with the American and Chinese tech giants, could prompt them to move outside the European jurisdiction altogether.

Regulatory Support for Innovation and Competition

To reduce the costs of these rules, the regulation also includes a new regulatory “sandbox” scheme. The sandboxes would putatively offer environments to develop and test AIs under the supervision of competent authorities, although exposure to liability would remain for harms caused to third parties and AIs would still have to comply with the requirements of the regulation.

SMEs and startups would have priority access to the regulatory sandboxes, although they must meet the same eligibility conditions as larger competitors. There would also be awareness-raising activities to help SMEs and startups to understand the rules; a “support channel” for SMEs within the national regulator; and adjusted fees for SMEs and startups to establish that their AIs conform with requirements.

These measures are intended to prevent the sort of chilling effect that was seen as a result of the GDPR, which led to a 17% increase in market concentration after it was introduced. But it’s unclear that they would accomplish this goal. (Notably, the GDPR contained similar provisions offering awareness-raising activities and derogations from specific duties for SMEs.) Firms operating in the “sandboxes” would still be exposed to liability, and the only significant difference to market conditions appears to be the “supervision” of competent authorities. It remains to be seen how this arrangement would sufficiently promote innovation as to overcome the burdens placed on AI by the significant new regulatory and compliance costs.

Governance and Enforcement

Each EU member state would be expected to appoint a “national competent authority” to implement and apply the regulation, as well as bodies to ensure high-risk systems conform with rules that require third-party assessments, such as remote biometric identification AIs.

The regulation establishes the European Artificial Intelligence Board to act as the union-wide regulatory body for AI. The board would be responsible for sharing best practices with member states, harmonizing practices among them, and issuing opinions on matters related to implementation.

As mentioned earlier, maximum penalties for marketing or using a prohibited AI (as well as for failing to use high-quality datasets) would be a steep 30 million euros or 6% of worldwide turnover, whichever is greater. Breaking other requirements for high-risk AIs carries maximum penalties of 20 million euros or 4% of worldwide turnover, while maximums of 10 million euros or 2% of worldwide turnover would be imposed for supplying incorrect, incomplete, or misleading information to the nationally appointed regulator.
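As a simple illustration of how those caps combine a fixed amount with a turnover percentage, here is a minimal Python sketch. The tier labels are shorthand for presentation only; the regulation’s text governs which infringements fall into which tier.

```python
# Illustrative sketch of the maximum-penalty caps described above.
# Tier labels are shorthand for illustration; the regulation's text governs
# which infringements fall into which tier.

PENALTY_CAPS_EUR = {
    "prohibited_ai_or_data_quality": (30_000_000, 0.06),       # 30M EUR or 6% of turnover
    "other_high_risk_requirements": (20_000_000, 0.04),        # 20M EUR or 4% of turnover
    "incorrect_information_to_regulator": (10_000_000, 0.02),  # 10M EUR or 2% of turnover
}

def max_penalty(tier: str, worldwide_turnover_eur: float) -> float:
    """Return the cap: the fixed amount or the turnover share, whichever is greater."""
    fixed_cap, turnover_share = PENALTY_CAPS_EUR[tier]
    return max(fixed_cap, turnover_share * worldwide_turnover_eur)

# Example: a firm with 2 billion EUR worldwide turnover that markets a
# prohibited AI faces a cap of max(30M, 6% of 2B) = 120 million EUR.
assert max_penalty("prohibited_ai_or_data_quality", 2_000_000_000) == 120_000_000
```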

Is the Commission Overplaying its Hand?

While the regulation only restricts AIs seen as creating risk to society, it defines that risk so broadly and vaguely that benign applications of AI may be included in its scope, intentionally or unintentionally. Moreover, the commission also proposes voluntary codes of conduct that would apply similar requirements to “minimal” risk AIs. These codes—optional for now—may signal the commission’s intent eventually to further broaden the regulation’s scope and application.

The commission clearly hopes it can rely on the “Brussels Effect” to steer the rest of the world toward tighter AI regulation, but it is also possible that other countries will seek to attract AI startups and investment by introducing less stringent regimes.

For the EU itself, more regulation must be balanced against the need to foster AI innovation. Without European tech giants of its own, the commission must be careful not to stifle the SMEs that form the backbone of the European market, particularly if global competitors are able to innovate more freely in the American or Chinese markets. If the commission has got the balance wrong, it may find that AI development simply goes elsewhere, with the EU fighting the battle for the future of AI with one hand tied behind its back.

We can expect a decision very soon from the High Court of Ireland on last summer’s Irish Data Protection Commission (“IDPC”) decision that placed serious impediments on the transfer of data across the Atlantic. That decision, coupled with the July 2020 Court of Justice of the European Union (“CJEU”) decision to invalidate the Privacy Shield agreement between the European Union and the United States, has placed the future of transatlantic trade in jeopardy.

In 2015, the CJEU’s Schrems decision invalidated the longstanding “safe harbor” agreement between the EU and U.S. that had been designed to ensure data transfers between the two zones complied with EU privacy requirements. The CJEU later invalidated the Privacy Shield agreement that was created in response to Schrems. In its decision, the court reasoned that U.S. foreign intelligence laws like FISA Section 702 and Executive Order 12333—which give the U.S. government broad latitude to surveil data and offer foreign persons few rights to challenge such surveillance—rendered U.S. firms unable to guarantee the privacy protections of EU citizens’ data.

The IDPC’s decision employed the same logic: if U.S. surveillance laws give the government unreviewable power to spy on foreign citizens’ data, then standard contractual clauses—an alternative mechanism firms use to transfer data—are incapable of satisfying the requirements of EU law.

The implications that flow from this are troubling, to say the least. In the worst case, laws like the CLOUD Act could leave a wide swath of U.S. firms practically incapable of doing business in the EU. In the slightly less bad case, firms could be forced to completely localize their data and disrupt the economies of scale that flow from being able to process global data in a unified manner. In any case, the costs of compliance will be massive.

But even if the Irish court upholds the IDPC’s decision, there could still be a path forward for the U.S. and EU to preserve transatlantic digital trade. EU Commissioner for Justice Didier Reynders and U.S. Commerce Secretary Gina Raimondo recently issued a joint statement asserting they are “intensifying” negotiations to develop an enhanced successor to the EU-US Privacy Shield agreement. One can hope the talks are both fast and intense.

It seems unlikely that the Irish High Court would simply overturn the IDPC’s ruling. Instead, the IDPC’s decision will likely be upheld, possibly with recommended modifications. But even in that case, there is a process that buys the U.S. and EU a bit more time before any transatlantic trade involving consumer data grinds to a halt.

After considering replies to its draft decision, the IDPC would issue final recommendations on the extent of the data-transfer suspensions it deems necessary. It would then need to harmonize its recommendations with the other EU data-protection authorities. Theoretically, that could occur in a matter of days, but practically speaking, it would more likely occur over weeks or months. Assuming we get a decision from the Irish High Court before the end of April, it puts the likely deadline for suspension of transatlantic data transfers somewhere between June and September.

That’s not great, but it is not an impossible hurdle to overcome and there are temporary fixes the Biden administration could put in place. Two major concerns need to be addressed.

  1. U.S. data collection on EU citizens needs to be proportional to the necessities of intelligence gathering. Currently, the U.S. intelligence agencies have wide latitude to collect a large amount of data.
  2. The ombudsperson that the Privacy Shield agreement created to administer foreign citizens’ data requests was not sufficiently insulated from the political process, leaving EU citizens without an adequate avenue for redress.

As Alex Joel recently noted, the Biden administration has ample powers to effect many of these changes through executive action. After all, EO 12333 was itself a creation of the executive branch. Other changes necessary to shape foreign surveillance to be in accord with EU requirements could likewise arise from the executive branch.

Nonetheless, Congress should not take that as a cue for complacency. It is possible that even if the Biden administration acts, the CJEU could find some or all of the measures insufficient. As the Biden team works to put changes in place through executive order, Congress should pursue surveillance reform through legislation.

Theoretically, the above fixes should be possible; there is not much partisan rancor about transatlantic trade as a general matter. But time is short, and this should be a top priority on policymakers’ radars.

(note: edited to clarify that the Irish High Court is not reviewing SCCs directly and that the CLOUD Act would not impose legal barriers for firms, but practical ones).

Policy discussions about the use of personal data often have “less is more” as a background assumption: that data is overconsumed relative to some hypothetical optimal baseline. This overriding skepticism has been the backdrop for sweeping new privacy regulations, such as the California Consumer Privacy Act (CCPA) and the EU’s General Data Protection Regulation (GDPR).

More recently, as part of the broad pushback against data collection by online firms, some have begun to call for creating property rights in consumers’ personal data or for data to be treated as labor. Prominent backers of the idea include New York City mayoral candidate Andrew Yang and computer scientist Jaron Lanier.

The discussion has escaped the halls of academia and made its way into popular media. During a recent discussion with Tesla founder Elon Musk, comedian and podcast host Joe Rogan argued that Facebook is “one gigantic information-gathering business that’s decided to take all of the data that people didn’t know was valuable and sell it and make f***ing billions of dollars.” Musk appeared to agree.

The animosity exhibited toward data collection might come as a surprise to anyone who has taken Econ 101. Goods ideally end up with those who value them most. A firm finding profitable ways to repurpose unwanted scraps is just the efficient reallocation of resources. This applies as much to personal data as to literal trash.

Unfortunately, in the policy sphere, few are willing to recognize the inherent trade-off between the value of privacy, on the one hand, and the value of various goods and services that rely on consumer data, on the other. Ideally, policymakers would look to markets to find the right balance, which they often can. When the transfer of data is hardwired into an underlying transaction, parties have ample room to bargain.

But this is not always possible. In some cases, transaction costs will prevent parties from bargaining over the use of data. The question is whether such situations are so widespread as to justify the creation of data property rights, with all of the allocative inefficiencies they entail. Critics wrongly assume the solution is both to create data property rights and to allocate them to consumers. But there is no evidence to suggest that, at the margin, heightened user privacy necessarily outweighs the social benefits that new data-reliant goods and services would generate. Recent experience in the worlds of personalized medicine and the fight against COVID-19 help to illustrate this point.

Data Property Rights and Personalized Medicine

The world is on the cusp of a revolution in personalized medicine. Advances such as the improved identification of biomarkers, CRISPR genome editing, and machine learning could usher in a new wave of treatments that markedly improve health outcomes.

Personalized medicine uses information about a person’s own genes or proteins to prevent, diagnose, or treat disease. Genetic-testing companies like 23andMe or Family Tree DNA, with the large troves of genetic information they collect, could play a significant role in helping the scientific community to further medical progress in this area.

However, despite the obvious potential of personalized medicine, many of its real-world applications are still very much hypothetical. While governments could act in any number of ways to accelerate the movement’s progress, recent policy debates have instead focused more on whether to create a system of property rights covering personal genetic data.

Some raise concerns that it is pharmaceutical companies, not consumers, who will reap the monetary benefits of the personalized medicine revolution, and that advances are achieved at the expense of consumers’ and patients’ privacy. They contend that data property rights would ensure that patients earn their “fair” share of personalized medicine’s future profits.

But it’s worth examining the other side of the coin. There are few things people value more than their health. U.S. governmental agencies place the value of a single life at somewhere between $1 million and $10 million. The commonly used quality-adjusted life year metric offers valuations that range from $50,000 to upward of $300,000 per incremental year of life.

It therefore follows that the trivial sums users of genetic-testing kits might derive from a system of data property rights would likely be dwarfed by the value they would enjoy from improved medical treatments. A strong case can be made that policymakers should prioritize advancing the emergence of new treatments, rather than attempting to ensure that consumers share in the profits generated by those potential advances.

These debates drew increased attention last year, when 23andMe signed a strategic agreement with the pharmaceutical company Almirall to license the rights related to an antibody Almirall had developed. Critics pointed out that 23andMe’s customers, whose data had presumably been used to discover the potential treatment, received no monetary benefits from the deal. Journalist Laura Spinney wrote in The Guardian newspaper:

23andMe, for example, asks its customers to waive all claims to a share of the profits arising from such research. But given those profits could be substantial—as evidenced by the interest of big pharma—shouldn’t the company be paying us for our data, rather than charging us to be tested?

In the deal’s wake, some argued that personal health data should be covered by property rights. A cardiologist quoted in Fortune magazine opined: “I strongly believe that everyone should own their medical data—and they have a right to that.” But this strong belief, however widely shared, ignores important lessons that law and economics has to teach about property rights and the role of contractual freedom.

Why Do We Have Property Rights?

Among the many important features of property rights is that they create “excludability,” the ability of economic agents to prevent third parties from using a given item. In the words of law professor Richard Epstein:

[P]roperty is not an individual conception, but is at root a social conception. The social conception is fairly and accurately portrayed, not by what it is I can do with the thing in question, but by who it is that I am entitled to exclude by virtue of my right. Possession becomes exclusive possession against the rest of the world…

Excludability helps to facilitate the trade of goods, offers incentives to create those goods in the first place, and promotes specialization throughout the economy. In short, property rights create a system of exclusion that supports creating and maintaining valuable goods, services, and ideas.

But property rights are not without drawbacks. Physical or intellectual property can lead to a suboptimal allocation of resources, namely by conferring market power (though this effect is often outweighed by increased ex ante incentives to create and innovate). Similarly, property rights can give rise to thickets that significantly increase the cost of amassing complementary pieces of property. Often cited are the historic (but contested) examples of tolling on the Rhine River or the airplane patent thicket of the early 20th century. Finally, strong property rights might also lead to holdout behavior, which can be addressed through top-down tools, like eminent domain, or private mechanisms, like contingent contracts.

In short, though property rights—whether they cover physical or information goods—can offer vast benefits, there are cases where they might be counterproductive. This is probably why, throughout history, property laws have evolved to strike a reasonable balance between creating incentives to produce goods and ensuring their efficient allocation and use.

Personal Health Data: What Are We Trying to Incentivize?

There are at least three critical questions we should ask about proposals to create property rights over personal health data.

  1. What goods or behaviors would these rights incentivize or disincentivize that are currently over- or undersupplied by the market?
  2. Are goods over- or undersupplied because of insufficient excludability?
  3. Could these rights undermine the efficient use of personal health data?

Much of the current debate centers on data obtained from direct-to-consumer genetic-testing kits. In this context, almost by definition, firms only obtain consumers’ genetic data with their consent. In western democracies, the rights to bodily integrity and to privacy generally make it illegal to administer genetic tests against a consumer or patient’s will. This makes genetic information naturally excludable, so consumers already benefit from what is effectively a property right.

When consumers decide to use a genetic-testing kit, the terms set by the testing firm generally stipulate how their personal data will be used. 23andMe has a detailed policy to this effect, as does Family Tree DNA. In the case of 23andMe, consumers can decide whether their personal information can be used for the purpose of scientific research:

You have the choice to participate in 23andMe Research by providing your consent. … 23andMe Research may study a specific group or population, identify potential areas or targets for therapeutics development, conduct or support the development of drugs, diagnostics or devices to diagnose, predict or treat medical or other health conditions, work with public, private and/or nonprofit entities on genetic research initiatives, or otherwise create, commercialize, and apply this new knowledge to improve health care.

Because this transfer of personal information is hardwired into the provision of genetic-testing services, there is space for contractual bargaining over the allocation of this information. The right to use personal health data will go toward the party that values it most, especially if information asymmetries are weeded out by existing regulations or business practices.

Regardless of data property rights, consumers have a choice: they can purchase genetic-testing services and agree to the provider’s data policy, or they can forgo the services. The service provider cannot obtain the data without entering into an agreement with the consumer. While competition between providers will affect parties’ bargaining positions, and thus the price and terms on which these services are provided, data property rights likely will not.

So, why do consumers transfer control over their genetic data? The main reason is that genetic information is inaccessible and worthless without the addition of genetic-testing services. Consumers must pass through the bottleneck of genetic testing for their genetic data to be revealed and transformed into usable information. It therefore makes sense to transfer the information to the service provider, who is in a much stronger position to draw insights from it. From the consumer’s perspective, the data is not even truly “transferred,” as the consumer had no access to it before the genetic-testing service revealed it. The value of this genetic information is then netted out in the price consumers pay for testing kits.

If personal health data were undersupplied by consumers and patients, testing firms could sweeten the deal and offer them more in return for their data. U.S. copyright law covers original compilations of data, while EU law gives 15 years of exclusive protection to the creators of original databases. Legal protections for trade secrets could also play some role. Thus, firms have some incentives to amass valuable health datasets.

But some critics argue that health data is, in fact, oversupplied. Generally, such arguments assert that agents do not account for the negative privacy externalities suffered by third parties, such as adverse-selection problems in insurance markets. For example, Jay Pil Choi, Doh Shin Jeon, and Byung Cheol Kim argue:

Genetic tests are another example of privacy concerns due to informational externalities. Researchers have found that some subjects’ genetic information can be used to make predictions of others’ genetic disposition among the same racial or ethnic category.  … Because of practical concerns about privacy and/or invidious discrimination based on genetic information, the U.S. federal government has prohibited insurance companies and employers from any misuse of information from genetic tests under the Genetic Information Nondiscrimination Act (GINA).

But if these externalities exist (most of the examples cited by scholars are hypothetical), they are likely dwarfed by the tremendous benefits that could flow from the use of personal health data. Put differently, the assertion that “excessive” data collection may create privacy harms should be weighed against the possibility that the same collection may also lead to socially valuable goods and services that produce positive externalities.

In any case, data property rights would do little to limit these potential negative externalities. Consumers and patients are already free to agree to terms that allow or prevent their data from being resold to insurers. It is not clear how data property rights would alter the picture.

Proponents of data property rights often claim they should be associated with some form of collective bargaining. The idea is that consumers might otherwise fail to receive their “fair share” of genetic-testing firms’ revenue. But what critics portray as asymmetric bargaining power might simply be the market signaling that genetic-testing services are in high demand, with room for competitors to enter the market. Shifting rents from genetic-testing services to consumers would undermine this valuable price signal and, ultimately, diminish the quality of the services.

Perhaps more importantly, to the extent that they limit the supply of genetic information—for example, because firms are forced to pay higher prices for data and thus acquire less of it—data property rights might hinder the emergence of new treatments. If genetic data is a key input to develop personalized medicines, adopting policies that, in effect, ration the supply of that data is likely misguided.

Even if policymakers do not directly put their thumb on the scale, data property rights could still harm pharmaceutical innovation. If existing privacy regulations are any guide—notably, the previously mentioned GDPR and CCPA, as well as the federal Health Insurance Portability and Accountability Act (HIPAA)—such rights might increase red tape for pharmaceutical innovators. Privacy regulations routinely limit firms’ ability to put collected data to new and previously unforeseen uses. They also limit parties’ contractual freedom when it comes to gathering consumers’ consent.

At the margin, data property rights would make it more costly for firms to amass socially valuable datasets. This would effectively move the personalized medicine space further away from a world of permissionless innovation, thus slowing down medical progress.

In short, there is little reason to believe health-care data is misallocated. Proposals to reallocate rights to such data based on idiosyncratic distributional preferences threaten to stifle innovation in the name of privacy harms that remain mostly hypothetical.

Data Property Rights and COVID-19

The trade-off between users’ privacy and the efficient use of data also has important implications for the fight against COVID-19. Since the beginning of the pandemic, several promising initiatives have been thwarted by privacy regulations and concerns about the use of personal data. This has potentially prevented policymakers, firms, and consumers from putting information to its optimal social use. High-profile examples have included contact-tracing apps and “green pass” initiatives.

Each of these cases may involve genuine privacy risks. But to the extent that they do, those risks must be balanced against the potential benefits to society. If privacy concerns prevent us from deploying contact tracing or green passes at scale, we should question whether the privacy benefits are worth the cost. The same is true for rules that prohibit amassing more data than is strictly necessary, as is required by data-minimization obligations included in regulations such as the GDPR.

If our initial question were instead whether the benefits of a given data-collection scheme outweighed its potential costs to privacy, incentives could be set such that competition between firms would reduce the amount of data collected—at least, where minimized data collection is, indeed, valuable to users. Yet these considerations are almost completely absent in the COVID-19-related privacy debates, as they are in the broader privacy debate. Against this backdrop, the case for personal data property rights is dubious.

Conclusion

The key question is whether policymakers should make it easier or harder for firms and public bodies to amass large sets of personal data. This requires asking whether personal data is currently under- or over-provided, and whether the additional excludability that would be created by data property rights would offset their detrimental effect on innovation.

Swaths of personal data currently lie untapped. With the proper incentive mechanisms in place, this idle data could be mobilized to develop personalized medicines and to fight the COVID-19 outbreak, among many other valuable uses. By making such data more onerous to acquire, property rights in personal data might stifle the assembly of novel datasets that could be used to build innovative products and services.

On the other hand, when dealing with diffuse and complementary data sources, transaction costs become a real issue and the initial allocation of rights can matter a great deal. In such cases, unlike the genetic-testing kits example, it is not certain that users will be able to bargain with firms, especially where their personal information is exchanged by third parties.

If optimal reallocation is unlikely, should property rights go to the person covered by the data or to the collectors (potentially subject to user opt-outs)? Proponents of data property rights assume the first option is superior. But if the goal is to produce groundbreaking new goods and services, granting rights to data collectors might be a superior solution. Ultimately, this is an empirical question.

As Richard Epstein puts it, the goal is to “minimize the sum of errors that arise from expropriation and undercompensation, where the two are inversely related.” Rather than approach the problem with the preconceived notion that initial rights should go to users, policymakers should ensure that data flows to those economic agents who can best extract information and knowledge from it.

As things stand, there is little to suggest that the trade-offs favor creating data property rights. This is not an argument for requisitioning personal information or preventing parties from transferring data as they see fit, but simply for letting markets function, unfettered by misguided public policies.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Dirk Auer (Senior Researcher, Liege Competition & Innovation Institute; Senior Fellow, ICLE).]

Privacy absolutism is the belief that protecting citizens’ privacy supersedes all other policy goals, especially economic ones. This is a mistake. Privacy is one value among many, not an end in itself. Unfortunately, the absolutist worldview has filtered into policymaking and is beginning to have very real consequences. Readers need look no further than contact tracing applications and the fight against Covid-19.

Covid-19 has presented the world with a privacy conundrum worthy of the big screen. In fact, it’s a plotline we’ve seen before. Moviegoers will recall that, in the wildly popular film “The Dark Knight”, Batman has to decide between preserving the privacy of Gotham’s citizens and resorting to mass surveillance in order to defeat the Joker. Ultimately, the caped crusader begrudgingly chooses the latter. Before the Covid-19 outbreak, this might have seemed like an unrealistic plot twist. Fast forward a couple of months, and it neatly illustrates the difficult decision that most western societies urgently need to make as they consider the use of contact tracing apps to fight Covid-19.

Contact tracing is often cited as one of the most promising tools to safely reopen Covid-19-hit economies. Unfortunately, its adoption has been severely undermined by a barrage of overblown privacy fears.

Take the contact tracing API and app co-developed by Apple and Google. While these firms’ efforts to rapidly introduce contact tracing tools are laudable, it is hard to shake the feeling that they have been holding back slightly.

In an overt attempt to protect users’ privacy, Apple and Google’s joint offering does not collect any location data (a move that has irked some states). Similarly, both firms have repeatedly stressed that users will have to opt-in to their contact tracing solution (as opposed to the API functioning by default). And, of course, all the data will be anonymous – even for healthcare authorities. 

This is a missed opportunity. Google and Apple’s networks include billions of devices. That puts them in a unique position to rapidly achieve the scale required to successfully enable the tracing of Covid-19 infections. Contact tracing applications need to reach a critical mass of users to be effective. For instance, some experts have argued that an adoption rate of at least 60% is necessary. Unfortunately, existing apps – notably in Singapore, Australia, Norway and Iceland – have struggled to get anywhere near this number. Making Google and Apple’s services opt-out rather than opt-in could go a long way towards reversing this trend. Businesses could also boost these numbers by making the apps mandatory for their employees and customers.
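To see why critical mass matters so much, consider a back-of-the-envelope sketch (a simplified illustration, not the model used in the studies cited above): if adoption is roughly independent across people, a contact can be traced only when both parties run the app, so the share of traceable contacts scales with the square of the adoption rate.

```python
# Back-of-the-envelope sketch: assuming app adoption is independent across
# people, a contact is traceable only if *both* parties run the app, so the
# share of traceable contacts scales roughly with adoption ** 2.

def traceable_contact_share(adoption_rate: float) -> float:
    """Fraction of random contacts in which both parties have the app."""
    return adoption_rate ** 2

for adoption in (0.2, 0.4, 0.6, 0.8):
    share = traceable_contact_share(adoption)
    print(f"adoption {adoption:.0%} -> ~{share:.0%} of contacts traceable")
```

Under this simplified assumption, even the 60% adoption threshold mentioned above would still leave roughly two-thirds of contacts untraced, which is part of why default-on deployment looks so attractive from a public-health standpoint.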

However, it is hard to blame Google or Apple for not pushing the envelope a little bit further. For the better part of a decade, they and other firms have repeatedly faced specious accusations of “surveillance capitalism”. This has notably resulted in heavy-handed regulation (including the GDPR in the EU and the CCPA in California), as well as significant fines and settlements.

Those chickens have now come home to roost. The firms that are probably best-placed to implement an effective contact tracing solution simply cannot afford the privacy-related risks. This includes the risk associated with violating existing privacy law, but also potential reputational consequences. 

Matters have also been exacerbated by the overly cautious stance of many western governments, as well as their citizens: 

  • The European Data Protection Board cautioned governments and private sector actors to anonymize location data collected via contact tracing apps. The European Parliament made similar pronouncements.
  • A group of Democratic Senators pushed back against Apple and Google’s contact tracing solution, notably due to privacy considerations.
  • And public support for contact tracing is also critically low. Surveys in the US show that contact tracing is significantly less popular than more restrictive policies, such as business and school closures. Similarly, polls in the UK suggest that between 52% and 62% of Britons would consider using contact tracing applications.
  • Belgium’s initial plans for a contact tracing application were struck down by its data protection authority on the grounds that they did not comply with the GDPR.
  • Finally, across the globe, there has been pushback against so-called “centralized” tracing apps, notably due to privacy fears.

In short, the West’s insistence on maximizing privacy protection is holding back its efforts to combat the joint threats posed by Covid-19 and the unfolding economic recession. 

But contrary to the mass surveillance portrayed in the Dark Knight, the privacy risks entailed by contact tracing are for the most part negligible. State surveillance is hardly a prospect in western democracies. And the risk of data breaches is no greater here than with many other apps and services that we all use daily. To wit, password, email, and identity theft are still, by far, the most common targets for cyberattackers. Put differently, cybercriminals appear to be more interested in stealing assets that can be readily monetized, rather than location data that is almost worthless. This suggests that contact tracing applications, whether centralized or not, are unlikely to be an important target for cyberattackers.

The meagre risks entailed by contact tracing – regardless of how it is ultimately implemented – are thus a tiny price to pay if they enable some return to normalcy. At the time of writing, at least 5.8 million human beings have been infected with Covid-19, causing an estimated 358,000 deaths worldwide. Both Covid-19 and the measures intended to combat it have resulted in a collapse of the global economy – what the IMF has called “the worst economic downturn since the Great Depression”. Freedoms that the West had taken for granted have suddenly evaporated: the freedom to work, to travel, to see loved ones, etc. Can anyone honestly claim that it is not worth temporarily sacrificing some privacy to partially regain these liberties?

More generally, it is not just contact tracing applications and the fight against Covid-19 that have suffered because of excessive privacy fears. The European GDPR offers another salient example. Whatever one thinks about the merits of privacy regulation, it is becoming increasingly clear that the EU overstepped the mark. For instance, an early empirical study found that the entry into force of the GDPR markedly decreased venture capital investments in Europe. Michal Gal aptly summarizes the implications of this emerging body of literature:

The price of data protection through the GDPR is much higher than previously recognized. The GDPR creates two main harmful effects on competition and innovation: it limits competition in data markets, creating more concentrated market structures and entrenching the market power of those who are already strong; and it limits data sharing between different data collectors, thereby preventing the realization of some data synergies which may lead to better data-based knowledge. […] The effects on competition and innovation identified may justify a reevaluation of the balance reached to ensure that overall welfare is increased. 

In short, just like the Dark Knight, policymakers, firms and citizens around the world need to think carefully about the tradeoff that exists between protecting privacy and other objectives, such as saving lives, promoting competition, and increasing innovation. As things stand, however, it seems that many have veered too far on the privacy end of the scale.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Ian Adams (Executive Director, International Center for Law & Economics).]

The COVID-19 crisis has recast virtually every contemporary policy debate in the context of public health, and digital privacy is no exception. Conversations that once focused on the value and manner of tracking to enable behavioral advertising have shifted. Congress, on the heels of years of false starts and failed efforts to introduce nationwide standards, is now lurching toward framing privacy policy through the lens of proposed responses to the virus.

To that end, two legislative vehicles, one from Senate Republicans and another from a bicameral group of Democrats, have been offered specifically in response to the hitherto unprecedented occasion that society has to embrace near-universally available technologies to identify, track, and remediate the virus. The bills present different visions of what it means to protect and promote the privacy of Americans in the COVID-19 era, both of which are flawed (though, to differing degrees) as a matter of principle and practice. 

Failure as a matter of principle

Privacy has always been one value among many, not an end in itself, but a consideration to be weighed in the pursuit of life’s many varied activities (a point explored in greater depth here). But while the value of privacy in the context of exigent circumstances has traditionally waned, it has typically done so to make room for otherwise intrusive state action.

The COVID-19 crisis presents a different scenario. Now, private firms, not the state, are best positioned to undertake the steps necessary to blunt the virus’ impact and, as good fortune would have it, substantial room already exists within U.S. law for firms to deploy software that would empower people to remediate the virus. Indeed, existing U.S. law affords people the ability to weigh their privacy preferences directly with their level of public health concern.

Strangely, in this context, both political parties have seen fit to advance restrictive privacy visions specific to the COVID-19 crisis that would substantially limit the ability of individuals to use tools to make themselves, and their communities, safer. In other words, both parties have offered proposals that make it harder to achieve the public health outcomes they claim to be seeking at precisely the moment that governments (federal, state, and local) are taking unprecedented (and liberty restricting) steps to achieve exactly those outcomes.

Failure as a matter of practice

The dueling legislative proposals are structured in parallel (a complete breakdown is available here). Each includes provisions concerning the entities and data to be covered, the obligations placed upon entities interacting with covered data, and the scope, extent, and power of enforcement measures. While the scope of the entities and data covered varies significantly, with the Democratic proposal encumbering far more of each, both bills require “opt-in” consent for access to and use of covered data, along with a mechanism to revoke that consent.

The bipartisan move to affirmative consent represents a significant change in the Congressional privacy conversation. Hitherto, sensitive data have elicited calls for context-dependent levels of privacy, but no previous GOP legislative proposal had suggested the use of an “opt-in” mechanism. The timing of this novel bipartisanship could not be worse. Using the FTC’s 2012 privacy report as a model, in the context of the COVID-19 response the privacy benefits of raising the bar for adopting virus-tracking tools are likely substantially outweighed by the benefits, not just to the covered entity but to society as a whole, of leaving firms relatively freer to experiment with COVID-19-tracking technologies.

There is another way forward. Instead of introducing design restraints and thereby limiting the practical manner in which firms go about developing tools to address COVID-19, Congress should be moving to articulate discrete harms related to unintended or coerced uses of information that it would like to prevent. For instance: defining what would constitute a deceptive use of COVID-related health information, or clarifying what fraudulent inducement should involve for purposes of downloading a contact tracing app. At least with particularized harms in mind, policymakers and the public will more readily be able to assess and balance the value of what is gained in terms of privacy versus what is lost in terms of public health capabilities.

Congress, and the broader public policy debate around privacy, has come to a strange place. The privacy rights that lawmakers are seeking to create, utterly independent of potential privacy harms, pose a substantial new regulatory burden to firms attempting to achieve the very public health outcomes for which society is clamoring. In the process, arguably far more significant impingements upon individual liberty, in the form of largely indiscriminate restrictions on movement, association, and commerce, become necessary to achieve what contact tracing promises. That’s not just getting privacy wrong – that’s getting privacy all wrong.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Kristian Stout (Associate Director, International Center for Law & Economics).]

The public policy community’s infatuation with digital privacy has grown by leaps and bounds since the enactment of GDPR and the CCPA, but COVID-19 may leave the most enduring mark on the actual direction that privacy policy takes. As the pandemic and associated lockdowns first began, there were interesting discussions cropping up about the inevitable conflict between strong privacy fundamentalism and the pragmatic steps necessary to adequately trace the spread of infection. 

Axiomatic of this controversy is the Apple/Google contact tracing system, software developed for smartphones to assist with the identification of individuals and populations that have likely been in contact with the virus. The debate sparked by the Apple/Google proposal highlights what we miss when we treat “privacy” (however defined) as an end in itself, an end that must necessarily  trump other concerns. 

The Apple/Google contact tracing efforts

Apple/Google are doing yeoman’s work attempting to produce a useful contact tracing API given the headwinds of privacy advocacy they face. Apple’s webpage describing its new contact tracing system is a testament to the extent to which strong privacy protections are central to its efforts. Indeed, those privacy protections are in the very name of the service: the “Privacy-Preserving Contact Tracing” program. But, vitally, the utility of the Apple/Google API is ultimately a function of its efficacy as a tracing tool, not of how well it protects privacy.

Apple/Google — despite the complaints of some states — are rolling out their Covid-19-tracking services with notable limitations. Most prominently, the APIs will not allow collection of location data, and will only function when users explicitly opt in. This last point is important because there is evidence that opt-in requirements, by their nature, tend to reduce the flow of information in a system, and when we are considering tracing solutions to an ongoing pandemic, surely less information is not optimal. Further, all of the data collected through the API will be anonymized, preventing even healthcare authorities from identifying particular infected individuals.
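To make the privacy design concrete, here is a minimal sketch of how rotating, pseudonymous identifiers can work in principle. This is a simplified illustration, not the actual Apple/Google Exposure Notification cryptography: a device derives short-lived Bluetooth identifiers from a random daily key, so passive observers cannot link broadcasts to a person, while a phone that later downloads a diagnosed user’s daily key can recompute those identifiers locally and check for matches.

```python
# Minimal sketch of rotating, pseudonymous proximity identifiers.
# Illustrative only -- not the actual Apple/Google Exposure Notification scheme.
import hmac
import hashlib
import os

def new_daily_key() -> bytes:
    """A random key generated on-device each day, never tied to identity."""
    return os.urandom(16)

def rolling_identifier(daily_key: bytes, interval: int) -> bytes:
    """Derive a short-lived Bluetooth identifier for one ~10-minute interval."""
    return hmac.new(daily_key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

# Device A broadcasts identifiers that change every interval throughout the day.
daily_key_a = new_daily_key()

# Device B logs identifiers it observes nearby (here, the one from interval 37).
seen_nearby = {rolling_identifier(daily_key_a, 37)}

# If A is later diagnosed and publishes daily_key_a, B recomputes A's
# identifiers locally and checks for overlap -- without ever learning who A is.
recomputed = {rolling_identifier(daily_key_a, i) for i in range(144)}
print("possible exposure:", bool(seen_nearby & recomputed))  # True
```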

These restrictions prevent the tool from being as effective as it could be, but it’s not clear how Apple/Google could do any better given the political climate. For years, the Big Tech firms have been villainized by privacy advocates who accuse them of spying on kids and cavalierly disregarding consumer privacy as they treat individuals’ data as just another business input. The problem with this approach is that, in the midst of a generational crisis, our best tools are being excluded from the fight. Which raises the question: perhaps we have privacy all wrong?

Privacy is one value among many

The U.S. constitutional order explicitly protects our privacy as against state intrusion in order to guarantee, among other things, fair process and equal access to justice. But this strong presumption against state intrusion—far from establishing a fundamental or absolute right to privacy—only accounts for part of the privacy story. 

The Constitution’s limit is a recognition of the fact that we humans are highly social creatures and that privacy is one value among many. Properly conceived, privacy protections are themselves valuable only insofar as they protect other things we value. Jane Bambauer explored some of this in an earlier post where she characterized privacy as, at best, an “instrumental right” — that is, a tool used to promote other desirable social goals such as “fairness, safety, and autonomy.”

Following from Jane’s insight, privacy — as an instrumental good — is something that can have both positive and negative externalities, and needs to be enlarged or attenuated as its ability to serve instrumental ends changes in different contexts. 

According to Jane:

There is a moral imperative to ignore even express lack of consent when withholding important information that puts others in danger. Just as many states affirmatively require doctors, therapists, teachers, and other fiduciaries to report certain risks even at the expense of their client’s and ward’s privacy …  this same logic applies at scale to the collection and analysis of data during a pandemic.

Indeed, dealing with externalities is one of the most common and powerful justifications for regulation, and an extreme form of “privacy libertarianism” —in the context of a pandemic — is likely to be, on net, harmful to society.

Which brings us back to the efforts of Apple/Google. Even if those firms wanted to risk the ire of privacy absolutists, it’s not clear that they could do so without incurring tremendous regulatory risk, uncertainty, and a popular backlash. As statutory matters, the CCPA and the GDPR chill experimentation in the face of potentially crippling fines, while the FTC Act’s Section 5 prohibition on “unfair or deceptive” practices is open to interpretations that could result in existentially damaging outcomes. Further, some polling suggests that the public appetite for contact tracing is not particularly high – though, as is often the case, such pro-privacy poll outcomes rarely give appropriate weight to the tradeoffs involved.

As a general matter, it’s important to think about the value of individual privacy, and how best to optimally protect it. But privacy does not stand above all other values in all contexts. It is entirely reasonable to conclude that, in a time of emergency, if private firms can devise more effective solutions for mitigating the crisis, they should have more latitude to experiment. Knee-jerk preferences for an amorphous “right of privacy” should not be used to block those experiments.

Much as with the Cosmic Turtle, it’s tradeoffs all the way down. Most of the U.S. is in lockdown, and while we vigorously protect our privacy, we risk frustrating the creation of tools that could put a light at the end of the tunnel. We are, in effect, trading liberty and economic self-determination for privacy.

Once the worst of the Covid-19 crisis has passed — hastened possibly by the use of contact tracing programs — we can debate the proper use of private data in exigent circumstances. For the immediate future, we should instead be encouraging firms like Apple/Google to experiment with better ways to control the pandemic. 

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Christine S. Wilson (Commissioner of the U.S. Federal Trade Commission).[1] The views expressed here are the author’s and do not necessarily reflect those of the Federal Trade Commission or any other Commissioner.]  

I type these words while subject to a stay-at-home order issued by West Virginia Governor James C. Justice II. “To preserve public health and safety, and to ensure the healthcare system in West Virginia is capable of serving all citizens in need,” I am permitted to leave my home only for a limited and precisely enumerated set of reasons. Billions of citizens around the globe are now operating under similar shelter-in-place directives as governments grapple with how to stem the tide of infection, illness and death inflicted by the global Covid-19 pandemic. Indeed, the first response of many governments has been to impose severe limitations on physical movement to contain the spread of the novel coronavirus. The second response contemplated by many, and the one on which this blog post focuses, involves the extensive collection and analysis of data in connection with people’s movements and health. Some governments are using that data to conduct sophisticated contact tracing, while others are using the power of the state to enforce orders for quarantines and against gatherings.

The desire to use modern technology on a broad scale for the sake of public safety is not unique to this moment. Technology is intended to improve the quality of our lives, in part by enabling us to help ourselves and one another. For example, cell towers broadcast wireless emergency alerts to all mobile devices in the area to warn us of extreme weather and other threats to safety in our vicinity. One well-known type of broadcast is the Amber Alert, which enables community members to assist in recovering an abducted child by providing descriptions of the abductor, the abductee and the abductor’s vehicle. Citizens who spot individuals and vehicles that meet these descriptions can then provide leads to law enforcement authorities. A private nonprofit organization, the National Center for Missing and Exploited Children, coordinates with state and local public safety officials to send out Amber Alerts through privately owned wireless carriers.

The robust civil society and free market in the U.S. make partnerships between the private sector and government agencies commonplace. But some of these arrangements involve a much more extensive sharing of Americans’ personal information with law enforcement than the emergency alert system does.

For example, Amazon’s home security product Ring advertises itself not only as a way to see when a package has been left at your door, but also as a way to make communities safer by turning over video footage to local police departments. In 2018, the company’s pilot program in Newark, New Jersey, donated more than 500 devices to homeowners to install at their homes in two neighborhoods, with a big caveat. Ring recipients were encouraged to share video with police. According to Ring, home burglaries in those neighborhoods fell by more than 50% from April through July 2018 relative to the same time period a year earlier.

Yet members of Congress and privacy experts have raised concerns about these partnerships, which now number in the hundreds. After receiving Amazon’s response to his inquiry, Senator Edward Markey highlighted Ring’s failure to prevent police from sharing video footage with third parties and from keeping the video permanently, and Ring’s lack of precautions to ensure that users collect footage only of adults and of users’ own property. The House of Representatives Subcommittee on Economic and Consumer Policy continues to investigate Ring’s police partnerships and data policies. The Electronic Frontier Foundation has called Ring “a perfect storm of privacy threats,” while the UK surveillance camera commissioner has warned against “a very real power to understand, to surveil you in a way you’ve never been surveilled before.”

Ring demonstrates clearly that it is not new for potential breaches of privacy to be encouraged in the name of public safety; police departments urge citizens to use Ring and share the videos with police to fight crime. But emerging developments indicate that, in the fight against Covid-19, we can expect to see more and more private companies placed in the difficult position of becoming complicit in government overreach.

At least mobile phone users can opt out of receiving Amber Alerts, and residents can refuse to put Ring surveillance systems on their property. The Covid-19 pandemic has made some other technological intrusions effectively impossible to refuse. For example, online proctors who monitor students over webcams to ensure they do not cheat on exams taken at home were once something that students could choose to accept if they did not want to take an exam where and when they could be proctored face to face. With public schools and universities across the U.S. closed for the rest of the semester, students who refuse to give private online proctors access to their webcams – and, consequently, the ability to view their surroundings – cannot take exams at all.

Existing technology and data practices already have made the Federal Trade Commission sensitive to potential consumer privacy and data security abuses. For decades, this independent, bipartisan agency has been enforcing companies’ privacy policies through its authority to police unfair and deceptive trade practices. It brought its first privacy and data security cases nearly 20 years ago, while I was Chief of Staff to then-Chairman Timothy J. Muris. The FTC took on Eli Lilly for disclosing the e-mail addresses of 669 subscribers to its Prozac reminder service – many of whom were government officials, and at a time of greater stigma for mental health issues – and Microsoft for (among other things) falsely claiming that its Passport website sign-in service did not collect any personally identifiable information other than that described in its privacy policy.

The privacy and data security practices of healthcare and software companies are likely to impact billions of people during the current coronavirus pandemic. The U.S. already has many laws on the books that are relevant to practices in these areas. One notable example is the Health Insurance Portability and Accountability Act, which set national standards for the protection of individually identifiable health information by health plans, health care clearinghouses and health care providers who accept non-cash payments. While the FTC does not enforce HIPAA, it does enforce the Health Breach Notification Rule, as well as the provisions in the FTC Act used to challenge the privacy missteps of Eli Lilly and many other companies.

But technological developments have created gaps in HIPAA enforcement. For example, HIPAA applies to doctors’ offices, hospitals and insurance companies, but it may not apply to wearables, smartphone apps or websites. Yet sensitive medical information is now commonly stored in places other than health care practitioners’ offices.  Your phone and watch now collect information about your blood sugar, exercise habits, fertility and heart health. 

Observers have pointed to these emerging gaps in coverage as evidence of the growing need for federal privacy legislation. I, too, have called on the U.S. Congress to enact comprehensive federal privacy legislation – not only to address these emerging gaps, but for two other reasons.  First, consumers need clarity regarding the types of data collected from them, and how those data are used and shared. I believe consumers can make informed decisions about which goods and services to patronize when they have the information they need to evaluate the costs and benefits of using those goods. Second, businesses need predictability and certainty regarding the rules of the road, given the emerging patchwork of regimes both at home and abroad.

Rules of the road regarding privacy practices will prove particularly instructive during this global pandemic, as governments lean on the private sector for data on the grounds that the collection and analysis of data can help avert (or at least diminish to some extent) a public health catastrophe. With legal lines in place, companies would be better equipped to determine when they are being asked to cross the line for the public good, and whether they should require a subpoena or inform customers before turning over data. It is regrettable that Congress has been unable to enact federal privacy legislation to guide this discussion.

Understandably, Congress does not have privacy at the top of its agenda at the moment, as the U.S. faces a public health crisis. As I write, more than 579,000 Americans have been diagnosed with Covid-19, and more than 22,000 have perished. Sadly, those numbers will only increase. And the U.S. is not alone in confronting this crisis: governments globally have confronted more than 1.77 million cases and more than 111,000 deaths. For a short time, health and safety issues may take precedence over privacy protections. But some of the initiatives to combat the coronavirus pandemic are worrisome. We are learning more every day about how governments are responding in a rapidly developing situation; what I describe in the next section constitutes merely the tip of the iceberg. These initiatives are worth highlighting here, as are potential safeguards for privacy and civil liberties that societies around the world would be wise to embrace.

Some observers view public/private partnerships based on an extensive use of technology and data as key to fighting the spread of Covid-19. For example, Professor Jane Bambauer calls for contact tracing and alerts “to be done in an automated way with the help of mobile service providers’ geolocation data.” She argues that privacy is merely “an instrumental right” that “is meant to achieve certain social goals in fairness, safety and autonomy. It is not an end in itself.” Given the “more vital” interests in health and the liberty to leave one’s house, Bambauer sees “a moral imperative” for the private sector “to ignore even express lack of consent” by an individual to the sharing of information about him.

This proposition troubles me because the extensive data sharing that has been proposed in some countries, and that is already occurring in many others, is not mundane. In the name of advertising and product improvements, private companies have been hoovering up personal data for years. What this pandemic lays bare, though, is that while this trove of information was collected under the guise of cataloguing your coffee preferences and transportation habits, it can be reprocessed in an instant to restrict your movements, impinge on your freedom of association, and silence your freedom of speech. Bambauer is calling for detailed information about an individual’s every movement to be shared with the government when, in the United States under normal circumstances, a warrant would be required to access this information.

Indeed, with our mobile devices acting as the “invisible policeman” described by Justice William O. Douglas in Berger v. New York, we may face “a bald invasion of privacy, far worse than the general warrants prohibited by the Fourth Amendment.” Backward-looking searches and data hoards pose new questions of what constitutes a “reasonable” search. The stakes are high – both here and abroad, citizens are being asked to allow warrantless searches by the government on an astronomical scale, all in the name of public health.  

Abroad

The first country to confront the coronavirus was China. The World Health Organization has touted the measures taken by China as “the only measures that are currently proven to interrupt or minimize transmission chains in humans.” Among these measures are the “rigorous tracking and quarantine of close contacts,” as well as “the use of big data and artificial intelligence (AI) to strengthen contact tracing and the management of priority populations.” An ambassador for China has said his government “optimized the protocol of case discovery and management in multiple ways like backtracking the cell phone positioning.” Much as the Communist Party’s control over China enabled it to suppress early reports of a novel coronavirus, this regime vigorously ensured its people’s compliance with the “stark” containment measures described by the World Health Organization.

Before the Covid-19 pandemic, Hong Kong already had been testing the use of “smart wristbands” to track the movements of prisoners. The Special Administrative Region now monitors people quarantined inside their homes by requiring them to wear wristbands that send information to the quarantined individuals’ smartphones and alert the Department of Health and Police if people leave their homes, break their wristbands or disconnect them from their smartphones. When first announced in early February, the wristbands were required only for people who had been to Wuhan in the past 14 days, but the program rapidly expanded to encompass every person entering Hong Kong. The government denied any privacy concerns about the electronic wristbands, saying the Privacy Commissioner for Personal Data had been consulted about the technology and agreed it could be used to ensure that quarantined individuals remain at home.

Elsewhere in Asia, Taiwan’s Chunghwa Telecom has developed a system that the local CDC calls an “electronic fence.” Specifically, the government obtains the SIM card identifiers for the mobile devices of quarantined individuals and passes those identifiers to mobile network operators, which use phone signals to their cell towers to alert public health and law enforcement agencies when the phone of a quarantined individual leaves a certain geographic range. In response to privacy concerns, the National Communications Commission said the system was authorized by special laws to prevent the coronavirus, and that it “does not violate personal data or privacy protection.” In Singapore, travelers and others issued Stay-Home Notices to remain in their residences 24 hours a day for 14 days must respond within an hour if contacted by government agencies by phone, text message or WhatsApp. And to assist with contact tracing, the government has encouraged everyone in the country to download TraceTogether, an app that uses Bluetooth to identify other nearby phones with the app and tracks when phones are in close proximity.
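For illustration, the “electronic fence” logic described above might look something like the following sketch. The identifiers, cell list, and alerting rule here are hypothetical, not Chunghwa Telecom’s actual implementation: the network keeps a list of cell towers near the quarantined address and raises an alert when a phone’s serving cell falls outside that list.

```python
# Hypothetical sketch of an "electronic fence" check; the cell IDs and
# alerting rule are illustrative, not Chunghwa Telecom's actual system.

ALLOWED_CELLS = {"cell-014", "cell-015", "cell-022"}  # towers near the quarantine address

def notify_agencies(sim_id: str, serving_cell: str) -> None:
    """Stand-in for paging public-health and law-enforcement contacts."""
    print(f"ALERT: {sim_id} last seen on {serving_cell}, outside the quarantine zone")

def check_quarantine(sim_id: str, serving_cell: str) -> None:
    """Alert the authorities if the phone's serving cell leaves the allowed range."""
    if serving_cell not in ALLOWED_CELLS:
        notify_agencies(sim_id, serving_cell)

check_quarantine("sim-123", "cell-022")  # inside the fence: no alert
check_quarantine("sim-123", "cell-940")  # outside the fence: triggers an alert
```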

Israel’s Ministry of Health has launched an app for mobile devices called HaMagen (the shield) to prevent the spread of coronavirus by identifying contacts between diagnosed patients and people who came into contact with them in the 14 days prior to diagnosis. In March, the prime minister’s cabinet initially bypassed the legislative body to approve emergency regulations for obtaining without a warrant the cellphone location data and additional personal information of those diagnosed with or suspected of coronavirus infection. The government will send text messages to people who came into contact with potentially infected individuals, and will monitor the potentially infected person’s compliance with quarantine. The Ministry of Health will not hold this information; instead, it can make data requests to the police and Shin Bet, the Israel Security Agency. The police will enforce quarantine measures and Shin Bet will track down those who came into contact with the potentially infected.

Multiple Eastern European nations with constitutional protections for citizens’ rights of movement and privacy have superseded them by declaring a state of emergency. For example, in Hungary the declaration of a “state of danger” has enabled Prime Minister Viktor Orbán’s government to engage in “extraordinary emergency measures” without parliamentary consent.  His ministers have cited the possibility that coronavirus will prevent a gathering of a sufficient quorum of members of Parliament as making it necessary for the government to be able to act in the absence of legislative approval.

Member States of the European Union must protect personal data pursuant to the General Data Protection Regulation, and communications data, such as mobile location, pursuant to the ePrivacy Directive. The chair of the European Data Protection Board has observed that the ePrivacy Directive enables Member States to introduce legislative measures to safeguard public security. But if those measures allow for the processing of non-anonymized location data from mobile devices, individuals must have safeguards such as a right to a judicial remedy. “Invasive measures, such as the ‘tracking’ of individuals (i.e. processing of historical non-anonymized location data) could be considered proportional under exceptional circumstances and depending on the concrete modalities of the processing.” The EDPB has announced it will prioritize guidance on these issues.

EU Member States are already implementing such public security measures. For example, the government of Poland has by statute required everyone under a quarantine order due to suspected infection to download the “Home Quarantine” smartphone app. Those who do not install and use the app are subject to a fine. The app verifies users’ compliance with quarantine through selfies and GPS data. Users’ personal data will be administered by the Minister of Digitization, who has appointed a data protection officer. Each user’s identification, name, telephone number, quarantine location and quarantine end date can be shared with police and other government agencies. After two weeks, if the user does not report symptoms of Covid-19, the account will be deactivated — but the data will be stored for six years. The Ministry of Digitization claims that it must store the data for six years in case users pursue claims against the government. However, local privacy expert and Panoptykon Foundation cofounder Katarzyna Szymielewicz has questioned this rationale.

Even other countries that are part of the Anglo-American legal tradition are ramping up their use of data and working with the private sector to do so. The UK’s National Health Service is developing a data store that will include online/call center data from NHS Digital and Covid-19 test result data from the public health agency. While the NHS is working with private partner organizations and companies including Microsoft, Palantir Technologies, Amazon Web Services and Google, it has promised to keep all the data under its control, and to require those partners to destroy or return the data “once the public health emergency situation has ended.” The NHS also has committed to meet the requirements of data protection legislation by ensuring that individuals cannot be re-identified from the data in the data store.

Notably, each of the companies partnering with the NHS at one time or another has been subjected to scrutiny for its privacy practices. Some observers have noted that tech companies, which have been roundly criticized for a variety of reasons in recent years, may seek to use this pandemic for “reputation laundering.” As one observer cautioned: “Reputations matter, and there’s no reason the government or citizens should cast bad reputations aside when choosing who to work with or what to share” during this public health crisis.

At home

In the U.S., the federal government last enforced large-scale isolation and quarantine measures during the influenza (“Spanish Flu”) pandemic a century ago. But the Centers for Disease Control and Prevention track diseases on a daily basis by receiving case notifications from every state. The states mandate that healthcare providers and laboratories report certain diseases to the local public health authorities using personal identifiers. In other words, if you test positive for coronavirus, the government will know. Every state has laws authorizing quarantine and isolation, usually through the state’s health authority, while the CDC has authority through the federal Public Health Service Act and a series of presidential executive orders to exercise quarantine and isolation powers for specific diseases, including severe acute respiratory syndromes (a category into which the novel coronavirus falls).

Now local governments are issuing orders that empower law enforcement to fine and jail Americans for failing to practice social distancing. State and local governments have begun arresting and charging people who violate orders against congregating in groups. Rhode Island is requiring every non-resident who enters the state to be quarantined for two weeks, with police checks at the state’s transportation hubs and borders.

How governments discover violations of quarantine and social distancing orders will raise privacy concerns. Police have long been able to enforce based on direct observation of violations. But if law enforcement authorities identify violations of such orders based on data collection rather than direct observation, the Fourth Amendment may be implicated. In Jones and Carpenter, the Supreme Court has limited the warrantless tracking of Americans through GPS devices placed on their cars and through cellphone data. But building on the longstanding practice of contact tracing in fighting infectious diseases such as tuberculosis, GPS data has proven helpful in fighting the spread of Covid-19. This same data, though, also could be used to piece together evidence of violations of stay-at-home orders. As Chief Justice John Roberts wrote in Carpenter, “With access to [cell-site location information], the government can now travel back in time to retrace a person’s whereabouts… Whoever the suspect turns out to be, he has effectively been tailed every moment of every day for five years.”

The Fourth Amendment protects American citizens from government action, but the “reasonable expectation of privacy” test applied in Fourth Amendment cases connects the arenas of government action and commercial data collection. As Professor Paul Ohm of the Georgetown University Law Center notes, “the dramatic expansion of technologically-fueled corporate surveillance of our private lives automatically expands police surveillance too, thanks to the way the Supreme Court has construed the reasonable expectation of privacy test and the third-party doctrine.”

For example, the COVID-19 Mobility Data Network – infectious disease epidemiologists working with Facebook, Camber Systems and Cubiq – uses mobile device data to inform state and local governments about whether social distancing orders are effective. The tech companies give the researchers aggregated data sets; the researchers give daily situation reports to departments of health, but say they do not share the underlying data sets with governments. The researchers have justified this model based on users of the private companies’ apps having consented to the collection and sharing of data.
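To illustrate what handing over “aggregated data sets” rather than raw data can mean in practice, here is a minimal, hypothetical sketch (not the Mobility Data Network’s actual pipeline): device-level location pings are collapsed into per-region, per-day counts, and any cell covering too few devices is suppressed before a report is shared.

```python
# Hypothetical sketch of aggregating mobility data before sharing it;
# not the COVID-19 Mobility Data Network's actual pipeline.
from collections import Counter

MIN_COUNT = 50  # suppress cells with few devices so individuals cannot be singled out

def aggregate(pings):
    """Collapse (device_id, region, day) pings into per-region, per-day device counts."""
    distinct = {(region, day, device) for device, region, day in pings}
    counts = Counter((region, day) for region, day, _device in distinct)
    return {cell: n for cell, n in counts.items() if n >= MIN_COUNT}

# Researchers would hand health departments only these coarse counts,
# never the underlying device-level pings.
pings = [("dev-1", "midtown", "2020-04-01"), ("dev-1", "midtown", "2020-04-01"),
         ("dev-2", "midtown", "2020-04-01")]
print(aggregate(pings))  # {} -- only two devices in the cell, so it is suppressed
```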

However, the assumption that consumers have given informed consent to the collection of their data (particularly for the purpose of monitoring their compliance with social isolation measures during a pandemic) is undermined by studies showing the average consumer does not understand all the different types of data that are collected and how their information is analyzed and shared with third parties – including governments. Technology and telecommunications companies have neither asked me to opt into tracking for public health nor made clear how they are partnering with federal, state and local governments. This practice highlights that data will be divulged in ways consumers cannot imagine – because no one assumed a pandemic when agreeing to a company’s privacy policy. This information asymmetry is part of why we need federal privacy legislation.

On Friday afternoon, Apple and Google announced their opt-in Covid-19 contact tracing technology. The owners of the two most common mobile phone operating systems in the U.S. said that in May they would release application programming interfaces that enable interoperability between iOS and Android devices using official contact tracing apps from public health authorities. At an unspecified date, Bluetooth-based contact tracing will be built directly into the operating systems. “Privacy, transparency, and consent are of utmost importance in this effort,” the companies said in their press release.  

At this early stage, we do not yet know exactly how the proposed Google/Apple contact tracing system will operate. It sounds similar to Singapore’s TraceTogether, which is already available in the iOS and Android mobile app stores (it has a 3.3 out of 5 average rating in the former and a 4.0 out of 5 in the latter). TraceTogether is also described as a voluntary, Bluetooth-based system that avoids GPS location data, does not upload information without the user’s consent, and uses changing, encrypted identifiers to maintain user anonymity. Perhaps the most striking difference, at least to a non-technical observer, is that TraceTogether was developed and is run by the Singaporean government, which has been a point of concern for some observers. The U.S. version – like finding abducted children through Amber Alerts and fighting crime via Amazon Ring – will be a partnership between the public and private sectors.     

Recommendations

The global pandemic we now face is driving data usage in ways not contemplated by consumers. Entities in the private and public sector are confronting new and complex choices about data collection, usage and sharing. Organizations with Chief Privacy Officers, Chief Information Security Officers, and other personnel tasked with managing privacy programs are, relatively speaking, well-equipped to address these issues. Despite the extraordinary circumstances, senior management should continue to rely on the expertise and sound counsel of their CPOs and CISOs, who should continue to make decisions based on their established privacy and data security programs. Although developments are unfolding at warp speed, it is important – arguably now, more than ever – to be intentional about privacy decisions.

For organizations that lack experience with privacy and data security programs (and individuals tasked with oversight for these areas), now is a great time to pause, do some research and exercise care. It is essential to think about the longer-term ramifications of choices made about data collection, use and sharing during the pandemic. The FTC offers easily accessible resources, including Protecting Personal Information: A Guide for Business, Start with Security: A Guide for Business, and Stick with Security: A Business Blog Series. While the Gramm-Leach-Bliley Act (GLB) applies only to financial institutions, the FTC’s GLB compliance blog outlines some data security best practices that apply more broadly. The National Institute for Standards and Technology (NIST) also offers security and privacy resources, including a privacy framework to help organizations identify and manage privacy risks. Private organizations such as the Center for Information Policy Leadership, the International Association of Privacy Professionals and the App Association also offer helpful resources, as do trade associations. While it may seem like a suboptimal time to take a step back and focus on these strategic issues, remember that privacy and data security missteps can cause irrevocable harm. Counterintuitively, now is actually the best time to be intentional about choices in these areas.

Best practices like accountability, risk assessment and risk management will be key to navigating today’s challenges. Companies should take the time to assess and document the new and/or expanded risks from the data collection, use and sharing of personal information. It is appropriate for these risk assessments to incorporate potential benefits and harms not only to the individual and the company, but for society as a whole. Upfront assessments can help companies establish controls and incentives to facilitate responsible behavior, as well as help organizations demonstrate that they are fully aware of the impact of their choices (risk assessment) and in control of their impact on people and programs (risk mitigation). Written assessments can also facilitate transparency with stakeholders, raise awareness internally about policy choices and assist companies with ongoing monitoring and enforcement. Moreover, these assessments will facilitate a return to “normal” data practices when the crisis has passed.  

In a similar vein, companies must engage in comprehensive vendor management with respect to the entities that are proposing to use and analyze their data. In addition to vetting proposed data recipients thoroughly, companies must be selective concerning the categories of information shared. The benefits of the proposed research must be balanced against individual protections, and companies should share only those data necessary to achieve the stated goals. To the extent feasible, data should be shared in de-identified and aggregated formats and data recipients should be subject to contractual obligations prohibiting them from re-identification. Moreover, companies must have policies in place to ensure compliance with research contracts, including data deletion obligations and prohibitions on data re-identification, where appropriate. Finally, companies must implement mechanisms to monitor third party compliance with contractual obligations.

Similar principles of necessity and proportionality should guide governments as they make demands or requests for information from the private sector. Governments must recognize the weight with which they speak during this crisis and carefully balance data collection and usage with civil liberties. In addition, governments also have special obligations to ensure that any data collection done by them or at their behest is driven by the science of Covid-19; to be transparent with citizens about the use of data; and to provide due process for those who wish to challenge limitations on their rights. Finally, government actors should apply good data hygiene, including regularly reassessing the breadth of their data collection initiatives and incorporating data retention and deletion policies. 

In theory, government’s role could be reduced as market-driven responses emerge. For example, assuming the existence of universally accessible daily coronavirus testing with accurate results even during the incubation period, Hal Singer’s proposal for self-certification of non-infection among private actors is intriguing. Thom Lambert identified the inability to know who is infected as a “lemon problem”; Singer seeks a way for strangers to verify each other’s “quality” in the form of non-infection.

Whatever solutions we may accept in a pandemic, it is imperative to monitor the coronavirus situation as it improves, to know when to lift the more dire measures. Former Food and Drug Administration Commissioner Scott Gottlieb and other observers have called for maintaining surveillance because of concerns about a resurgence of the virus later this year. For any measures that conflict with Americans’ constitutional rights to privacy and freedom of movement, there should be metrics set in advance for the conditions that will indicate when such measures are no longer justified. In the absence of pre-determined metrics, governments may feel the same temptation as Hungary’s prime minister to keep renewing a “state of danger” that overrides citizens’ rights. As Slovak lawmaker Tomas Valasek has said, “It doesn’t just take the despots and the illiberals of this world, like Orbán, to wreak damage.” But privacy is not merely instrumental to other interests, and we do not have to sacrifice our right to it indefinitely in exchange for safety.

I recognize that halting the spread of the virus will require extensive and sustained effort, and I credit many governments with good intentions in attempting to save the lives of their citizens. But I refuse to accept that we must sacrifice privacy to reopen the economy. It seems a false choice to say that I must sacrifice my Constitutional rights to privacy, freedom of association and free exercise of religion for another’s freedom of movement. Society should demand that equity, fairness and autonomy be respected in data uses, even in a pandemic. To quote Valasek again: “We need to make sure that we don’t go a single inch further than absolutely necessary in curtailing civil liberties in the name of fighting for public health.” History has taught us repeatedly that sweeping security powers granted to governments during an emergency persist long after the crisis has abated. To resist the gathering momentum toward this outcome, I will continue to emphasize the FTC’s learning on appropriate data collection and use. But my remit as an FTC Commissioner is even broader – when I was sworn in on Sept. 26, 2018, I took an oath to “support and defend the Constitution of the United States” – and so I shall.


[1] Many thanks to my Attorney Advisors Pallavi Guniganti and Nina Frant for their invaluable assistance in preparing this article.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Jane Bambauer (Professor of Law, University of Arizona James E. Rogers College of Law).]

The importance of testing and contact tracing to slow the spread of the novel coronavirus and resume normal life is now well established. The difference between the communities that do it and the ones that don’t is disturbingly grim (see, e.g., South Korea versus Italy). In a large population like the U.S., contact tracing and alerts will have to be done in an automated way with the help of mobile service providers’ geolocation data. The intensive use of data in South Korea has led many commenters to claim that the strategy that’s been so effective there cannot be replicated in western countries with strong privacy laws.

Descriptively, it’s probably true that privacy law and instincts in the U.S. and EU will hinder virus surveillance.

The European Commission’s recent guidance on GDPR’s application to the COVID-19 crisis left a hurdle for member states: EU countries would have to introduce new legislation in order to use telecommunications data to do contact tracing, and that legislation would be reviewable by the European Court of Human Rights. No member state has done this, even though nearly all of them have instituted lockdown measures.

Even Germany, which has announced the rollout of a cellphone tracking and alert app, has decided to make the use of the app voluntary. This system will only be effective if enough people opt into it. (One study suggests the minimum participation rate would have to be “near universal,” so this does not bode well.)
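To see why voluntary adoption is such a shaky foundation, consider a back-of-the-envelope calculation (this is my own simplification, not the cited study’s model): an exposure notification only works if both parties to an encounter have installed the app, so under an independence assumption coverage falls off roughly with the square of the adoption rate. A minimal sketch in Python:

```python
# A rough illustration (toy numbers, not the study's model): assume app
# adoption is independent across individuals. An encounter can only be
# traced if *both* parties have opted in, so the share of traceable
# encounters scales roughly with the square of the adoption rate.

def traceable_share(adoption_rate: float) -> float:
    """Approximate share of encounters in which both parties run the app."""
    return adoption_rate ** 2

for adoption in (0.2, 0.4, 0.6, 0.8, 0.95):
    print(f"adoption {adoption:.0%} -> ~{traceable_share(adoption):.0%} of encounters traceable")
```

Even 60 percent opt-in would leave roughly two-thirds of encounters invisible to the system under this simplification, which helps explain the “near universal” estimate.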

And in the U.S., privacy advocacy groups like EPIC are already gearing up to challenge the collection of cellphone data by federal and state governments based on recent Fourth Amendment precedent finding that individuals have a reasonable expectation of privacy in cellphone location data.

And nearly every opinion piece I read from public health experts promoting contact tracing ends with some obligatory handwringing about the privacy and ethical implications. Research universities and units of government that are comfortable advocating for draconian measures of social distancing and isolation find it necessary to stall and consult their IRBs and privacy officers before pursuing options that involve data surveillance.

While ethicists and privacy scholars certainly have something to teach regulators during a pandemic, the coronavirus has something to teach us in return. It has thrown harsh light on the drawbacks and absurdities of rigid individual control over personal data.

Objections to surveillance lose their moral and logical bearings when the alternatives are out-of-control disease or mass lockdowns. Compared to those, mass surveillance is the most liberty-preserving option. Thus, instead of reflexively trotting out privacy and ethics arguments, we should take the opportunity to understand the order of operations—to know which rights and liberties are more vital than privacy so that we know when and why expectations in privacy need to bend. All but the most privacy-sensitive would count health and the liberty to leave one’s house among the most basic human interests, so the COVID-19 lockdowns are testing some of the practices and assumptions that are baked into our privacy laws.

At the highest level of abstraction, the pandemic should remind us that privacy is, ultimately, an instrumental right. It is meant to achieve certain social goals in fairness, safety, and autonomy. It is not an end in itself.  

When privacy is cloaked in the language of fundamental human rights, its instrumental function is obscured. As with other liberties in movement and commerce, conceiving of privacy as something under each individual’s control is a useful rule of thumb when it doesn’t conflict too much with other people’s interests. But the COVID-19 crisis shows that there are circumstances under which privacy as an individual right frustrates the very values in fairness, autonomy, and physical security that it is supposed to support. Privacy authorities and experts at every level need to be as clear and blunt as the experts supporting mass lockdowns: the government can do this, it will have to rely on industry, and we will work through the fallout and secondary problems when people stop dying.

At a minimum, epidemiologists and cellphone service providers should be able to rely on implied consent to data-sharing, just as the tort system allows doctors to presume consent for emergency surgery when a patient’s wishes cannot be observed in time. Geoffrey Manne suggested this in an earlier TOTM post about the allocation of information and medical resources:

But an individual’s idiosyncratic desire to constrain the sharing of personal data in this context seems manifestly less important than the benefits of, at the very least, a default rule that the relevant data be shared for these purposes.

Indeed, we should go further than this. There is a moral imperative to ignore even an express lack of consent when withholding important information would put others in danger. Just as many states affirmatively require doctors, therapists, teachers, and other fiduciaries to report certain risks even at the expense of their clients’ and wards’ privacy (e.g., New York’s requirement that doctors notify their patients’ partners about a positive HIV test if the patient fails to do so), the same logic applies at scale to the collection and analysis of data during a pandemic.

Another reason consent is inappropriate at this time is that it mars quantitative studies with selection bias. Medical reporting on the transmission and mortality of COVID-19 has had to rely much too heavily on data coming out of the Diamond Princess cruise ship because for a long time it was the only random sample—the only time that everybody was screened. 

The United States has done a particularly poor job tracking the spread of the virus because, faced with a shortage of tests, the CDC compounded our problems by denying those tests to anybody who didn’t meet specific criteria (a set of symptoms and either recent travel or known exposure to a confirmed case). These criteria all but guaranteed that our data would suggest coughs and fevers are necessary conditions for coronavirus, and they delayed our recognition of community spread. If we are able to do antibody testing in the near future to understand who has had the virus in the past, that data would be most useful across swaths of people who have not self-selected into a testing facility.
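To make the selection-bias point concrete, here is a small, purely illustrative simulation (the parameters are invented, not real COVID-19 estimates): if tests are restricted to people who meet a symptom-based criterion, every confirmed case is symptomatic by construction, even though a large share of infections in the underlying population are not, and those infections go entirely uncounted.

```python
import random

random.seed(0)

# Purely illustrative parameters -- not real COVID-19 estimates.
N = 100_000
infection_rate = 0.02           # share of the population actually infected
symptomatic_if_infected = 0.6   # share of infected people with cough/fever

# Toy population of (infected, symptomatic) pairs. For simplicity, only
# infected people are symptomatic here; the point is about who gets tested.
population = []
for _ in range(N):
    infected = random.random() < infection_rate
    symptomatic = infected and random.random() < symptomatic_if_infected
    population.append((infected, symptomatic))

# Testing policy: only symptomatic people are tested (a stand-in for the
# early criteria described above).
confirmed = [p for p in population if p[1] and p[0]]
infected_all = [p for p in population if p[0]]

symptomatic_among_confirmed = sum(p[1] for p in confirmed) / len(confirmed)
symptomatic_among_infected = sum(p[1] for p in infected_all) / len(infected_all)
missed = 1 - len(confirmed) / len(infected_all)

print(f"symptomatic share among confirmed cases: {symptomatic_among_confirmed:.0%}")  # 100% by construction
print(f"symptomatic share among all infected:    {symptomatic_among_infected:.0%}")   # ~60%
print(f"share of infections never tested:        {missed:.0%}")                        # ~40%
```

The biased sample overstates the link between symptoms and infection and hides the asymptomatic spread that broader, non-self-selected sampling (such as antibody surveys) would reveal.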

If consent is not an appropriate concept for privacy during a pandemic, might there be a defect in its theory even outside of crisis time? I have argued in the past that privacy should be understood as a collective interest in risk management, like negligence law, rather than a property-style right. The public health response to COVID-19 helps illustrate why this is so. The right to privacy is different from other liberties because it directly conflicts with another fundamental right: namely, the right to access information and knowledge. One person’s objection to contact tracing (or any other collection and distribution of data) necessarily conflicts with another’s interest in knowing who was in that person’s proximity during a critical period.

This puts privacy on very different footing from other rights, like the right to free movement. Generally, my right to travel in public space does not have to interfere with other people’s rights. It may interfere if, for example, I drive on the wrong side of the street, but the conflict is not inevitable. With a few restrictions and rules of coordination, there is ample opportunity for people to enjoy public spaces the way they want without forcing policymakers to decide between competing uses. Thus, when we suspend the right to free movement in unusual times like today, when one person’s movement in public space does cause significant detriment to others, we can have confidence that the liberty can be restored when the threat has subsided.

Privacy, by contrast, is inevitably at odds with a demonstrable desire by another person or firm to access information that they find valuable. Perhaps this is the reason that ethicists and regulators find it difficult to overcome privacy objections: when public health experts insist that privacy is conflicting with valuable information flows, a privacy advocate can say “yes, exactly.”

We can improve on the theoretical underpinnings of privacy law by embracing the fact that privacy is instrumental—a means (sometimes an effective one) to achieve other ends. If we are trying to achieve certain goals through its use—goals in equity, fairness, and autonomy—we should increase our effort to understand what types of uses of data implicate those outcomes. Fortunately, that work is already advancing at a fast clip in debates about socially responsible AI. The next step would be to assess whether individual control tends to support the good uses and reduce the bad uses. If our policies can ensure that machine learning applications are sufficiently “fair,” and if we can agree on what fairness entails, lawmakers can begin the fruitful and necessary work of shifting privacy law away from prohibitions on data collection and sharing and toward limits on its use in the areas where individual control is counter-productive.

Since the LabMD decision, in which the Eleventh Circuit Court of Appeals told the FTC that its orders were unenforceably vague, the FTC has been put on notice that it needs to reconsider how it develops and substantiates its claims in data security enforcement actions brought under Section 5.

Thus, on January 6, the FTC announced on its blog that it now has “New and improved FTC data security orders: Better guidance for companies, better protection for consumers.” However, the changes the Commission highlights address only a small part of what we have previously criticized about its “common law” of data security (see here and here).

While the new orders do list more specific requirements to help explain what the FTC believes is a “comprehensive data security program,” there is still no legal analysis in either the orders or the complaints that would give companies fair notice of what the law requires. Furthermore, nothing about the underlying FTC process has changed, which means there is still enormous pressure for companies to settle rather than litigate the contours of what “reasonable” data security practices look like. Thus, despite the Commission’s optimism, the recent orders and complaints do little to nothing to remedy the problems that plague the Commission’s data security enforcement program.

The changes

In his blog post, the director of the FTC’s Bureau of Consumer Protection describes how the new orders in data security enforcement actions are more specific, with one of the main goals being to provide better guidance to businesses trying to follow the law.

Since the early 2000s, our data security orders had contained fairly standard language. For example, these orders typically required a company to implement a comprehensive information security program subject to a biennial outside assessment. As part of the FTC’s Hearings on Competition and Consumer Protection in the 21st Century, we held a hearing in December 2018 that specifically considered how we might improve our data security orders. We were also mindful of the 11th Circuit’s 2018 LabMD decision, which struck down an FTC data security order as unenforceably vague.

Based on this learning, in 2019 the FTC made significant improvements to its data security orders. These improvements are reflected in seven orders announced this year against an array of diverse companies: ClixSense (pay-to-click survey company), i-Dressup (online games for kids), DealerBuilt (car dealer software provider), D-Link (Internet-connected routers and cameras), Equifax (credit bureau), Retina-X (monitoring app), and Infotrax (service provider for multilevel marketers)…

[T]he orders are more specific. They continue to require that the company implement a comprehensive, process-based data security program, and they require the company to implement specific safeguards to address the problems alleged in the complaint. Examples have included yearly employee training, access controls, monitoring systems for data security incidents, patch management systems, and encryption. These requirements not only make the FTC’s expectations clearer to companies, but also improve order enforceability.

Why the FTC’s data security enforcement regime fails to provide fair notice or develop law (and is not like the common law)

While these changes are long overdue, they are just one step toward the much-needed reform of how the FTC prosecutes cases under its unfairness authority, particularly in the realm of data security. To see why the changes are insufficient, it helps to understand exactly why the historical failures of the FTC’s process are problematic.

For instance, Geoffrey Manne and I previously highlighted the various ways the FTC’s data security consent order regime fails in comparison with the common law:

In Lord Mansfield’s characterization, “the common law ‘does not consist of particular cases, but of general principles, which are illustrated and explained by those cases.’” Further, the common law is evolutionary in nature, with the outcome of each particular case depending substantially on the precedent laid down in previous cases. The common law thus emerges through the accretion of marginal glosses on general rules, dictated by new circumstances. 

The common law arguably leads to legal rules with at least two substantial benefits—efficiency and predictability or certainty. The repeated adjudication of inefficient or otherwise suboptimal rules results in a system that generally offers marginal improvements to the law. The incentives of parties bringing cases generally means “hard cases,” and thus judicial decisions that have to define both what facts and circumstances violate the law and what facts and circumstances don’t. Thus, a benefit of a “real” common law evolution is that it produces a body of law and analysis that actors can use to determine what conduct they can undertake without risk of liability and what they cannot. 

In the abstract, of course, the FTC’s data security process is neither evolutionary in nature nor does it produce such well-defined rules. Rather, it is a succession of wholly independent cases, without any precedent, narrow in scope, and binding only on the parties to each particular case. Moreover it is generally devoid of analysis of the causal link between conduct and liability and entirely devoid of analysis of which facts do not lead to liability. Like all regulation it tends to be static; the FTC is, after all, an enforcement agency, charged with enforcing the strictures of specific and little-changing pieces of legislation and regulation. For better or worse, much of the FTC’s data security adjudication adheres unerringly to the terms of the regulations it enforces with vanishingly little in the way of gloss or evolution. As such (and, we believe, for worse), the FTC’s process in data security cases tends to reject the ever-evolving “local knowledge” of individual actors and substitutes instead the inherently limited legislative and regulatory pronouncements of the past. 

By contrast, real common law, as a result of its case-by-case, bottom-up process, adapts to changing attributes of society over time, largely absent the knowledge and rent-seeking problems of legislatures or administrative agencies. The mechanism of constant litigation of inefficient rules allows the common law to retain a generally efficient character unmatched by legislation, regulation, or even administrative enforcement. 

Because the common law process depends on the issues selected for litigation and the effects of the decisions resulting from that litigation, both the process by which disputes come to the decision-makers’ attention, as well as (to a lesser extent, because errors will be corrected over time) the incentives and ability of the decision-maker to render welfare-enhancing decisions, determine the value of the common law process. These are decidedly problematic at the FTC.

In our analysis, we found the FTC’s process wanting compared to the institution of the common law. The incentives of the administrative complaint process put relatively greater pressure on companies to settle data security actions brought by the FTC than they face from private litigants. This is because the FTC can use its investigatory powers as a public enforcer to bypass the normal discovery process to which private litigants are subject, and over which independent judges have authority.

In a private court action, plaintiffs can’t engage in discovery unless their complaint survives a motion to dismiss from the defendant. Discovery costs remain a major driver of settlements, so this important judicial review is necessary to make sure there is actually a harm present before putting those costs on defendants. 

Furthermore, the FTC can bring cases in a Part III adjudicatory process, which starts in front of an administrative law judge (ALJ) but is then appealable to the FTC itself. Former Commissioner Joshua Wright noted in 2013 that “in the past nearly twenty years… after the administrative decision was appealed to the Commission, the Commission ruled in favor of FTC staff. In other words, in 100 percent of cases where the ALJ ruled in favor of the FTC, the Commission affirmed; and in 100 percent of the cases in which the ALJ ruled against the FTC, the Commission reversed.” That is, the FTC has consistently ruled in its own favor on appeal when the ALJ finds there is no case, as the ALJ did in LabMD. The combination of investigation costs incurred before any complaint is even filed and the high likelihood of losing through several stages of litigation makes simply agreeing to a consent decree the intelligent business decision.

The results of this asymmetrical process show the FTC has not really been building a common law. In all but two cases (Wyndham and LabMD), the companies targeted by the FTC for data security investigations have settled. We also noted how the FTC’s data security orders tended to be nearly identical from case to case, reflecting the standards of the FTC’s Safeguards Rule. Since the orders imposed nearly identical (and, as LabMD found, vague) remedies in each case, it cannot be said that a common law was developing over time.

What LabMD addressed and what it didn’t

In its decision, the Eleventh Circuit sidestepped the fundamental substantive problems with the FTC’s data security practice concerning notice and substantial injury (arguments we have made in both our scholarship and our LabMD amicus brief). Instead, the court decided to assume the FTC had proven its case and focused exclusively on the remedy.

We will assume arguendo that the Commission is correct and that LabMD’s negligent failure to design and maintain a reasonable data-security program invaded consumers’ right of privacy and thus constituted an unfair act or practice.

What the Eleventh Circuit did address, though, was that the remedies the FTC had been routinely applying to businesses through its data security enforcement actions lacked the specificity necessary to be enforceable through injunctions or cease and desist orders.

In the case at hand, the cease and desist order contains no prohibitions. It does not instruct LabMD to stop committing a specific act or practice. Rather, it commands LabMD to overhaul and replace its data-security program to meet an indeterminable standard of reasonableness. This command is unenforceable. Its unenforceability is made clear if we imagine what would take place if the Commission sought the order’s enforcement…

The Commission moves the district court for an order requiring LabMD to show cause why it should not be held in contempt for violating the following injunctive provision:

[T]he respondent shall … establish and implement, and thereafter maintain, a comprehensive information security program that is reasonably designed to protect the security, confidentiality, and integrity of personal information collected from or about consumers…. Such program… shall contain administrative, technical, and physical safeguards appropriate to respondent’s size and complexity, the nature and scope of respondent’s activities, and the sensitivity of the personal information collected from or about consumers….

The Commission’s motion alleges that LabMD’s program failed to implement “x” and is therefore not “reasonably designed.” The court concludes that the Commission’s alleged failure is within the provision’s language and orders LabMD to show cause why it should not be held in contempt.

At the show cause hearing, LabMD calls an expert who testifies that the data-security program LabMD implemented complies with the injunctive provision at issue. The expert testifies that “x” is not a necessary component of a reasonably designed data-security program. The Commission, in response, calls an expert who disagrees. At this point, the district court undertakes to determine which of the two equally qualified experts correctly read the injunctive provision. Nothing in the provision, however, indicates which expert is correct. The provision contains no mention of “x” and is devoid of any meaningful standard informing the court of what constitutes a “reasonably designed” data-security program. The court therefore has no choice but to conclude that the Commission has not proven — and indeed cannot prove — LabMD’s alleged violation by clear and convincing evidence.

In other words, the Eleventh Circuit found that an order requiring a reasonable data security program is not specific enough to make it enforceable. This leaves questions as to whether the FTC’s requirement of a “reasonable data security program” is specific enough to survive a motion to dismiss and/or a fair notice challenge going forward.

Under the Federal Rules of Civil Procedure, a plaintiff must provide “a short and plain statement . . . showing that the pleader is entitled to relief,” Fed. R. Civ. P. 8(a)(2), including “enough facts to state a claim . . . that is plausible on its face.” Bell Atl. Corp. v. Twombly, 550 U.S. 544, 570 (2007). “[T]hreadbare recitals of the elements of a cause of action, supported by mere conclusory statements” will not suffice. Ashcroft v. Iqbal, 556 U.S. 662, 678 (2009). In FTC v. D-Link, for instance, the Northern District of California dismissed the unfairness claims because the FTC did not sufficiently plead injury. 

[T]hey make out a mere possibility of injury at best. The FTC does not identify a single incident where a consumer’s financial, medical or other sensitive personal information has been accessed, exposed or misused in any way, or whose IP camera has been compromised by unauthorized parties, or who has suffered any harm or even simple annoyance and inconvenience from the alleged security flaws in the DLS devices. The absence of any concrete facts makes it just as possible that DLS’s devices are not likely to substantially harm consumers, and the FTC cannot rely on wholly conclusory allegations about potential injury to tilt the balance in its favor. 

The fair notice question wasn’t reached in LabMD, though it was in FTC v. Wyndham. But the Third Circuit did not analyze the FTC’s data security regime under the “ascertainable certainty” standard applied to agency interpretation of a statute.

Wyndham’s position is unmistakable: the FTC has not yet declared that cybersecurity practices can be unfair; there is no relevant FTC rule, adjudication or document that merits deference; and the FTC is asking the federal courts to interpret § 45(a) in the first instance to decide whether it prohibits the alleged conduct here. The implication of this position is similarly clear: if the federal courts are to decide whether Wyndham’s conduct was unfair in the first instance under the statute without deferring to any FTC interpretation, then this case involves ordinary judicial interpretation of a civil statute, and the ascertainable certainty standard does not apply. The relevant question is not whether Wyndham had fair notice of the FTC’s interpretation of the statute, but whether Wyndham had fair notice of what the statute itself requires.

In other words, Wyndham boxed itself into a corner by arguing that it did not have fair notice that the FTC could bring a data security enforcement action against it under Section 5 unfairness. LabMD, on the other hand, argued it did not have fair notice as to how the FTC would enforce its data security standards. Cf. ICLE-Techfreedom Amicus Brief at 19. The Third Circuit even suggested that under an “ascertainable certainty” standard, the FTC failed to provide fair notice: “we agree with Wyndham that the guidebook could not, on its own, provide ‘ascertainable certainty’ of the FTC’s interpretation of what specific cybersecurity practices fail § 45(n).” Wyndham, 799 F.3d at 256 n.21.

Most importantly, the Eleventh Circuit did not reach the issue of whether LabMD actually violated the law on the factual record developed in the case. This means there is still no caselaw (aside from the ALJ decision in this case) that would allow a company to learn what is and is not reasonable data security, or what counts as a substantial injury for purposes of Section 5 unfairness in data security cases.

How the FTC’s changes fundamentally fail to address its failures of process

The FTC’s new approach to its orders is billed as directly responsive to the portion of the LabMD decision the Eleventh Circuit did reach, but it leaves in place much of what makes the process insufficient.

First, it is notable that while the FTC highlights changes to its orders, there is still no legal analysis in the orders that would allow a company to accurately predict whether its data security practices are sufficient under the law. A listing of what specific companies under consent orders are required to do is helpful. But these consent decrees neither require companies to admit liability nor contain anything close to the reasoning that accompanies court opinions or ordinary agency guidance on complying with the law.

For instance, the general formulation in these 2019 orders is that the company must “establish, implement, and maintain a comprehensive information/software security program that is designed to protect the security, confidentiality, and integrity of such personal information. To satisfy this requirement, Respondent/Defendant must, at a minimum…” (emphasis added), followed by a list of broadly similar requirements that vary somewhat depending on the business. Even if a company implements all of the listed requirements and a breach nonetheless occurs, the FTC is not obligated to find that the data security program was legally sufficient. There is no safe harbor or presumption of reasonableness, even for the business subject to the order, let alone for companies looking to the order for guidance.

While the FTC does now require more specific measures, like “yearly employee training, access controls, monitoring systems for data security incidents, patch management systems, and encryption,” there is still no analysis of how to meet the standard of reasonableness on which the FTC relies. In other words, it is not clear that this new approach to orders does anything to increase fair notice to companies as to what the FTC requires under Section 5 unfairness.

Second, nothing about the underlying process has really changed. The FTC can still investigate and prosecute cases through administrative law courts with itself as the initial court of appeal. This makes the FTC the police, prosecutor, and judge in its own case. For LabMD, which actually won after many appeals, this process ended in bankruptcy. It is no surprise that since the LabMD decision, each of the FTC’s data security enforcement cases has been settled with a consent order, just as they were before the Eleventh Circuit’s opinion.

Unfortunately, if the FTC really wants to evolve its data security process like the common law, it needs to engage in an actual common law process. Without caselaw on the facts necessary to establish substantial injury, “unreasonable” data security practices, and causation, there will continue to be more questions than answers about what the law requires. And without changes to the process, the FTC will continue to be able to strong-arm companies into consent decrees.

Today, I filed a regulatory comment in the FTC’s COPPA Rule Review on behalf of the International Center for Law & Economics. Building on prior work, I argue the FTC’s 2013 amendments to the COPPA Rule should be repealed. 

The amendments ignored the purpose of COPPA by focusing on protecting children from online targeted advertising rather than from online predators, as the drafters had intended. The amendment of the definition of personal information to include “persistent identifiers” standing alone is inconsistent with the statute’s text. The legislative history explicitly identifies the protection of children from online predators as a purpose of COPPA, but nothing in the statute or the legislative history states that a purpose is to protect children from online targeted advertising.

The YouTube enforcement action and YouTube’s resulting compliance efforts will make the monetization of child-friendly content very difficult. Video game creators, family vloggers, toy reviewers, children’s apps, and educational technology will all be implicated by the changes to YouTube’s platform. The economic consequences are easy to predict: there will likely be less zero-priced family-friendly content available.

The 2013 amendments have uncertain benefits for children’s privacy. While some may feel there is a benefit to having less targeted advertising directed at children, there is also a cost in restricting the ability of children’s content creators to monetize their work. The FTC should not presume that parents fail to weigh these costs and benefits; many parents do weigh them and often choose to allow their kids to use YouTube and apps on devices they bought for them.

The full comments are here.