
Having earlier passed through subcommittee, the American Data Privacy and Protection Act (ADPPA) has now been cleared for floor consideration by the U.S. House Energy and Commerce Committee. Before the markup, we noted that the ADPPA mimics some of the worst flaws found in the European Union’s General Data Protection Regulation (GDPR), while creating new problems that the GDPR had avoided. Alas, the amended version of the legislation approved by the committee not only failed to correct those flaws, but in some cases actually undid welcome corrections that had been made to the original discussion draft.

Is Targeted Advertising ‘Strictly Necessary’?

The ADPPA’s original discussion draft placed “information identifying an individual’s online activities over time or across third party websites” within the broader category of “sensitive covered data,” which could be collected or processed only with a consumer’s affirmative express consent (“cookie consent”). Perhaps noticing the questionable utility of such a rule, the bill’s sponsors removed “individual’s online activities” from the definition of “sensitive covered data” in the version of the ADPPA that was ultimately introduced.

The manager’s amendment from Energy and Commerce Committee Chairman Frank Pallone (D-N.J.) reverted that change, and an “individual’s online activities” are once again deemed “sensitive covered data.” However, the marked-up version of the ADPPA doesn’t require express consent to collect sensitive covered data. In fact, it seems not to consider the possibility of user consent at all; firms will instead be required to prove that their collection of sensitive data was a “strict necessity.”

The new rule for sensitive data—in Section 102(2)—is that collecting or processing such data is allowed “where such collection or processing is strictly necessary to provide or maintain a specific product or service requested by the individual to whom the covered data pertains, or is strictly necessary to effect a purpose enumerated” in Section 101(b) (though with exceptions—notably for first-party advertising and targeted advertising).

This raises the question of whether, for example, targeted advertising based on a user’s online activities is “strictly necessary” to provide or maintain Facebook’s social network. Even if courts eventually decide that it is in some cases, we can expect a good deal of litigation on the point. That litigation risk will impose significant burdens on providers of ad-supported online services. Moreover, it effectively invites judges to make business decisions, a role for which they are profoundly ill-suited.

Given that the ADPPA includes a “right to opt-out of targeted advertising” (Section 204(c)) and a special targeted-advertising “permissible purpose” in Section 101(b)(17), it must be possible for businesses to engage in targeted advertising. And if it is possible, then collecting and processing the information needed for targeted advertising—including information on an “individual’s online activities,” e.g., unique identifiers (Section 2(39))—must be capable of being “strictly necessary to provide or maintain a specific product or service requested by the individual.” (Alternatively, it could be strictly necessary for one of the other permissible purposes in Section 101(b), but none of them appears to apply to collecting data for the purpose of targeted advertising.)

The ADPPA itself thus provides for the possibility of targeted advertising, so there should be no question that collecting an “individual’s online activities” can be “strictly necessary to provide or maintain a specific product or service requested by the individual.” The real question is institutional: do we want judges or other government officials deciding which ad-supported services “strictly” require targeted advertising? Choosing business models for private enterprises is hardly an appropriate role for the government. The easiest way out of this conundrum would be simply to undo the ill-considered extension of “sensitive covered data” and return to the definition used in the version of the ADPPA that was initially introduced.

Developing New Products and Services

As noted previously, the original ADPPA discussion draft allowed first-party use of personal data to “provide or maintain a specific product or service requested by an individual” (Section 101(a)(1)). What about using the data to develop new products and services? Can a business even request user consent for that? Under the GDPR, that is possible. Under the ADPPA, it may not be.

The general limitation on data use (“provide or maintain a specific product or service requested by an individual”) was retained from the original ADPPA discussion draft in the version approved by the committee. As originally introduced, the bill included an exception that could have partially addressed the concern, in Section 101(b)(2) (emphasis added):

With respect to covered data previously collected in accordance with this Act, notwithstanding this exception, to process such data as necessary to perform system maintenance or diagnostics, to maintain a product or service for which such data was collected, to conduct internal research or analytics, to improve a product or service for which such data was collected …

Arguably, developing new products and services largely involves “internal research or analytics,” which would be covered under this exception. If the business later wanted to invite users of an old service to use a new service, the business could contact them based on a separate exception for first-party marketing and advertising (Section 101(b)(11) of the introduced bill).

This welcome development was reversed in the manager’s amendment. The new text of the exception (now Section 101(b)(2)(C)) is narrower in a key way (emphasis added): “to conduct internal research or analytics to improve a product or service for which such data was collected.” Hence, it still looks like businesses will find it difficult to use first-party data to develop new products or services.

‘De-Identified Data’ Remains Unclear

Our earlier analysis noted significant confusion in the ADPPA’s concept of “de-identified data.” Neither the introduced version nor the markup amendments addressed those concerns, so it seems worthwhile to repeat and update the criticism here. The drafters seemed to be aiming for a partial exemption from the default data-protection regime for datasets that no longer contain personally identifying information, but that are derived from datasets that once did. Instead of providing such an exemption, however, the rules for de-identified data essentially extend the ADPPA’s scope to nonpersonal data, while also creating a whole new set of problems.

The basic problem is that the definition of “de-identified data” in the ADPPA is not limited to data derived from identifiable data. In the marked-up version, the definition covers: “information that does not identify and is not linked or reasonably linkable to a distinct individual or a device, regardless of whether the information is aggregated.” In other words, it is simply the complement of “covered data” (personal data): whatever is not “covered data” is “de-identified data.” Even data that are not personally identifiable and were never derived from personally identifiable data still count as “de-identified data.” If this reading is correct, it produces an absurd result that sweeps all information into the scope of the ADPPA.
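To make the sweep of that definition concrete, consider a toy classifier built directly on the quoted text (a minimal sketch with hypothetical records and field names, not anything drawn from the bill itself):

```python
def is_covered(record) -> bool:
    # "Covered data": identifies, or is reasonably linkable to, an individual.
    return record.get("linkable_to_individual", False)

def is_de_identified(record) -> bool:
    # Under the marked-up definition, "de-identified data" is simply whatever
    # is NOT covered data -- there is no requirement that it was ever derived
    # from identifiable data.
    return not is_covered(record)

records = [
    {"desc": "browsing history tied to a user ID", "linkable_to_individual": True},
    {"desc": "hourly rainfall readings", "linkable_to_individual": False},
]

for r in records:
    label = "covered data" if is_covered(r) else "de-identified data (duties attach)"
    print(f"{r['desc']}: {label}")
```

On this reading, even rainfall readings would be “de-identified data,” triggering the duties discussed below.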

For the sake of argument, let’s assume that this confusion can be fixed and that the definition of “de-identified data” is limited to data that:

  1. are derived from identifiable data;
  2. retain some possibility of re-identification (weaker than “reasonably linkable”); and
  3. are processed by the entity that previously processed the original identifiable data.

Remember that we are talking about data that are not “reasonably linkable to an individual.” Hence, the intent appears to be that the rules on de-identified data would apply to nonpersonal data that would otherwise not be covered by the ADPPA.

The rationale for this may be that it is difficult, legally and practically, to differentiate between personally identifiable data and data that are not personally identifiable. A good deal of seemingly “anonymous” data may be linked to an individual—e.g., by connecting the dataset at hand with some other dataset.

The case for regulation in an example where a firm clearly dealt with personal data, and then derived some apparently de-identified data from them, may actually be stronger than in the case of a dataset that was never directly derived from personal data. But is that case sufficient to justify the ADPPA’s proposed rules?

The ADPPA imposes several duties on entities dealing with “de-identified data” in Section 2(12) of the marked-up version:

  1. To take “reasonable technical measures to ensure that the information cannot, at any point, be used to re-identify any individual or device that identifies or is linked or reasonably linkable to an individual”;
  2. To publicly commit “in a clear and conspicuous manner—
    1. to process and transfer the information solely in a de-identified form without any reasonable means for re-identification; and
    2. to not attempt to re-identify the information with any individual or device that identifies or is linked or reasonably linkable to an individual;”
  3. To “contractually obligate[] any person or entity that receives the information from the covered entity or service provider” to comply with all of the same rules and to include such an obligation “in all subsequent instances for which the data may be received.”

The first duty is superfluous and adds interpretative confusion, given that de-identified data, by definition, are not “reasonably linkable” with individuals.

The second duty, the public commitment, unreasonably restricts what can be done with nonpersonal data. Firms may have many legitimate reasons to de-identify data and then to re-identify them later. This provision would effectively prohibit firms from attempting data minimization (resulting in de-identification) whenever they might, at any point in the future, need to link the data back to individuals. It seems that the drafters had some very specific (and likely rare) mischief in mind here, but ended up prohibiting a vast sphere of innocuous activity.

Note that, for data to become “de-identified data,” they must first be collected and processed as “covered data” in conformity with the ADPPA and then transformed (de-identified) in such a way as to no longer meet the definition of “covered data.” If someone then re-identifies the data, this will again constitute “collection” of “covered data” under the ADPPA. At every point of the process, personally identifiable data is covered by the ADPPA rules on “covered data.”

Finally, the third duty—“share alike” (to “contractually obligate[] any person or entity that receives the information from the covered entity to comply”)—faces much the same problem as the second duty. Under this provision, the only way to preserve a third party’s option to identify the individuals linked to the data is for that third party to receive the data in personally identifiable form. In other words, the provision makes it impossible to share data in a de-identified form while preserving the possibility of re-identification.

Logically, one would have expected the ADPPA to permit sharing data in de-identified form while preserving the possibility of re-identification; that would align with the principle of data minimization. Instead, the ADPPA effectively requires that data be shared together with identifying information whenever re-identification must remain possible. This is a truly bizarre result, directly contrary to the principle of data minimization.

Fundamental Issues with Enforcement

One of the most important problems with the ADPPA is its enforcement provisions. Most notably, the private right of action creates pernicious incentives for excessive litigation by providing for both compensatory damages and open-ended injunctive relief. Small businesses have a right to cure before damages can be sought, but larger firms are given no similar entitlement. Given open-ended questions such as whether using web-browsing behavior is “strictly necessary” to improve a product or service, the litigation incentives are obvious. At the very least, there should be a general opportunity to cure, particularly given the broad restrictions placed on essentially all data use.

The bill also creates multiple overlapping power centers for enforcement (as we have previously noted):

The bill carves out numerous categories of state law that would be excluded from pre-emption… as well as several specific state laws that would be explicitly excluded, including Illinois’ Genetic Information Privacy Act and elements of the California Consumer Privacy Act. These broad carve-outs practically ensure that ADPPA will not create a uniform and workable system, and could potentially render the entire pre-emption section a dead letter. As written, it offers the worst of both worlds: a very strict federal baseline that also permits states to experiment with additional data-privacy laws.

Unfortunately, the marked-up version appears to double down on these problems. For example, the bill pre-empts the Federal Communications Commission (FCC) from enforcing sections 222, 338(i), and 631 of the Communications Act, which pertain to privacy and data security. An amendment was offered that would have pre-empted the FCC from enforcing any provisions of the Communications Act (e.g., sections 201 and 202) for data-security and privacy purposes, but it was withdrawn. Keeping two federal regulators on the beat for a single subject area creates an inefficient regime. The FCC should be completely pre-empted from regulating privacy issues for covered entities.

The amended bill also includes an ambiguous provision that appears to serve as a partial carveout for enforcement by the California Privacy Protection Agency (CPPA). Some members of the California delegation—notably, committee members Anna Eshoo and Doris Matsui (both D-Calif.)—have expressed concern that the bill would pre-empt California’s own California Privacy Rights Act. A proposed amendment by Eshoo to clarify that the bill was merely a federal “floor” and that state laws may go beyond ADPPA’s requirements failed in a 48-8 roll-call vote. However, the marked-up version of the legislation does explicitly specify that the CPPA “may enforce this Act, in the same manner, it would otherwise enforce the California Consumer Privacy Act.” How courts might interpret this language, should the CPPA seek to enforce provisions of the California Consumer Privacy Act (CCPA) that otherwise conflict with the ADPPA, is unclear, thus magnifying the problem of compliance with multiple regulators.

Conclusion

As originally conceived, the basic conceptual structure of the ADPPA was, to a very significant extent, both confused and confusing. Not much, if anything, has improved since—especially in the marked-up version, which regressed to some of the notably bad features of the original discussion draft. The rules on de-identified data remain very puzzling: their effect contradicts the basic principle of data minimization that the ADPPA purports to uphold. These examples strongly suggest that the ADPPA is still far from being a properly considered candidate for comprehensive federal privacy legislation.

The Federal Trade Commission (FTC) is at it again, threatening new sorts of regulatory interventions in the legitimate welfare-enhancing activities of businesses—this time in the realm of data collection by firms.

Discussion

In an April 11 speech at the International Association of Privacy Professionals’ Global Privacy Summit, FTC Chair Lina Khan set forth a litany of harms associated with companies’ data-acquisition practices. Certainly, fraud and deception with respect to the use of personal data have the potential to cause serious harm to consumers and are the legitimate target of FTC enforcement activity. At the same time, the FTC should take into account the substantial benefits that private-sector data collection may bestow on the public (see, for example, here, here, and here) in order to formulate economically beneficial law-enforcement protocols.

Chair Khan’s speech, however, paid virtually no attention to the beneficial side of data collection. To the contrary, after highlighting specific harmful data practices, Khan then waxed philosophical in condemning private data-collection activities (citations omitted):

Beyond these specific harms, the data practices of today’s surveillance economy can create and exacerbate deep asymmetries of information—exacerbating, in turn, imbalances of power. As numerous scholars have noted, businesses’ access to and control over such vast troves of granular data on individuals can give those firms enormous power to predict, influence, and control human behavior. In other words, what’s at stake with these business practices is not just one’s subjective preference for privacy, but—over the long term—one’s freedom, dignity, and equal participation in our economy and society.

Even if one accepts that private-sector data practices have such transcendent social implications, are the FTC’s philosopher kings ideally equipped to devise optimal policies that promote “freedom, dignity, and equal participation in our economy and society”? Color me skeptical. (Indeed, one could argue that the true transcendent threat to society from fast-growing data collection comes not from businesses but, rather, from the government, which, unlike private businesses, holds a legal monopoly on the right to use or authorize the use of force. That question is, however, beyond the scope of my comments.)

Chair Khan turned from these highfalutin musings to a more prosaic and practical description of her plans for “adapting the commission’s existing authority to address and rectify unlawful data practices.” She stressed “focusing on firms whose business practices cause widespread harm”; “assessing data practices through both a consumer protection and competition lens”; and “designing effective remedies that are informed by the business strategies that specific markets favor and reward.” These suggestions are not inherently problematic, but they need to be fleshed out in far greater detail. For example, there are potentially major consumer-protection risks posed by applying antitrust to “big data” problems (see here, here and here, for example).

Khan ended her presentation by inviting us “to consider how we might need to update our [FTC] approach further yet.” Her suggested “updates” raise significant problems.

First, she stated that the FTC “is considering initiating a rulemaking to address commercial surveillance and lax data security practices.” Even assuming such a rulemaking could withstand legal scrutiny (its best shot would be to frame it as a consumer protection rule, not a competition rule), it would pose additional serious concerns. One-size-fits-all rules prevent consideration of possible economic efficiencies associated with specific data-security and surveillance practices. Thus, some beneficial practices would be wrongly condemned. Such rules would also likely deter firms from experimenting and innovating in ways that could have led to improved practices. In both cases, consumer welfare would suffer.

Second, Khan asserted “the need to reassess the frameworks we presently use to assess unlawful conduct. Specifically, I am concerned that present market realities may render the ‘notice and consent’ paradigm outdated and insufficient.” Accordingly, she recommended that “we should approach data privacy and security protections by considering substantive limits rather than just procedural protections, which tend to create process requirements while sidestepping more fundamental questions about whether certain types of data collection should be permitted in the first place.”  

In support of this startling observation, Khan approvingly cites Daniel Solove’s article “The Myth of the Privacy Paradox,” which claims that “[t]he fact that people trade their privacy for products or services does not mean that these transactions are desirable in their current form. … [T]he mere fact that people make a tradeoff doesn’t mean that the tradeoff is fair, legitimate, or justifiable.”

Khan provides no economic justification for a data-collection ban. The implication that the FTC would consider banning certain types of otherwise legal data collection is at odds with free-market principles and would have disastrous economic consequences for both consumers and producers. It strikes at voluntary exchange, a basic principle of market economics that benefits transactors and enables markets to thrive.

Businesses monetize information provided by consumers to offer a host of goods and services that satisfy consumer interests. This is particularly true in the case of digital platforms. Preventing the voluntary transfer of data from consumers to producers based on arbitrary government concerns about “fairness” (for example) would strike at firms’ ability to monetize data and thereby generate additional consumer and producer surplus. The arbitrary destruction of such potential economic value by government fiat would be the essence of “unfairness.”

In particular, the consumer welfare benefits generated by digital platforms, which depend critically on large volumes of data, are enormous. As Erik Brynjolfsson of the Massachusetts Institute of Technology and his student Avinash Collis explained in a December 2019 article in the Harvard Business Review, such benefits far exceed those measured by conventional GDP. Online choice experiments based on digital-survey techniques enabled the authors “to estimate the consumer surplus for a great variety of goods, including free ones that are missing from GDP statistics.” Brynjolfsson and Collis found, for example, that U.S. consumers derived $231 billion in value from Facebook since its inception in 2004. Furthermore:

[O]ur estimates indicate that the [Facebook] platform generates a median consumer surplus of about $500 per person annually in the United States, and at least that much for users in Europe. In contrast, average revenue per user is only around $140 per year in the United States and $44 per year in Europe. In other words, Facebook operates one of the most advanced advertising platforms, yet its ad revenues represent only a fraction of the total consumer surplus it generates. This reinforces research by NYU Stern School’s Michael Spence and Stanford’s Bruce Owen that shows that advertising revenues and consumer surplus are not always correlated: People can get a lot of value from content that doesn’t generate much advertising, such as Wikipedia or email. So it is a mistake to use advertising revenues as a substitute for consumer surplus…

In a similar vein, the authors found that various user-fee-based digital services yield consumer surplus five to ten times what users paid to access them. What’s more:

The effect of consumer surplus is even stronger when you look at categories of digital goods. We conducted studies to measure it for the most popular categories in the United States and found that search is the most valued category (with a median valuation of more than $17,000 a year), followed by email and maps. These categories do not have comparable off-line substitutes, and many people consider them essential for work and everyday life. When we asked participants how much they would need to be compensated to give up an entire category of digital goods, we found that the amount was higher than the sum of the value of individual applications in it. That makes sense, since goods within a category are often substitutes for one another.

In sum, the authors found:

To put the economic contributions of digital goods in perspective, we find that including the consumer surplus value of just one digital good—Facebook—in GDP would have added an average of 0.11 percentage points a year to U.S. GDP growth from 2004 through 2017. During this period, GDP rose by an average of 1.83% a year. Clearly, GDP has been substantially underestimated over that time.
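As a back-of-envelope check, the quoted figures can be combined directly. The sketch below simply recomputes ratios from the numbers Brynjolfsson and Collis report; it uses no independent data:

```python
# Back-of-envelope arithmetic using the Brynjolfsson & Collis figures quoted
# above; purely illustrative, not an independent estimate.

us_median_surplus_per_user = 500  # annual consumer surplus per U.S. user ($)
us_revenue_per_user = 140         # annual average revenue per U.S. user ($)

# Consumer surplus exceeds ad revenue several times over, which is why ad
# revenue is a poor proxy for the value users actually receive.
surplus_to_revenue = us_median_surplus_per_user / us_revenue_per_user
print(f"Surplus is roughly {surplus_to_revenue:.1f}x revenue per U.S. user")

# Share of measured GDP growth represented by the omitted Facebook surplus:
facebook_growth_contribution = 0.11  # percentage points per year, 2004-2017
avg_gdp_growth = 1.83                # percent per year over the same period
share_of_growth = facebook_growth_contribution / avg_gdp_growth
print(f"Omitted surplus is about {share_of_growth:.0%} of annual GDP growth")
```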

Although far from definitive, this research illustrates how a digital-services model, based on voluntary data transfer and accumulation, has brought about enormous economic welfare benefits. Accordingly, FTC efforts to tamper with such a success story on abstruse philosophical grounds not only would be unwarranted, but would be economically disastrous. 

Conclusion

The FTC clearly plans to focus on “abuses” in private-sector data collection and usage. In so doing, it should home in on those practices that impose clear harm on consumers, particularly in the areas of deception and fraud. It is not, however, the FTC’s role to restructure data-collection activities by regulatory fiat, through far-reaching inflexible rules and, worst of all, through efforts to ban collection of “inappropriate” information.

Such extreme actions would predictably impose substantial harm on consumers and producers. They would also slow innovation in platform practices and retard efficient welfare-generating business initiatives tied to the availability of broad collections of data. Eventually, the courts would likely strike down most harmful FTC data-related enforcement and regulatory initiatives, but substantial welfare losses (including harm due to a chilling effect on efficient business conduct) would be borne by firms and consumers in the interim. In short, the enforcement “updates” Khan recommends would reduce economic welfare—the opposite of what (one assumes) is intended.

For these reasons, the FTC should reject the chair’s overly expansive “updates.” It should instead make use of technologists, economists, and empirical research to unearth and combat economically harmful data practices. In doing so, the commission should pay attention to cost-benefit analysis and error-cost minimization. One can only hope that Khan’s fellow commissioners promptly endorse this eminently reasonable approach.   

We can expect a decision very soon from the High Court of Ireland on last summer’s Irish Data Protection Commission (“IDPC”) decision, which placed serious impediments to the transfer of data across the Atlantic. That decision, coupled with the July 2020 decision of the Court of Justice of the European Union (“CJEU”) invalidating the Privacy Shield agreement between the European Union and the United States, has placed the future of transatlantic trade in jeopardy.

In 2015, the CJEU’s Schrems decision invalidated the longstanding “safe harbor” agreement that had allowed data transfers between the EU and the U.S. to comply with EU privacy requirements. The CJEU later invalidated the Privacy Shield agreement that was created in response to Schrems. In that decision, the court reasoned that U.S. foreign-intelligence laws like FISA Section 702 and Executive Order 12333—which give the U.S. government broad latitude to surveil data and offer foreign persons few rights to challenge such surveillance—rendered U.S. firms unable to guarantee the privacy protections of EU citizens’ data.

The IDPC’s decision employed the same logic: if U.S. surveillance laws give the government unreviewable power to spy on foreign citizens’ data, then standard contractual clauses—an alternative mechanism firms use to transfer data—are incapable of satisfying the requirements of EU law.

The implications that flow from this are troubling, to say the least. In the worst case, laws like the CLOUD Act could leave a wide swath of U.S. firms practically incapable of doing business in the EU. In the slightly less bad case, firms could be forced to localize their data completely, disrupting the economies of scale that flow from being able to process global data in a unified manner. In either case, the costs of compliance will be massive.

But even if the Irish court upholds the IDPC’s decision, there could still be a path forward for the U.S. and EU to preserve transatlantic digital trade. EU Commissioner for Justice Didier Reynders and U.S. Commerce Secretary Gina Raimondo recently issued a joint statement asserting they are “intensifying” negotiations to develop an enhanced successor to the EU-US Privacy Shield agreement. One can hope the talks are both fast and intense.

It seems unlikely that the Irish High Court will simply overturn the IDPC’s ruling. Instead, the IDPC’s decision will likely be upheld, possibly with recommended modifications. But even in that case, there is a process that buys the U.S. and EU a bit more time before any transatlantic trade involving consumer data grinds to a halt.

After considering replies to its draft decision, the IDPC would issue final recommendations on the extent of the data-transfer suspensions it deems necessary. It would then need to harmonize its recommendations with the other EU data-protection authorities. Theoretically, that could occur in a matter of days; practically speaking, it would more likely take weeks or months. Assuming we get a decision from the Irish High Court before the end of April, that puts the likely deadline for suspension of transatlantic data transfers somewhere between June and September.

That’s not great, but it is not an impossible hurdle to overcome and there are temporary fixes the Biden administration could put in place. Two major concerns need to be addressed.

  1. U.S. data collection on EU citizens needs to be proportionate to the necessities of intelligence gathering. Currently, U.S. intelligence agencies have wide latitude to collect vast amounts of data.
  2. The ombudsperson that the Privacy Shield agreement created to administer foreign citizens’ data requests was not sufficiently insulated from the political process, leaving EU citizens without adequate redress.

As Alex Joel recently noted, the Biden administration has ample powers to effect many of these changes through executive action. After all, EO 12333 was itself a creation of the executive branch. Other changes necessary to shape foreign surveillance to be in accord with EU requirements could likewise arise from the executive branch.

Nonetheless, Congress should not take that as a cue for complacency. It is possible that even if the Biden administration acts, the CJEU could find some or all of the measures insufficient. As the Biden team works to put changes in place through executive order, Congress should pursue surveillance reform through legislation.

Theoretically, the above fixes should be possible; there is not much partisan rancor about transatlantic trade as a general matter. But time is short, and this should be a top priority for policymakers.

(Note: edited to clarify that the Irish High Court is not reviewing SCCs directly, and that the CLOUD Act would impose practical, rather than legal, barriers for firms.)

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Christine S. Wilson (Commissioner of the U.S. Federal Trade Commission).[1] The views expressed here are the author’s and do not necessarily reflect those of the Federal Trade Commission or any other Commissioner.]  

I type these words while subject to a stay-at-home order issued by West Virginia Governor James C. Justice II. “To preserve public health and safety, and to ensure the healthcare system in West Virginia is capable of serving all citizens in need,” I am permitted to leave my home only for a limited and precisely enumerated set of reasons. Billions of citizens around the globe are now operating under similar shelter-in-place directives as governments grapple with how to stem the tide of infection, illness and death inflicted by the global Covid-19 pandemic. Indeed, the first response of many governments has been to impose severe limitations on physical movement to contain the spread of the novel coronavirus. The second response contemplated by many, and the one on which this blog post focuses, involves the extensive collection and analysis of data in connection with people’s movements and health. Some governments are using that data to conduct sophisticated contact tracing, while others are using the power of the state to enforce orders for quarantines and against gatherings.

The desire to use modern technology on a broad scale for the sake of public safety is not unique to this moment. Technology is intended to improve the quality of our lives, in part by enabling us to help ourselves and one another. For example, cell towers broadcast wireless emergency alerts to all mobile devices in the area to warn us of extreme weather and other threats to safety in our vicinity. One well-known type of broadcast is the Amber Alert, which enables community members to assist in recovering an abducted child by providing descriptions of the abductor, the abductee and the abductor’s vehicle. Citizens who spot individuals and vehicles that meet these descriptions can then provide leads to law enforcement authorities. A private nonprofit organization, the National Center for Missing and Exploited Children, coordinates with state and local public safety officials to send out Amber Alerts through privately owned wireless carriers.

The robust civil society and free market in the U.S. make partnerships between the private sector and government agencies commonplace. But some of these arrangements involve a much more extensive sharing of Americans’ personal information with law enforcement than the emergency alert system does.

For example, Amazon’s home security product Ring advertises itself not only as a way to see when a package has been left at your door, but also as a way to make communities safer by turning over video footage to local police departments. In 2018, the company’s pilot program in Newark, New Jersey, donated more than 500 devices to homeowners to install at their homes in two neighborhoods, with a big caveat. Ring recipients were encouraged to share video with police. According to Ring, home burglaries in those neighborhoods fell by more than 50% from April through July 2018 relative to the same time period a year earlier.

Yet members of Congress and privacy experts have raised concerns about these partnerships, which now number in the hundreds. After receiving Amazon’s response to his inquiry, Senator Edward Markey highlighted Ring’s failure to prevent police from sharing video footage with third parties and from keeping the video permanently, and Ring’s lack of precautions to ensure that users collect footage only of adults and of users’ own property. The House of Representatives Subcommittee on Economic and Consumer Policy continues to investigate Ring’s police partnerships and data policies. The Electronic Frontier Foundation has called Ring “a perfect storm of privacy threats,” while the UK surveillance camera commissioner has warned against “a very real power to understand, to surveil you in a way you’ve never been surveilled before.”

Ring demonstrates clearly that it is not new for potential breaches of privacy to be encouraged in the name of public safety; police departments urge citizens to use Ring and share the videos with police to fight crime. But emerging developments indicate that, in the fight against Covid-19, we can expect to see more and more private companies placed in the difficult position of becoming complicit in government overreach.

At least mobile phone users can opt out of receiving Amber Alerts, and residents can refuse to put Ring surveillance systems on their property. The Covid-19 pandemic has made some other technological intrusions effectively impossible to refuse. For example, online proctors who monitor students over webcams to ensure they do not cheat on exams taken at home were once something that students could choose to accept if they did not want to take an exam where and when they could be proctored face to face. With public schools and universities across the U.S. closed for the rest of the semester, students who refuse to give private online proctors access to their webcams – and, consequently, the ability to view their surroundings – cannot take exams at all.

Existing technology and data practices already have made the Federal Trade Commission sensitive to potential consumer privacy and data security abuses. For decades, this independent, bipartisan agency has been enforcing companies’ privacy policies through its authority to police unfair and deceptive trade practices. It brought its first privacy and data security cases nearly 20 years ago, while I was Chief of Staff to then-Chairman Timothy J. Muris. The FTC took on Eli Lilly for disclosing the e-mail addresses of 669 subscribers to its Prozac reminder service – many of whom were government officials, and at a time of greater stigma for mental health issues – and Microsoft for (among other things) falsely claiming that its Passport website sign-in service did not collect any personally identifiable information other than that described in its privacy policy.

The privacy and data security practices of healthcare and software companies are likely to impact billions of people during the current coronavirus pandemic. The U.S. already has many laws on the books that are relevant to practices in these areas. One notable example is the Health Insurance Portability and Accountability Act, which set national standards for the protection of individually identifiable health information by health plans, health care clearinghouses and health care providers who accept non-cash payments. While the FTC does not enforce HIPAA, it does enforce the Health Breach Notification Rule, as well as the provisions in the FTC Act used to challenge the privacy missteps of Eli Lilly and many other companies.

But technological developments have created gaps in HIPAA enforcement. For example, HIPAA applies to doctors’ offices, hospitals and insurance companies, but it may not apply to wearables, smartphone apps or websites. Yet sensitive medical information is now commonly stored in places other than health care practitioners’ offices.  Your phone and watch now collect information about your blood sugar, exercise habits, fertility and heart health. 

Observers have pointed to these emerging gaps in coverage as evidence of the growing need for federal privacy legislation. I, too, have called on the U.S. Congress to enact comprehensive federal privacy legislation – not only to address these emerging gaps, but for two other reasons.  First, consumers need clarity regarding the types of data collected from them, and how those data are used and shared. I believe consumers can make informed decisions about which goods and services to patronize when they have the information they need to evaluate the costs and benefits of using those goods. Second, businesses need predictability and certainty regarding the rules of the road, given the emerging patchwork of regimes both at home and abroad.

Rules of the road regarding privacy practices will prove particularly instructive during this global pandemic, as governments lean on the private sector for data on the grounds that the collection and analysis of data can help avert (or at least diminish to some extent) a public health catastrophe. With legal lines in place, companies would be better equipped to determine when they are being asked to cross the line for the public good, and whether they should require a subpoena or inform customers before turning over data. It is regrettable that Congress has been unable to enact federal privacy legislation to guide this discussion.

Understandably, Congress does not have privacy at the top of its agenda at the moment, as the U.S. faces a public health crisis. As I write, more than 579,000 Americans have been diagnosed with Covid-19, and more than 22,000 have perished. Sadly, those numbers will only increase. And the U.S. is not alone in confronting this crisis: governments globally have confronted more than 1.77 million cases and more than 111,000 deaths. For a short time, health and safety issues may take precedence over privacy protections. But some of the initiatives to combat the coronavirus pandemic are worrisome. We are learning more every day about how governments are responding in a rapidly developing situation; what I describe in the next section constitutes merely the tip of the iceberg. These initiatives are worth highlighting here, as are potential safeguards for privacy and civil liberties that societies around the world would be wise to embrace.

Some observers view public/private partnerships based on an extensive use of technology and data as key to fighting the spread of Covid-19. For example, Professor Jane Bambauer calls for contact tracing and alerts “to be done in an automated way with the help of mobile service providers’ geolocation data.” She argues that privacy is merely “an instrumental right” that “is meant to achieve certain social goals in fairness, safety and autonomy. It is not an end in itself.” Given the “more vital” interests in health and the liberty to leave one’s house, Bambauer sees “a moral imperative” for the private sector “to ignore even express lack of consent” by an individual to the sharing of information about him.

This proposition troubles me because the extensive data sharing that has been proposed in some countries, and that is already occurring in many others, is not mundane. In the name of advertising and product improvements, private companies have been hoovering up personal data for years. What this pandemic lays bare, though, is that while this trove of information was collected under the guise of cataloguing your coffee preferences and transportation habits, it can be reprocessed in an instant to restrict your movements, impinge on your freedom of association, and silence your freedom of speech. Bambauer is calling for detailed information about an individual’s every movement to be shared with the government when, in the United States under normal circumstances, a warrant would be required to access this information.

Indeed, with our mobile devices acting as the “invisible policeman” described by Justice William O. Douglas in Berger v. New York, we may face “a bald invasion of privacy, far worse than the general warrants prohibited by the Fourth Amendment.” Backward-looking searches and data hoards pose new questions of what constitutes a “reasonable” search. The stakes are high – both here and abroad, citizens are being asked to allow warrantless searches by the government on an astronomical scale, all in the name of public health.  

Abroad

The first country to confront the coronavirus was China. The World Health Organization has touted the measures taken by China as “the only measures that are currently proven to interrupt or minimize transmission chains in humans.” Among these measures are the “rigorous tracking and quarantine of close contacts,” as well as “the use of big data and artificial intelligence (AI) to strengthen contact tracing and the management of priority populations.” An ambassador for China has said his government “optimized the protocol of case discovery and management in multiple ways like backtracking the cell phone positioning.” Much as the Communist Party’s control over China enabled it to suppress early reports of a novel coronavirus, this regime vigorously ensured its people’s compliance with the “stark” containment measures described by the World Health Organization.

Before the Covid-19 pandemic, Hong Kong already had been testing the use of “smart wristbands” to track the movements of prisoners. The Special Administrative Region now monitors people quarantined inside their homes by requiring them to wear wristbands that send information to the quarantined individuals’ smartphones and alert the Department of Health and Police if people leave their homes, break their wristbands or disconnect them from their smartphones. When first announced in early February, the wristbands were required only for people who had been to Wuhan in the past 14 days, but the program rapidly expanded to encompass every person entering Hong Kong. The government denied any privacy concerns about the electronic wristbands, saying the Privacy Commissioner for Personal Data had been consulted about the technology and agreed it could be used to ensure that quarantined individuals remain at home.

Elsewhere in Asia, Taiwan’s Chunghwa Telecom has developed a system that the local CDC calls an “electronic fence.” Specifically, the government obtains the SIM-card identifiers for the mobile devices of quarantined individuals and passes those identifiers to mobile network operators, which use the phones’ signals to nearby cell towers to alert public health and law enforcement agencies when the phone of a quarantined individual leaves a certain geographic range. In response to privacy concerns, the National Communications Commission said the system was authorized by special laws to prevent the coronavirus, and that it “does not violate personal data or privacy protection.” In Singapore, travelers and others issued Stay-Home Notices to remain in their residences 24 hours a day for 14 days must respond within an hour if contacted by government agencies by phone, text message or WhatsApp. And to assist with contact tracing, the government has encouraged everyone in the country to download TraceTogether, an app that uses Bluetooth to identify other nearby phones with the app and tracks when phones are in close proximity.
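At bottom, an “electronic fence” of this kind is a recurring proximity check against a registered quarantine location. Here is a minimal sketch of that logic, with hypothetical names and coordinates; the actual Taiwanese system infers position from operators’ cell-tower signal data rather than GPS:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometers.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth's mean radius is ~6371 km

def outside_fence(current, home, radius_km=0.5):
    # True if the device appears to have left its permitted range;
    # a real system would notify health and police agencies at this point.
    return haversine_km(*current, *home) > radius_km

# Hypothetical example: a phone observed several kilometers from its
# registered quarantine address.
print(outside_fence(current=(25.0520, 121.5200), home=(25.0330, 121.5654)))
```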

Israel’s Ministry of Health has launched an app for mobile devices called HaMagen (the shield) to prevent the spread of coronavirus by identifying contacts between diagnosed patients and people who came into contact with them in the 14 days prior to diagnosis. In March, the prime minister’s cabinet initially bypassed the legislative body to approve emergency regulations for obtaining without a warrant the cellphone location data and additional personal information of those diagnosed with or suspected of coronavirus infection. The government will send text messages to people who came into contact with potentially infected individuals, and will monitor the potentially infected person’s compliance with quarantine. The Ministry of Health will not hold this information; instead, it can make data requests to the police and Shin Bet, the Israel Security Agency. The police will enforce quarantine measures and Shin Bet will track down those who came into contact with the potentially infected.

Multiple Eastern European nations with constitutional protections for citizens’ rights of movement and privacy have suspended those protections by declaring states of emergency. For example, in Hungary the declaration of a “state of danger” has enabled Prime Minister Viktor Orbán’s government to engage in “extraordinary emergency measures” without parliamentary consent. His ministers have cited the possibility that coronavirus will prevent a sufficient quorum of members of Parliament from gathering as making it necessary for the government to act in the absence of legislative approval.

Member States of the European Union must protect personal data pursuant to the General Data Protection Regulation, and communications data, such as mobile location, pursuant to the ePrivacy Directive. The chair of the European Data Protection Board has observed that the ePrivacy Directive enables Member States to introduce legislative measures to safeguard public security. But if those measures allow for the processing of non-anonymized location data from mobile devices, individuals must have safeguards such as a right to a judicial remedy. “Invasive measures, such as the ‘tracking’ of individuals (i.e. processing of historical non-anonymized location data) could be considered proportional under exceptional circumstances and depending on the concrete modalities of the processing.” The EDPB has announced it will prioritize guidance on these issues.

EU Member States are already implementing such public security measures. For example, the government of Poland has by statute required everyone under a quarantine order due to suspected infection to download the “Home Quarantine” smartphone app. Those who do not install and use the app are subject to a fine. The app verifies users’ compliance with quarantine through selfies and GPS data. Users’ personal data will be administered by the Minister of Digitization, who has appointed a data protection officer. Each user’s identification, name, telephone number, quarantine location and quarantine end date can be shared with police and other government agencies. After two weeks, if the user does not report symptoms of Covid-19, the account will be deactivated — but the data will be stored for six years. The Ministry of Digitization claims that it must store the data for six years in case users pursue claims against the government. However, local privacy expert and Panoptykon Foundation cofounder Katarzyna Szymielewicz has questioned this rationale.

Even other countries that are part of the Anglo-American legal tradition are ramping up their use of data and working with the private sector to do so. The UK’s National Health Service is developing a data store that will include online/call center data from NHS Digital and Covid-19 test result data from the public health agency. While the NHS is working with private partner organizations and companies including Microsoft, Palantir Technologies, Amazon Web Services and Google, it has promised to keep all the data under its control, and to require those partners to destroy or return the data “once the public health emergency situation has ended.” The NHS also has committed to meet the requirements of data protection legislation by ensuring that individuals cannot be re-identified from the data in the data store.

Notably, each of the companies partnering with the NHS at one time or another has been subjected to scrutiny for its privacy practices. Some observers have noted that tech companies, which have been roundly criticized for a variety of reasons in recent years, may seek to use this pandemic for “reputation laundering.” As one observer cautioned: “Reputations matter, and there’s no reason the government or citizens should cast bad reputations aside when choosing who to work with or what to share” during this public health crisis.

At home

In the U.S., the federal government last enforced large-scale isolation and quarantine measures during the influenza (“Spanish Flu”) pandemic a century ago. But the Centers for Disease Control and Prevention track diseases on a daily basis by receiving case notifications from every state. The states mandate that healthcare providers and laboratories report certain diseases to the local public health authorities using personal identifiers. In other words, if you test positive for coronavirus, the government will know. Every state has laws authorizing quarantine and isolation, usually through the state’s health authority, while the CDC has authority through the federal Public Health Service Act and a series of presidential executive orders to exercise quarantine and isolation powers for specific diseases, including severe acute respiratory syndromes (a category into which the novel coronavirus falls).

Now local governments are issuing orders that empower law enforcement to fine and jail Americans for failing to practice social distancing. State and local governments have begun arresting and charging people who violate orders against congregating in groups. Rhode Island is requiring every non-resident who enters the state to be quarantined for two weeks, with police checks at the state’s transportation hubs and borders.

How governments discover violations of quarantine and social distancing orders will raise privacy concerns. Police have long been able to enforce based on direct observation of violations. But if law enforcement authorities identify violations of such orders based on data collection rather than direct observation, the Fourth Amendment may be implicated. In Jones and Carpenter, the Supreme Court has limited the warrantless tracking of Americans through GPS devices placed on their cars and through cellphone data. But building on the longstanding practice of contact tracing in fighting infectious diseases such as tuberculosis, GPS data has proven helpful in fighting the spread of Covid-19. This same data, though, also could be used to piece together evidence of violations of stay-at-home orders. As Chief Justice John Roberts wrote in Carpenter, “With access to [cell-site location information], the government can now travel back in time to retrace a person’s whereabouts… Whoever the suspect turns out to be, he has effectively been tailed every moment of every day for five years.”

The Fourth Amendment protects American citizens from government action, but the “reasonable expectation of privacy” test applied in Fourth Amendment cases connects the arenas of government action and commercial data collection. As Professor Paul Ohm of the Georgetown University Law Center notes, “the dramatic expansion of technologically-fueled corporate surveillance of our private lives automatically expands police surveillance too, thanks to the way the Supreme Court has construed the reasonable expectation of privacy test and the third-party doctrine.”

For example, the COVID-19 Mobility Data Network – infectious disease epidemiologists working with Facebook, Camber Systems and Cubiq – uses mobile device data to inform state and local governments about whether social distancing orders are effective. The tech companies give the researchers aggregated data sets; the researchers give daily situation reports to departments of health, but say they do not share the underlying data sets with governments. The researchers have justified this model based on users of the private companies’ apps having consented to the collection and sharing of data.
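“Aggregated data sets” in this context plausibly means region-level counts, often with small cells suppressed so that no handful of people can be singled out, rather than individual location trails. The following is a minimal sketch of that idea under stated assumptions (hypothetical data layout and threshold; the network’s actual pipeline may differ):

```python
from collections import Counter

# Hypothetical per-device records: (device_id, home_region, current_region).
records = [
    ("a1", "County-X", "County-X"),
    ("b2", "County-X", "County-X"),
    ("c3", "County-X", "County-X"),
    ("d4", "County-X", "County-Y"),
    ("e5", "County-Y", "County-Y"),
]

MIN_COUNT = 3  # suppress any cell smaller than this

def aggregate(rows):
    # Reduce device-level rows to (home, current) region counts,
    # dropping cells below the suppression threshold.
    counts = Counter((home, cur) for _, home, cur in rows)
    return {cell: n for cell, n in counts.items() if n >= MIN_COUNT}

# Only the three-device County-X cell survives; the smaller cells are
# suppressed, and no individual trail leaves the pipeline.
print(aggregate(records))
```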

However, the assumption that consumers have given informed consent to the collection of their data (particularly for the purpose of monitoring their compliance with social isolation measures during a pandemic) is undermined by studies showing the average consumer does not understand all the different types of data that are collected and how their information is analyzed and shared with third parties – including governments. Technology and telecommunications companies have neither asked me to opt into tracking for public health nor made clear how they are partnering with federal, state and local governments. This practice highlights that data will be divulged in ways consumers cannot imagine – because no one assumed a pandemic when agreeing to a company’s privacy policy. This information asymmetry is part of why we need federal privacy legislation.

On Friday afternoon, Apple and Google announced their opt-in Covid-19 contact tracing technology. The owners of the two most common mobile phone operating systems in the U.S. said that in May they would release application programming interfaces that enable interoperability between iOS and Android devices using official contact tracing apps from public health authorities. At an unspecified date, Bluetooth-based contact tracing will be built directly into the operating systems. “Privacy, transparency, and consent are of utmost importance in this effort,” the companies said in their press release.  

At this early stage, we do not yet know exactly how the proposed Google/Apple contact tracing system will operate. It sounds similar to Singapore’s TraceTogether, which is already available in the iOS and Android mobile app stores (it has a 3.3 out of 5 average rating in the former and a 4.0 out of 5 in the latter). TraceTogether is also described as a voluntary, Bluetooth-based system that avoids GPS location data, does not upload information without the user’s consent, and uses changing, encrypted identifiers to maintain user anonymity. Perhaps the most striking difference, at least to a non-technical observer, is that TraceTogether was developed and is run by the Singaporean government, which has been a point of concern for some observers. The U.S. version – like finding abducted children through Amber Alerts and fighting crime via Amazon Ring – will be a partnership between the public and private sectors.     
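For the technically curious, the rotating-identifier approach these systems describe can be sketched in a few lines. This is a simplified illustration of the general idea only; the actual Apple/Google and TraceTogether protocols differ in their cryptographic details:

```python
import hashlib
import hmac
import os

def daily_key() -> bytes:
    # Secret key generated on-device each day; it leaves the phone only if
    # its owner tests positive and consents to publishing it.
    return os.urandom(16)

def rolling_id(key: bytes, interval: int) -> bytes:
    # Short-lived pseudonym broadcast over Bluetooth during one interval
    # (e.g., 10-15 minutes). Without the key, successive IDs look unrelated.
    return hmac.new(key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

# Bob's phone logs the rolling IDs it hears nearby: no locations, no names.
alice_key = daily_key()
heard_by_bob = {rolling_id(alice_key, 42)}  # overheard during interval 42

# If Alice tests positive and consents, her daily keys are published. Bob's
# phone re-derives each interval's ID locally and checks for matches, so no
# central authority ever learns who met whom.
published_keys = [alice_key]
exposed = any(
    rolling_id(key, interval) in heard_by_bob
    for key in published_keys
    for interval in range(96)  # 96 fifteen-minute intervals per day
)
print("Possible exposure:", exposed)
```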

Recommendations

The global pandemic we now face is driving data usage in ways not contemplated by consumers. Entities in the private and public sectors are confronting new and complex choices about data collection, usage and sharing. Organizations with Chief Privacy Officers, Chief Information Security Officers and other personnel tasked with managing privacy programs are, relatively speaking, well-equipped to address these issues. Despite the extraordinary circumstances, senior management should continue to rely on the expertise and sound counsel of their CPOs and CISOs, who should continue to make decisions based on their established privacy and data security programs. Although developments are unfolding at warp speed, it is important – arguably now more than ever – to be intentional about privacy decisions.

For organizations that lack experience with privacy and data security programs (and individuals tasked with oversight of these areas), now is a great time to pause, do some research and exercise care. It is essential to think about the longer-term ramifications of choices made about data collection, use and sharing during the pandemic. The FTC offers easily accessible resources, including Protecting Personal Information: A Guide for Business, Start with Security: A Guide for Business, and Stick with Security: A Business Blog Series. While the Gramm-Leach-Bliley Act (GLB) applies only to financial institutions, the FTC’s GLB compliance blog outlines some data security best practices that apply more broadly. The National Institute of Standards and Technology (NIST) also offers security and privacy resources, including a privacy framework to help organizations identify and manage privacy risks. Private organizations such as the Center for Information Policy Leadership, the International Association of Privacy Professionals and the App Association offer helpful resources as well, as do trade associations. While it may seem like a suboptimal time to take a step back and focus on these strategic issues, remember that privacy and data security missteps can cause irrevocable harm. Counterintuitively, now is actually the best time to be intentional about choices in these areas.

Best practices like accountability, risk assessment and risk management will be key to navigating today’s challenges. Companies should take the time to assess and document the new and expanded risks arising from the collection, use and sharing of personal information. It is appropriate for these risk assessments to incorporate potential benefits and harms not only to the individual and the company, but to society as a whole. Upfront assessments can help companies establish controls and incentives to facilitate responsible behavior, as well as demonstrate that they are fully aware of the impact of their choices (risk assessment) and in control of their impact on people and programs (risk mitigation). Written assessments can also facilitate transparency with stakeholders, raise awareness internally about policy choices, and assist companies with ongoing monitoring and enforcement. Moreover, these assessments will facilitate a return to “normal” data practices when the crisis has passed.

In a similar vein, companies must engage in comprehensive vendor management with respect to the entities proposing to use and analyze their data. In addition to vetting proposed data recipients thoroughly, companies must be selective about the categories of information shared. The benefits of the proposed research must be balanced against individual protections, and companies should share only the data necessary to achieve the stated goals. To the extent feasible, data should be shared in de-identified and aggregated form, and data recipients should be subject to contractual obligations prohibiting re-identification. Moreover, companies must have policies in place to ensure compliance with research contracts, including data deletion obligations and prohibitions on re-identification, where appropriate. Finally, companies must implement mechanisms to monitor third-party compliance with those contractual obligations.
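
To make the aggregation point concrete, here is a minimal sketch of what “de-identified and aggregated” sharing might look like. It is purely illustrative – the schema, threshold and function name are hypothetical, not any company’s actual pipeline: raw device-level records are reduced to coarse regional counts, and cells too small to hide an individual are suppressed before anything leaves the company.

```python
# Illustrative sketch only -- the schema and threshold are hypothetical,
# not any company's actual pipeline. Raw device-level records are reduced
# to coarse regional counts, and small cells are suppressed before sharing.
from collections import Counter

MIN_CELL_SIZE = 50  # assumed threshold below which counts are withheld

def aggregate_for_sharing(records):
    """records: iterable of (region, date) pairs, one per observed device."""
    counts = Counter(records)
    # Suppress cells small enough that individuals might be re-identified.
    return {cell: n for cell, n in counts.items() if n >= MIN_CELL_SIZE}

raw = [("County A", "2020-04-10")] * 120 + [("County B", "2020-04-10")] * 3
print(aggregate_for_sharing(raw))  # County B's three devices are suppressed
```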

Similar principles of necessity and proportionality should guide governments as they make demands or requests for information from the private sector. Governments must recognize the weight with which they speak during this crisis and carefully balance data collection and usage against civil liberties. Governments also have special obligations: to ensure that any data collection done by them or at their behest is driven by the science of COVID-19; to be transparent with citizens about the use of data; and to provide due process for those who wish to challenge limitations on their rights. Finally, government actors should practice good data hygiene, including regularly reassessing the breadth of their data collection initiatives and adopting data retention and deletion policies.

In theory, the government’s role could shrink as market-driven responses emerge. For example, assuming universally accessible daily coronavirus testing that yields accurate results even during the incubation period, Hal Singer’s proposal for self-certification of non-infection among private actors is intriguing. Thom Lambert identified the inability to know who is infected as a “lemon problem”; Singer seeks a way for strangers to verify each other’s “quality” in the form of non-infection.

Whatever solutions we may accept during a pandemic, it is imperative to monitor the coronavirus situation as it improves, so we know when to lift the most drastic measures. Former Food and Drug Administration Commissioner Scott Gottlieb and other observers have called for maintaining surveillance because of concerns about a resurgence of the virus later this year. For any measure that conflicts with Americans’ constitutional rights to privacy and freedom of movement, metrics should be set in advance specifying the conditions under which the measure is no longer justified. In the absence of pre-determined metrics, governments may feel the same temptation as Hungary’s prime minister to keep renewing a “state of danger” that overrides citizens’ rights. As Slovak lawmaker Tomas Valasek has said, “It doesn’t just take the despots and the illiberals of this world, like Orbán, to wreak damage.” But privacy is not merely instrumental to other interests, and we do not have to sacrifice our right to it indefinitely in exchange for safety.

I recognize that halting the spread of the virus will require extensive and sustained effort, and I credit many governments with good intentions in attempting to save the lives of their citizens. But I refuse to accept that we must sacrifice privacy to reopen the economy. It seems a false choice to say that I must sacrifice my constitutional rights to privacy, freedom of association and free exercise of religion for another’s freedom of movement. Society should demand that equity, fairness and autonomy be respected in data uses, even in a pandemic. To quote Valasek again: “We need to make sure that we don’t go a single inch further than absolutely necessary in curtailing civil liberties in the name of fighting for public health.” History has taught us repeatedly that sweeping security powers granted to governments during an emergency persist long after the crisis has abated. To resist the gathering momentum toward this outcome, I will continue to emphasize the FTC’s learning on appropriate data collection and use. But my remit as an FTC Commissioner is even broader – when I was sworn in on Sept. 26, 2018, I took an oath to “support and defend the Constitution of the United States” – and so I shall.


[1] Many thanks to my Attorney Advisors Pallavi Guniganti and Nina Frant for their invaluable assistance in preparing this article.

The CPI Antitrust Chronicle published Geoffrey Manne’s and my recent paper, The Problems and Perils of Bootstrapping Privacy and Data into an Antitrust Framework, as part of a symposium on Big Data in the May 2015 issue. All of the papers are worth reading and pondering, but of course ours is the best ;).

In it, we analyze two of the most prominent theories of antitrust harm arising from data collection: privacy as a factor of non-price competition, and price discrimination facilitated by data collection. We also analyze whether data is serving as a barrier to entry and effectively preventing competition. We argue that, in the current marketplace, there are no plausible harms to competition arising from either non-price effects or price discrimination due to data collection online and that there is no data barrier to entry preventing effective competition.

The issues of how to regulate privacy, and what role competition authorities should play in that regulation, are only likely to increase in importance as the Internet marketplace continues to grow and evolve. Scholars and advocates have called on the European Commission and the FTC to give greater consideration to privacy concerns during merger review, and have encouraged them even to bring monopolization claims based upon data dominance. These calls should be rejected unless the underlying theories can satisfy the rigorous economic review of antitrust law. In our humble opinion, they cannot do so at this time.

Excerpts:

PRIVACY AS AN ELEMENT OF NON-PRICE COMPETITION

The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application.

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist.
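
To see why the disentangling is hard, consider a stylized formalization (ours, not the paper’s): when quality and price move at the same time, the quality-adjusted price can rise even as the nominal price falls.

```latex
% Stylized formalization (ours, not the paper's): with price p and a
% quality index q, define the quality-adjusted price
\[
  \tilde{p} = \frac{p}{q}.
\]
% A 5% price cut paired with a 10% quality degradation raises the
% quality-adjusted price even though the nominal price fell:
\[
  \tilde{p}' = \frac{0.95\,p}{0.90\,q} \approx 1.056\,\tilde{p}.
\]
```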

Second, product quality invariably can be measured along more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies both in its ability to tell time and in how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.
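
The watch example can be made precise with a stylized bit of notation (again ours, not the paper’s): let each consumer type weight the two quality dimensions differently, so the same design change helps some types and hurts others.

```latex
% Stylized notation (illustrative only): consumer type i weights function
% f and aesthetics a by \alpha_i and 1 - \alpha_i. A design change with
% \Delta a > 0 and \Delta f < 0 (a smaller, prettier, less reliable watch)
% changes type i's welfare by
\[
  \Delta W_i = \alpha_i \,\Delta f + (1 - \alpha_i)\,\Delta a ,
\]
% which is positive for aesthetics-minded types (low \alpha_i) and negative
% for function-minded ones (high \alpha_i). The net effect turns on the
% unobserved distribution of \alpha_i across consumers.
```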

PRICE DISCRIMINATION AS A PRIVACY HARM

If non-price effects cannot be relied upon to establish competitive injury (as explained above), then what can be the basis for incorporating privacy concerns into antitrust? One argument is that major data collectors (e.g., Google and Facebook) facilitate price discrimination.

The argument can be summed up as follows: Price discrimination could be a harm to consumers that antitrust law takes into consideration. Because companies like Google and Facebook are able to collect a great deal of data about their users for analysis, businesses could segment groups based on certain characteristics and offer them different deals. The resulting price discrimination could lead to many consumers paying more than they would in the absence of the data collection. Therefore, the data collection by these major online companies facilitates price discrimination that harms consumer welfare.

This argument misses a large part of the story, however. The flip side is that price discrimination could benefit those who receive lower prices under the scheme than they would have paid absent the data collection, a possibility explored by the recent White House Report on Big Data and Differential Pricing.

While privacy advocates have focused on the possible negative effects of price discrimination on one subset of consumers, they generally ignore the positive effects of businesses being able to expand output by serving previously underserved consumers. It is inconsistent with basic economic logic to suggest that a business armed with better data would use it to overcharge the consumers least able to pay while discounting to those most able; a profit-maximizing firm prices each segment according to its willingness to pay. If anything, price discrimination would likely promote more egalitarian outcomes by allowing companies to offer lower prices to poorer segments of the population—segments that can be identified by data collection and analysis.
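
A stylized numerical example (the figures are invented purely for illustration) shows how segmentation can expand output rather than merely raise prices.

```latex
% Hypothetical figures, for illustration only. Two segments: 100 consumers
% value the product at $12, 80 value it at $6; marginal cost is $4.
% Under uniform pricing the firm compares
\[
  \pi(p = 12) = 100 \times (12 - 4) = 800,
  \qquad
  \pi(p = 6) = 180 \times (6 - 4) = 360,
\]
% so it charges $12 and serves only the high-value segment (Q = 100).
% With segmentation it charges each group its valuation:
\[
  \pi = 100 \times (12 - 4) + 80 \times (6 - 4) = 960,
  \qquad
  Q = 180.
\]
% Output expands, and the 80 low-value consumers -- priced out entirely
% under uniform pricing -- are now served.
```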

If the group favored by “personalized pricing” is as big as—or bigger than—the group that pays higher prices, it is difficult to conclude that the practice reduces consumer welfare, even setting aside its effect on total welfare. Again, the question is one of magnitudes, and it has yet to be considered in detail by privacy advocates.

DATA BARRIER TO ENTRY

Either of these theories of harm is predicated on the inability or difficulty of competitors to develop alternative products in the marketplace—the so-called “data barrier to entry.” The argument is that upstarts do not have sufficient data to compete with established players like Google and Facebook, which in turn employ their data both to attract online advertisers and to foreclose their competitors from this crucial source of revenue. There are at least four reasons to be dubious of such arguments:

  1. Data is useful to all industries, not just online companies;
  2. It’s not the amount of data, but how you use it;
  3. Competition online is one click or swipe away; and
  4. Access to data is not exclusive.

CONCLUSION

Privacy advocates have thus far failed to make their case. Even in their most plausible forms, the arguments for incorporating privacy and data concerns into antitrust analysis do not survive legal and economic scrutiny. In the absence of strong arguments suggesting likely anticompetitive effects, and in the face of enormous analytical problems (and thus a high risk of error cost), privacy should remain a matter of consumer protection, not of antitrust.