Archives For privacy

[This post is an entry in Truth on the Market’s FTC UMC Rulemaking symposium. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

The Federal Trade Commission’s (FTC) Aug. 22 Advance Notice of Proposed Rulemaking on Commercial Surveillance and Data Security (ANPRM) is breathtaking in its scope. For an overview summary, see this Aug. 11 FTC press release.

In their dissenting statements opposing the ANPRM's release, Commissioners Noah Phillips and Christine Wilson expertly lay bare the notice's serious deficiencies. Phillips' dissent stresses that the ANPRM illegitimately arrogates to the FTC legislative power that properly belongs to Congress:

[The [A]NPRM] recast[s] the Commission as a legislature, with virtually limitless rulemaking authority where personal data are concerned. It contemplates banning or regulating conduct the Commission has never once identified as unfair or deceptive. At the same time, the ANPR virtually ignores the privacy and security concerns that have animated our [FTC] enforcement regime for decades. … [As such, the ANPRM] is the first step in a plan to go beyond the Commission’s remit and outside its experience to issue rules that fundamentally alter the internet economy without a clear congressional mandate. That’s not “democratizing” the FTC or using all “the tools in the FTC’s toolbox.” It’s a naked power grab.

Wilson's complementary dissent critically notes that the 2021 changes to FTC rules of practice governing consumer-protection rulemaking decrease opportunities for public input and vest significant authority solely with the FTC chair. She also echoes Phillips' overarching concern with FTC overreach (footnote citations omitted):

Many practices discussed in this ANPRM are presented as clearly deceptive or unfair despite the fact that they stretch far beyond practices with which we are familiar, given our extensive law enforcement experience. Indeed, the ANPRM wanders far afield of areas for which we have clear evidence of a widespread pattern of unfair or deceptive practices. … [R]egulatory and enforcement overreach increasingly has drawn sharp criticism from courts. Recent Supreme Court decisions indicate FTC rulemaking overreach likely will not fare well when subjected to judicial review.

Phillips and Wilson’s warnings are fully warranted. The ANPRM contemplates a possible Magnuson-Moss rulemaking pursuant to Section 18 of the FTC Act,[1] which authorizes the commission to promulgate rules dealing with “unfair or deceptive acts or practices.” The questions that the ANPRM highlights center primarily on concerns of unfairness.[2] Any unfairness-related rulemaking provisions eventually adopted by the commission will have to satisfy a strict statutory cost-benefit test that defines “unfair” acts, found in Section 5(n) of the FTC Act. As explained below, the FTC will be hard-pressed to justify addressing most of the ANPRM’s concerns in Section 5(n) cost-benefit terms.

Discussion

The requirements imposed by Section 5(n) cost-benefit analysis

Section 5(n) codifies the meaning of unfair practices, and thereby constrains the FTC’s application of rulemakings covering such practices. Section 5(n) states:

The Commission shall have no authority … to declare unlawful an act or practice on the grounds that such an act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.

In other words, a practice may be condemned as unfair only if it causes or is likely to cause “(1) substantial injury to consumers (2) which is not reasonably avoidable by consumers themselves and (3) not outweighed by countervailing benefits to consumers or to competition.”

This is a demanding standard. (For scholarly analyses of the standard’s legal and economic implications authored by former top FTC officials, see here, here, and here.)

First, the FTC must demonstrate that a practice imposes a great deal of harm on consumers, which they could not readily have avoided. This requires detailed analysis of the actual effects of a particular practice, not mere theoretical musings about possible harms that may (or may not) flow from such practice. Actual effects analysis, of course, must be based on empiricism: consideration of hard facts.

Second, assuming that this formidable hurdle is overcome, the FTC must then acknowledge and weigh countervailing welfare benefits that might flow from such a practice. In addition to direct consumer-welfare benefits, other benefits include “benefits to competition.” Those may include business efficiencies that reduce a firm’s costs, because such efficiencies are a driver of vigorous competition and, thus, of long-term consumer welfare. As the Organisation for Economic Co-operation and Development has explained (see OECD Background Note on Efficiencies, 2012, at 14), dynamic and transactional business efficiencies are particularly important in driving welfare enhancement.

In sum, under Section 5(n), the FTC must show actual, fact-based, substantial harm to consumers that they could not have escaped, acting reasonably. The commission must also demonstrate that such harm is not outweighed by consumer and (procompetitive) business-efficiency benefits. What's more, Section 5(n) makes clear that the FTC cannot "pull a rabbit out of a hat" and interject other "public policy" considerations as key factors in the rulemaking calculus ("[s]uch [other] public policy considerations may not serve as a primary basis for … [a] determination [of unfairness]").

It ineluctably follows as a matter of law that a Section 18 FTC rulemaking sounding in unfairness must be based on hard empirical cost-benefit assessments, which require data grubbing and detailed evidence-based economic analysis. Mere anecdotal stories of theoretical harm to some consumers that is alleged to have resulted from a practice in certain instances will not suffice.

As such, if an unfairness-based FTC rulemaking fails to adhere to the cost-benefit framework of Section 5(n), it inevitably will be struck down by the courts as beyond the FTC’s statutory authority. This conclusion is buttressed by the tenor of the Supreme Court’s unanimous 2021 opinion in AMG Capital v. FTC, which rejected the FTC’s claim that its statutory injunctive authority included the ability to obtain monetary relief for harmed consumers (see my discussion of this case here).

The ANPRM and Section 5(n)

Regrettably, the tone of the questions posed in the ANPRM indicates a lack of consideration for the constraints imposed by Section 5(n). Accordingly, any future rulemaking that sought to establish “remedies” for many of the theorized abuses found in the ANPRM would stand very little chance of being upheld in litigation.

The Aug. 11 FTC press release cited previously addresses several broad topical categories of concern: harms to consumers; harms to children; regulations; automated systems; discrimination; consumer consent; notice, transparency, and disclosure; remedies; and obsolescence. These categories are chock full of questions that imply the FTC may consider restrictions on business conduct that go far beyond the scope of the commission's authority under Section 5(n). (The questions are notably silent about the potential consumer benefits and procompetitive efficiencies that may arise from the business practices called into question.)

A few of the many questions set forth under just four of these topical listings (harms to consumers, harms to children, regulations, and discrimination) are highlighted below, to provide a flavor of the statutory overreach that characterizes all aspects of the ANPRM. Many other examples could be cited. (Phillips' dissenting statement provides a cogent and critical evaluation of ANPRM questions that embody such overreach.) Furthermore, although there is a short discussion of "costs and benefits" in the ANPRM press release, it is wholly inadequate to the task.

Under the category “harms to consumers,” the ANPRM press release focuses on harm from “lax data security or surveillance practices.” It asks whether FTC enforcement has “adequately addressed indirect pecuniary harms, including potential physical harms, psychological harms, reputational injuries, and unwanted intrusions.” The press release suggests that a rule might consider addressing harms to “different kinds of consumers (e.g., young people, workers, franchisees, small businesses, women, victims of stalking or domestic violence, racial minorities, the elderly) in different sectors (e.g., health, finance, employment) or in different segments or ‘stacks’ of the internet economy.”

These laundry lists invite, at best, anecdotal public responses alleging examples of perceived “harm” falling into the specified categories. Little or no light is likely to be shed on the measurement of such harm, nor on the potential beneficial effects to some consumers from the practices complained of (for example, better targeted ads benefiting certain consumers). As such, a sound Section 5(n) assessment would be infeasible.

Under “harms to children,” the press release suggests possibly extending the limitations of the FTC-administered Children’s Online Privacy Protection Act (COPPA) to older teenagers, thereby in effect rewriting COPPA and usurping the role of Congress (a clear statutory overreach). The press release also asks “[s]hould new rules set out clear limits on personalized advertising to children and teenagers irrespective of parental consent?” It is hard (if not impossible) to understand how this form of overreach, which would displace the supervisory rights of parents (thereby imposing impossible-to-measure harms on them), could be shoe-horned into a defensible Section 5(n) cost-benefit assessment.

Under “regulations,” the press release asks whether “new rules [should] require businesses to implement administrative, technical, and physical data security measures, including encryption techniques, to protect against risks to the security, confidentiality, or integrity of covered data?” Such new regulatory strictures (whose benefits to some consumers appear speculative) would interfere significantly in internal business processes. Specifically, they could substantially diminish the efficiency of business-security measures, diminish business incentives to innovate (for example, in encryption), and reduce dynamic competition among businesses.

Consumers also would be harmed by a related slowdown in innovation. Those costs undoubtedly would be high but hard, if not impossible, to measure. The FTC also asks whether a rule should limit “companies’ collection, use, and retention of consumer data.” This requirement, which would seemingly bypass consumers’ decisions to make their data available, would interfere with companies’ ability to use such data to improve business offerings and thereby enhance consumers’ experiences. Justifying new requirements such as these under Section 5(n) would be well-nigh impossible.

The category “discrimination” is especially problematic. In addressing “algorithmic discrimination,” the ANPRM press release asks whether the FTC should “consider new trade regulation rules that bar or somehow limit the deployment of any system that produces discrimination, irrespective of the data or processes on which those outcomes are based.” In addition, the press release asks “if the Commission [should] consider harms to other underserved groups that current law does not recognize as protected from discrimination (e.g., unhoused people or residents of rural communities)?”

The FTC cites no statutory warrant for the authority to combat such forms of "discrimination." It is not a civil-rights agency. It clearly is not authorized to issue anti-discrimination rules dealing with "groups that current law does not recognize as protected from discrimination." Any such rules, if issued, would be summarily struck down by the judiciary, even without regard to Section 5(n).

In addition, given the fact that “economic discrimination” often is efficient (and procompetitive) and may be beneficial to consumer welfare (see, for example, here), more limited economic anti-discrimination rules almost certainly would not pass muster under the Section 5(n) cost-benefit framework.     

Finally, while the ANPRM press release does contain a very short section entitled "costs and benefits," that section lacks any specific reference to the required Section 5(n) evaluation framework. Phillips' dissent points out that the ANPRM "simply fail[s] to provide the detail necessary for commenters to prepare constructive responses" on cost-benefit analysis. He stresses that the broad nature of the requests for commenters' views on costs and benefits renders the inquiry "not conducive to stakeholders submitting data and analysis that can be compared and considered in the context of a specific rule. … Without specific questions about [the costs and benefits of] business practices and potential regulations, the Commission cannot hope for tailored responses providing a full picture of particular practices."

In other words, the ANPRM does not provide the guidance needed to prompt the sorts of responses that might assist the FTC in carrying out an adequate Section 5(n) cost-benefit analysis.

Conclusion

The FTC would face almost certain defeat in court if it promulgated a broad rule addressing many of the perceived unfairness-based “ills” alluded to in the ANPRM. Moreover, although its requirements would (I believe) not come into effect, such a rule nevertheless would impose major economic costs on society.

Prior to final judicial resolution of its status, the rule would disincentivize businesses from engaging in a variety of data-related practices that enhance business efficiency and benefit many consumers. Furthermore, the FTC resources devoted to developing and defending the rule would not be applied to alternative welfare-enhancing FTC activities—a substantial opportunity cost.

The FTC should take heed of these realities and opt not to carry out a rulemaking based on the ANPRM. It should instead devote its scarce consumer-protection resources to prosecuting hard-core consumer fraud and deception—and, perhaps, to launching empirical studies into the economic-welfare effects of data security and commercial surveillance practices. Such studies, if carried out, should focus on dispassionate economic analysis and avoid policy preconceptions. (For example, studies involving digital platforms should take note of the existing economic literature, such as a paper indicating that digital platforms have generated enormous consumer-welfare benefits not accounted for in gross domestic product.)

One can only hope that a majority of FTC commissioners will apply common sense and realize that far-flung rulemaking exercises lacking in statutory support are bad for the rule of law, bad for the commission’s reputation, bad for the economy, and bad for American consumers.


[1] The FTC states specifically that it "is issuing this ANPR[M] pursuant to Section 18 of the Federal Trade Commission Act."

[2] Deceptive practices that might be addressed in a Section 18 trade regulation rule would be subject to the “FTC Policy Statement on Deception,” which states that “the Commission will find deception if there is a representation, omission or practice that is likely to mislead the consumer acting reasonably in the circumstances, to the consumer’s detriment.” A court reviewing an FTC Section 18 rule focused on “deceptive acts or practices” undoubtedly would consult this Statement, although it is not clear, in light of recent jurisprudential trends, that the court would defer to the Statement’s analysis in rendering an opinion. In any event, questions of deception, which focus on acts or practices that mislead consumers, would in all likelihood have little relevance to the evaluation of any rule that might be promulgated in light of the ANPRM.    

[TOTM: This guest post from Svetlana S. Gans and Natalie Hausknecht of Gibson Dunn is part of the Truth on the Market FTC UMC Symposium. If you would like to receive this and other posts relating to these topics, subscribe to the RSS feed here. If you have news items you would like to suggest for inclusion, please mail them to us at ghurwitz@laweconcenter.org and/or kfierro@laweconcenter.org.]

The Federal Trade Commission (FTC) launched one of the most ambitious rulemakings in agency history on Aug. 11, with its 3-2 vote to initiate an Advance Notice of Proposed Rulemaking (ANPRM) on commercial surveillance and data security. The divided vote, which broke down on partisan lines, stands in stark contrast to recent bipartisan efforts on Capitol Hill, particularly on the comprehensive American Data Privacy and Protection Act (ADPPA).

Although the rulemaking purports to pursue a new “privacy and data security” regime, it targets far more than consumer privacy. The ANPRM lays out a sweeping project to rethink the regulatory landscape governing nearly every facet of the U.S. internet economy, from advertising to anti-discrimination law, and even to labor relations. Any entity that uses the internet (even for internal purposes) is likely to be affected by this latest FTC action, and public participation in the proposed rulemaking will be important to ensure the agency gets it right.

Summary of the ANPRM  

The vague scope of the FTC’s latest ANPRM begins at its title: “Commercial Surveillance and Data Security” Rulemaking. The announcement states the FTC intends to explore rules “cracking down” on the “business of collecting, analyzing, and profiting from information about people.” The ANPRM then defines the scope of “commercial surveillance” to include virtually any data activity. For example, the ANPRM explains that it includes practices used “to set prices, curate newsfeeds, serve advertisements, and conduct research on people’s behavior, among other things.” The ANPRM also goes on to say that it is concerned about practices “outside of the retail consumer setting” that the agency traditionally regulates. Indeed, the ANPRM defines “consumer” to include “businesses and workers, not just individuals who buy or exchange data for retail goods and services.”

Unlike the bipartisan ADPPA, the ANPRM also takes aim at the "consent" model that the FTC has long advocated to ensure consumers make informed choices about their data online. It claims that "consumers may become resigned to" data practices and "have little to no actual control over what happens to their information." It also suggests that consumers "do not generally understand" data practices, such that their permission could not be "meaningful"—making express consumer consent to data practices "irrelevant."

The ANPRM further lists a disparate set of additional FTC concerns, from "pernicious dark pattern practices" to "lax data security practices" to "sophisticated digital advertising systems" to "stalking apps," "cyber bullying, cyberstalking, and the distribution of child sexual abuse material," and the use of "social media" among "kids and teens." It "finally" wraps up with a reference to "growing reliance on automated systems" that may create "new forms and mechanisms for discrimination" in areas like housing, employment, and healthcare. The agency's expressed concern with these automated systems is apparent "disparate outcomes" "even when automated systems consider only unprotected consumer traits."

Having set out these concerns, the ANPRM seeks to justify a new rulemaking via a list of what it describes as “decades” of “consumer data privacy and security” enforcement actions. The rulemaking then requests that the public answer 95 questions, covering many different legal and factual issues. For example, the agency requests the public weigh in on the practices “companies use to surveil consumers,” intangible and unmeasurable “harms” created by such practices, the most harmful practices affecting children and teens, techniques that “manipulate consumers into prolonging online activity,” how the commission should balance costs and benefits from any regulation, biometric data practices, algorithmic errors and disparate impacts, the viability of consumer consent, the opacity of “consumer surveillance practices,” and even potential remedies the agency should consider.  

Commissioner Statements in Support of the ANPRM

Every Democratic commissioner issued a separate supporting statement. Chair Lina Khan's statement justified the rulemaking on the grounds that the FTC is the "de facto law enforcer in this domain." She also doubled down on the decision to address not only consumer privacy, but issues affecting all "opportunities in our economy and society, as well as core civil liberties and civil rights" and described being "especially eager to build a record" related to: the limits of "notice and consent" frameworks, as opposed to withdrawing permission for data collection "in the first place"; how to navigate "information asymmetries" with companies; how to address certain "business models" "premised on" persistent tracking; discrimination in automated processes; and workplace surveillance.

Commissioner Rebecca Kelly Slaughter's longer statement more explicitly attacked the agency's "notice-and-consent regime" as having "failed to protect users." She expressed hope that the new rules would take on biometric or location tracking, algorithmic decision-making, and lax data security practices as "long overdue." Commissioner Slaughter further brushed aside concerns that the rulemaking was inappropriate while Congress considered comprehensive privacy legislation, asserting that the magnitude of the rulemaking was a reason to act, not to shy away. She also expressed interest in data-minimization specifications, discriminatory algorithms, and kids and teens issues.

Commissioner Alvaro Bedoya’s short statement likewise expressed support for acting. However, he noted the public comment period would help the agency “discern whether and how to proceed.” Like his colleagues, he identified his particular interest in “emerging discrimination issues”: the mental health of kids and teens; the protection of non-English speaking communities; and biometric data. On the pending privacy legislation, he noted that:

[ADPPA] is the strongest privacy bill that has ever been this close to passing. I hope it does pass. I hope it passes soon…. This ANPRM will not interfere with that effort. I want to be clear: Should the ADPPA pass, I will not vote for any rule that overlaps with it.

Commissioner Statements Opposed to the ANPRM

Both Republican commissioners published dissents. Commissioner Christine S. Wilson's dissent urged deference to Congress as it considers a comprehensive privacy law. Yet she also expressed broader concern about the FTC's recent changes to its Section 18 rulemaking process that "decrease opportunities for public input and vest significant authority for the rulemaking proceedings solely with the Chair" and the unjustified targeting of practices not subject to prior enforcement action. Notably, Commissioner Wilson also worried the rulemaking was unlikely to survive judicial scrutiny, indicating that Chair Khan's statements give her "no basis to believe that she will seek to ensure that proposed rule provisions fit within the Congressionally circumscribed jurisdiction of the FTC."

Commissioner Noah Phillips’ dissent criticized the ANPRM for failing to provide “notice of anything” and thus stripping the public of its participation rights. He argued that the ANPRM’s “myriad” questions appear to be a “mechanism to fish for legal theories that might justify outlandish regulatory ambition outside our jurisdiction.” He further noted that the rulemaking positions the FTC as a legislature to regulate in areas outside of its expertise (e.g., labor law) with potentially disastrous economic costs that it is ill-equipped to understand.

Commissioner Phillips further argued the ANPRM attacks disparate practices based on an "amalgam of cases concerning very different business models and conduct" that cannot show the prevalence of misconduct required for Section 18 rulemaking. He also criticized the FTC for abandoning its own informed-consent model based on paternalistic musings about individuals' ability to decide for themselves. And finally, he criticized the FTC's apparent overreach in claiming the mantle of "civil rights enforcer" when Congress never explicitly gave it the authority to declare discrimination or disparate impacts unlawful in this space.

Implications for Regulated Entities and Others Concerned with Potential Agency Overreach

The sheer breadth of the ANPRM demands the avid attention of potentially regulated entities or those concerned with the FTC's aggressive rulemaking agenda. The public should seek to meaningfully participate in the rulemaking process to ensure the FTC considers a broad array of viewpoints and has before it the facts necessary to properly define the scope of its own authority and the consequences of any proposed privacy regulation. For example, the FTC may issue a notice of proposed rulemaking defining acts or practices as unfair or deceptive "only where it has reason to believe that the unfair or deceptive acts or practices which are the subject of the proposed rulemaking are prevalent" (emphasis added).

15 U.S. Code § 57a also states that the FTC may make a determination that unfair or deceptive acts or practices are prevalent only if:  “(A) it has issued cease and desist orders regarding such acts or practices, or (B) any other information available to the Commission indicates a widespread pattern of unfair or deceptive acts or practices.” That means that, under the Magnuson-Moss Section 18 rulemaking that the FTC must use here, the agency must show (1) the prevalence of the practices (2) how they are unfair or deceptive, and (3) the economic effect of the rule, including on small businesses and consumers. Any final regulatory analysis also must assess the rule’s costs and benefits and why it was chosen over alternatives. On each count, effective advocacy supported by empirical and sound economic analysis by the public may prove dispositive.

The FTC may have a particularly difficult time meeting this burden of proof with many of the innocuous (and currently permitted) practices identified in the ANPRM. For example, modern online commerce like automated decision-making is a part of the engine that has powered a decade of innovation, lowered logistical and opportunity costs, and opened up amazing new possibilities for small businesses seeking to serve local consumers and their communities. Commissioner Wilson makes this point well:

Many practices discussed in this ANPRM are presented as clearly deceptive or unfair despite the fact that they stretch far beyond practices with which we are familiar, given our extensive law enforcement experience. Indeed, the ANPRM wanders far afield of areas for which we have clear evidence of a widespread pattern of unfair or deceptive practices. 

The FTC also may be setting itself on an imminent collision course with the “major questions” doctrine, in particular. On the last day of its term this year, the Supreme Court handed down West Virginia v. Environmental Protection Agency, which applied the “major questions doctrine” to rule that the EPA can’t base its controversial Clean Power Plan on a novel interpretation of a relatively obscure provision of the Clean Air Act. An agency rule of such vast “economic and political significance,” Chief Justice John Roberts wrote, requires “clear congressional authorization.” (See “The FTC Heads for Legal Trouble” by Svetlana Gans and Eugene Scalia.) Parties are likely to argue the same holds true here with regard to the FTC’s potential regulatory extension into areas like anti-discrimination and labor law. If the FTC remains on this aggressive course, any final privacy rulemaking could also be a tempting target for a reinvigorated nondelegation doctrine.  

Some members of Congress also may question the wisdom of the ANPRM venturing into the privacy realm at all right now, a point advanced by several of the commissioners. Shortly after the FTC’s announcement, House Energy and Commerce Committee Chairman Frank Pallone Jr. (D-N.J.) stated:

I appreciate the FTC’s effort to use the tools it has to protect consumers, but Congress has a responsibility to pass comprehensive federal privacy legislation to better equip the agency, and others, to protect consumers to the greatest extent.

Sen. Roger Wicker (R-Miss.), the ranking member on the Senate Commerce Committee and a leading GOP supporter of the bipartisan legislation, likewise said that the FTC’s move helps “underscore the urgency for the House to bring [ADPPA]  to the floor and for the Senate Commerce Committee to advance it through committee.”  

The FTC's ANPRM will likely have broad implications for the U.S. economy. Stakeholders can participate in the rulemaking in several ways, including registering by Aug. 31 to speak at the FTC's Sept. 8 public forum. Stakeholders should also consider submitting public comments and empirical evidence within 60 days of the ANPRM's publication in the Federal Register, and insist that the FTC hold informal hearings as required under the Magnuson-Moss Act.

While the FTC is rightfully the nation's top consumer cop, an advance notice of this scope demands active public awareness and participation to ensure the agency gets it right.

 

Having earlier passed through subcommittee, the American Data Privacy and Protection Act (ADPPA) has now been cleared for floor consideration by the U.S. House Energy and Commerce Committee. Before the markup, we noted that the ADPPA mimics some of the worst flaws found in the European Union's General Data Protection Regulation (GDPR), while creating new problems that the GDPR had avoided. Alas, the amended version of the legislation approved by the committee not only failed to correct those flaws, but in some cases it actually undid some of the welcome corrections that had been made to the original discussion draft.

Is Targeted Advertising ‘Strictly Necessary’?

The ADPPA's original discussion draft classified "information identifying an individual's online activities over time or across third party websites" in the broader category of "sensitive covered data," for which a consumer's expression of affirmative consent ("cookie consent") would be required for collection or processing. Perhaps noticing the questionable utility of such a rule, the bill's sponsors removed "individual's online activities" from the definition of "sensitive covered data" in the version of the ADPPA that was ultimately introduced.

The manager's amendment from Energy and Commerce Committee Chairman Frank Pallone (D-N.J.) reverted that change, and "individual's online activities" are once again deemed to be "sensitive covered data." However, the marked-up version of the ADPPA doesn't require express consent to collect sensitive covered data. In fact, it seems not to consider the possibility of user consent; firms will instead be asked to prove that their collection of sensitive data was a "strict necessity."

The new rule for sensitive data—in Section 102(2)—is that collecting or processing such data is allowed “where such collection or processing is strictly necessary to provide or maintain a specific product or service requested by the individual to whom the covered data pertains, or is strictly necessary to effect a purpose enumerated” in Section 101(b) (though with exceptions—notably for first-party advertising and targeted advertising).

This raises the question of whether, e.g., the use of targeted advertising based on a user's online activities is "strictly necessary" to provide or maintain Facebook's social network. Even if the courts eventually decide, in some cases, that it is necessary, we can expect a good deal of litigation on this point. This litigation risk will impose significant burdens on providers of ad-supported online services. Moreover, it would effectively invite judges to make business decisions, a role for which they are profoundly ill-suited.

Because the ADPPA includes a "right to opt-out of targeted advertising" (Section 204(c)) and a special targeted-advertising "permissible purpose" (Section 101(b)(17)), it must be possible for businesses to engage in targeted advertising. And if it is possible, then collecting and processing the information needed for targeted advertising—including information on an "individual's online activities," e.g., unique identifiers (Section 2(39))—must be capable of being "strictly necessary to provide or maintain a specific product or service requested by the individual." (Alternatively, it could be strictly necessary for one of the other permissible purposes in Section 101(b), but none of them appear to apply to collecting data for the purpose of targeted advertising.)

The ADPPA itself thus provides for the possibility of targeted advertising. Therefore, there should be no reason for legal ambiguity about when collecting "individual's online activities" is "strictly necessary to provide or maintain a specific product or service requested by the individual." Do we want judges or other government officials to decide which ad-supported services "strictly" require targeted advertising? Choosing business models for private enterprises is hardly an appropriate role for the government. The easiest way out of this conundrum would be simply to undo the ill-considered extension of "sensitive covered data" and revert to the definition in the ADPPA version that was initially introduced.

Developing New Products and Services

As noted previously, the original ADPPA discussion draft allowed first-party use of personal data to “provide or maintain a specific product or service requested by an individual” (Section 101(a)(1)). What about using the data to develop new products and services? Can a business even request user consent for that? Under the GDPR, that is possible. Under the ADPPA, it may not be.

The general limitation on data use ("provide or maintain a specific product or service requested by an individual") was retained from the original ADPPA discussion draft in the version approved by the committee. As originally introduced, the bill included an exception that could have partially addressed the concern in Section 101(b)(2) (emphasis added):

With respect to covered data previously collected in accordance with this Act, notwithstanding this exception, to process such data as necessary to perform system maintenance or diagnostics, to maintain a product or service for which such data was collected, to conduct internal research or analytics, to improve a product or service for which such data was collected …

Arguably, developing new products and services largely involves “internal research or analytics,” which would be covered under this exception. If the business later wanted to invite users of an old service to use a new service, the business could contact them based on a separate exception for first-party marketing and advertising (Section 101(b)(11) of the introduced bill).

This welcome development was reversed in the manager’s amendment. The new text of the exception (now Section 101(b)(2)(C)) is narrower in a key way (emphasis added): “to conduct internal research or analytics to improve a product or service for which such data was collected.” Hence, it still looks like businesses will find it difficult to use first-party data to develop new products or services.

‘De-Identified Data’ Remains Unclear

Our earlier analysis noted significant confusion in the ADPPA’s concept of “de-identified data.” Neither the introduced version nor the markup amendments addressed those concerns, so it seems worthwhile to repeat and update the criticism here. The drafters seemed to be aiming for a partial exemption from the default data-protection regime for datasets that no longer contain personally identifying information, but that are derived from datasets that once did. Instead of providing such an exemption, however, the rules for de-identified data essentially extend the ADPPA’s scope to nonpersonal data, while also creating a whole new set of problems.

The basic problem is that the definition of “de-identified data” in the ADPPA is not limited to data derived from identifiable data. In the marked-up version, the definition covers: “information that does not identify and is not linked or reasonably linkable to a distinct individual or a device, regardless of whether the information is aggregated.” In other words, it is the converse of “covered data” (personal data): whatever is not “covered data” is “de-identified data.” Even if some data are not personally identifiable and are not a result of a transformation of data that was personally identifiable, they still count as “de-identified data.” If this reading is correct, it creates an absurd result that sweeps all information into the scope of the ADPPA.

For the sake of argument, let's assume that this confusion can be fixed and that the definition of "de-identified data" is limited to data that:

  1. are derived from identifiable data, but
  2. hold a possibility of re-identification (weaker than "reasonably linkable") and
  3. are processed by the entity that previously processed the original identifiable data.

Remember that we are talking about data that are not “reasonably linkable to an individual.” Hence, the intent appears to be that the rules on de-identified data would apply to nonpersonal data that would otherwise not be covered by the ADPPA.

The rationale for this may be that it is difficult, legally and practically, to differentiate between personally identifiable data and data that are not personally identifiable. A good deal of seemingly “anonymous” data may be linked to an individual—e.g., by connecting the dataset at hand with some other dataset.

The case for regulation in an example where a firm clearly dealt with personal data, and then derived some apparently de-identified data from them, may actually be stronger than in the case of a dataset that was never directly derived from personal data. But is that case sufficient to justify the ADPPA’s proposed rules?

The ADPPA imposes several duties on entities dealing with “de-identified data” in Section 2(12) of the marked-up version:

  1. To take “reasonable technical measures to ensure that the information cannot, at any point, be used to re-identify any individual or device that identifies or is linked or reasonably linkable to an individual”;
  2. To publicly commit “in a clear and conspicuous manner—
    1. to process and transfer the information solely in a de-identified form without any reasonable means for re-identification; and
    2. to not attempt to re-identify the information with any individual or device that identifies or is linked or reasonably linkable to an individual;”
  3. To “contractually obligate[] any person or entity that receives the information from the covered entity or service provider” to comply with all of the same rules and to include such an obligation “in all subsequent instances for which the data may be received.”

The first duty is superfluous and adds interpretative confusion, given that de-identified data, by definition, are not “reasonably linkable” with individuals.

The second duty—public commitment—unreasonably restricts what can be done with nonpersonal data. Firms may have many legitimate reasons to de-identify data and then to re-identify them later. This provision would effectively prohibit firms from attempts at data minimization (resulting in de-identification) if those firms may at any point in the future need to link the data with individuals. It seems that the drafters had some very specific (and likely rare) mischief in mind here, but ended up prohibiting a vast sphere of innocuous activity.

Note that, for data to become “de-identified data,” they must first be collected and processed as “covered data” in conformity with the ADPPA and then transformed (de-identified) in such a way as to no longer meet the definition of “covered data.” If someone then re-identifies the data, this will again constitute “collection” of “covered data” under the ADPPA. At every point of the process, personally identifiable data is covered by the ADPPA rules on “covered data.”

Finally, the third duty—“share alike” (to “contractually obligate[] any person or entity that receives the information from the covered entity to comply”)—faces a very similar problem as the second duty. Under this provision, the only way to preserve the option for a third party to identify the individuals linked to the data will be for the third party to receive the data in a personally identifiable form. In other words, this provision makes it impossible to share data in a de-identified form while preserving the possibility of re-identification.

Logically, one would expect the ADPPA to allow sharing the data in a de-identified form; this would align with the principle of data minimization. What the ADPPA does instead is to effectively impose a duty to share de-identified personal data together with identifying information. This is a truly bizarre result, directly contrary to the principle of data minimization.

Fundamental Issues with Enforcement

One of the most important problems with the ADPPA is its enforcement provisions. Most notably, the private right of action creates pernicious incentives for excessive litigation by providing for both compensatory damages and open-ended injunctive relief. Small businesses have a right to cure before damages can be sought, but many larger firms are not given a similar entitlement. Given such open-ended provisions as whether using web-browsing behavior is “strictly necessary” to improve a product or service, the litigation incentives become obvious. At the very least, there should be a general opportunity to cure, particularly given the broad restrictions placed on essentially all data use.

The bill also creates multiple overlapping power centers for enforcement (as we have previously noted):

The bill carves out numerous categories of state law that would be excluded from pre-emption… as well as several specific state laws that would be explicitly excluded, including Illinois’ Genetic Information Privacy Act and elements of the California Consumer Privacy Act. These broad carve-outs practically ensure that ADPPA will not create a uniform and workable system, and could potentially render the entire pre-emption section a dead letter. As written, it offers the worst of both worlds: a very strict federal baseline that also permits states to experiment with additional data-privacy laws.

Unfortunately, the marked-up version appears to double down on these problems. For example, the bill pre-empts the Federal Communication Commission (FCC) from enforcing sections 222, 338(i), and 631 of the Communications Act, which pertain to privacy and data security. An amendment was offered that would have pre-empted the FCC from enforcing any provisions of the Communications Act (e.g., sections 201 and 202) for data-security and privacy purposes, but it was withdrawn. Keeping two federal regulators on the beat for a single subject area creates an inefficient regime. The FCC should be completely pre-empted from regulating privacy issues for covered entities.

The amended bill also includes an ambiguous provision that appears to serve as a partial carveout for enforcement by the California Privacy Protection Agency (CPPA). Some members of the California delegation—notably, committee members Anna Eshoo and Doris Matsui (both D-Calif.)—have expressed concern that the bill would pre-empt California's own California Privacy Rights Act. A proposed amendment by Eshoo to clarify that the bill was merely a federal "floor" and that state laws may go beyond ADPPA's requirements failed in a 48-8 roll call vote. However, the marked-up version of the legislation does explicitly specify that the CPPA "may enforce this Act, in the same manner, it would otherwise enforce the California Consumer Privacy Act." How courts might interpret this language should the CPPA seek to enforce provisions of the CCPA that otherwise conflict with the ADPPA is unclear, thus magnifying the problem of compliance with multiple regulators.

Conclusion

As originally conceived, the basic conceptual structure of the ADPPA was, to a very significant extent, both confused and confusing. Not much, if anything, has since improved—especially in the marked-up version that reverted the ADPPA to some of the notably bad features of the original discussion draft. The rules on de-identified data are also very puzzling: their effect contradicts the basic principle of data minimization that the ADPPA purports to uphold. Those examples strongly suggest that the ADPPA is still far from being a properly considered candidate for comprehensive federal privacy legislation.

European Union lawmakers appear close to finalizing a number of legislative proposals that aim to reform the EU’s financial-regulation framework in response to the rise of cryptocurrencies. Prominent within the package are new anti-money laundering and “countering the financing of terrorism” rules (AML/CFT), including an extension of the so-called “travel rule.” The travel rule, which currently applies to wire transfers managed by global banks, would be extended to require crypto-asset service providers to similarly collect and make available details about the originators and beneficiaries of crypto-asset transfers.

This legislative process proceeded with unusual haste in recent months, which partially explains why legal objections to the proposals have not been adequately addressed. The resulting legislation is fundamentally flawed to such an extent that some of its key features are clearly invalid under EU primary (treaty) law and liable to be struck down by the Court of Justice of the European Union (CJEU). 

In this post, I will offer a brief overview of some of the concerns, which I also discuss in this recent Twitter thread. I focus primarily on the travel rule, which—in the light of EU primary law—constitutes a broad and indiscriminate surveillance regime for personal data. This characterization also applies to most of AML/CFT.

The CJEU, the EU’s highest court, established a number of conditions that such legally mandated invasions of privacy must satisfy in order to be valid under EU primary law (the EU Charter of Fundamental Rights). The legal consequences of invalidity are illustrated well by the Digital Rights Ireland judgment, in which the CJEU struck down an entire piece of EU legislation (the Data Retention Directive). Alternatively, the CJEU could decide to interpret EU law as if it complied with primary law, even if that is contrary to the text.

The Travel Rule in the Transfer of Funds Regulation

The EU travel rule is currently contained in the 2015 Wire Transfer Regulation (WTR). But at the end of June, EU legislators reached a likely final deal on its replacement, the Transfer of Funds Regulation (TFR; see the original proposal from July 2021). I focus here on the TFR, but much of the argument also applies to the older WTR now in force. 

The TFR imposes obligations on payment-system providers and providers of crypto-asset transfers (referred to here, collectively, as "service providers") to collect, retain, transfer to other service providers, and—in some cases—report to state authorities:

…information on payers and payees, accompanying transfers of funds, in any currency, and the information on originators and beneficiaries, accompanying transfers of crypto-assets, for the purposes of preventing, detecting and investigating money laundering and terrorist financing, where at least one of the payment or crypto-asset service providers involved in the transfer of funds or crypto-assets is established in the Union. (Article 1 TFR)

The TFR’s scope extends to money transfers between bank accounts or other payment accounts, as well as transfers of crypto assets other than peer-to-peer transfers without the involvement of a service provider (Article 2 TFR). Hence, the scope of the TFR includes, but is not limited to, all those who send or receive bank transfers. This constitutes the vast majority of adult EU residents.

The information that service providers are obligated to collect and retain (under Articles 4, 10, 14, and 21 TFR) includes data that allow for the identification of both sides of a transfer of funds (the parties' names, as well as the address, country, official personal document number, customer identification number, or the sender's date and place of birth) and for linking their identity with the (payment or crypto-asset) account number or crypto-asset wallet address. The TFR also obligates service providers to collect and retain additional data to verify the accuracy of the identifying information "on the basis of documents, data or information obtained from a reliable and independent source" (Articles 4(4), 7(3), 14(5), 16(2) TFR).

The scope of the obligation to collect and retain verification data is vague and is likely to lead service providers to require their customers to provide copies of passports, national ID documents, bank or payment-account statements, and utility bills, as is the case under the WTR and the 5th AML Directive. Such data is overwhelmingly likely to go beyond information on the civil identity of customers and will often, if not almost always, allow inferring even sensitive personal data about the customer.

The data-collection and retention obligations in the TFR are general and indiscriminate. No distinction is made in the TFR's data-collection and retention provisions based on the likelihood of a connection with criminal activity, except for verification data in the case of transfers of funds (an exception not applicable to crypto assets). Even the distinction in the case of verification data for transfers of funds ("has reasonable grounds for suspecting money laundering or terrorist financing") arguably lacks the precision required under CJEU case law.

Analogies with the CJEU’s Passenger Name Records Decision

In late June, following its established approach in similar cases, the CJEU gave its judgment in the Ligue des droits humains case, which challenged the EU and Belgian regimes on passenger name records (PNR). The CJEU decided there that the applicable EU law, the PNR Directive, is valid under EU primary law. But it reached that result by interpreting some of the directive's provisions in ways contrary to their express language and by deciding that some national legal rules implementing the directive are invalid. Some features of the PNR regime that were challenged before the court are strikingly similar to features of the TFR regime.

First, just like the TFR, the PNR rules imposed a five-year data-retention period for the data of all passengers, even where there is no "objective evidence capable of establishing a risk that relates to terrorist offences or serious crime having an objective link, even if only an indirect one, with those passengers' air travel." The court decided that this was a disproportionate restriction of the rights to privacy and to the protection of personal data under Articles 7 and 8 of the EU Charter of Fundamental Rights. Instead of invalidating the relevant article of the PNR Directive, the CJEU reinterpreted it as if it only allowed for five-year retention in cases where there is evidence of a relevant connection to criminality.

Applying analogous reasoning to the TFR, which imposes an indiscriminate five-year data retention period in its Article 21, the conclusion must be that this TFR provision is invalid under Articles 7-8 of the charter. Article 21 TFR may, at minimum, need to be recast to apply only to that transaction data where there is "objective evidence capable of establishing a risk" that it is connected to serious crime.

Second, the court also considered the issue of government access to data that has already been collected. Under the CJEU's established interpretation of the EU Charter, "it is essential that access to retained data by the competent authorities be subject to a prior review carried out either by a court or by an independent administrative body." In the PNR regime, at least some countries (such as Belgium) assigned this role to their "passenger information units" (PIUs). The court noted that a PIU is "an authority competent for the prevention, detection, investigation and prosecution of terrorist offences and of serious crime, and that its staff members may be agents seconded from the competent authorities" (e.g. from police or intelligence authorities). But according to the court:

That requirement of independence means that that authority must be a third party in relation to the authority which requests access to the data, in order that the former is able to carry out the review, free from any external influence. In particular, in the criminal field, the requirement of independence entails that the said authority, first, should not be involved in the conduct of the criminal investigation in question and, secondly, must have a neutral stance vis-a-vis the parties to the criminal proceedings …

The CJEU decided that PIUs do not satisfy this requirement of independence and, as such, cannot decide on government access to the retained data.

The TFR (especially its Article 19 on provision of information) does not provide for prior independent review of access to retained data. To the extent that such a review is conducted by Financial Intelligence Units (FIUs) under the AML Directive, concerns arise that are very similar to those raised about PIUs under the PNR regime. While Article 32 of the AML Directive requires FIUs to be independent, that doesn't necessarily mean that they are independent in the ways required of the authority that will decide access to retained data under Articles 7-8 of the EU Charter. For example, the AML Directive does not preclude the possibility of seconding public prosecutors, police, or intelligence officers to FIUs.

It is worth noting that none of the conclusions reached by the CJEU in the PNR case are novel; they are well-grounded in established precedent. 

A General Proportionality Argument

Setting aside specific analogies with previous cases, the TFR clearly has not been accompanied by a more general and fundamental reflection on the proportionality of its basic scheme in the light of the EU Charter. A pressing question is whether the TFR’s far-reaching restrictions of the rights established in Articles 7-8 of the EU Charter (and perhaps other rights, like freedom of expression in Article 11) are strictly necessary and proportionate. 

Arguably, the AML/CFT regime—including the travel rule—is significantly more costly and more rights-restricting than potential alternatives. The basic problem is that there is no reliable data on the relative effectiveness of measures like the travel rule. Defenders of the current AML/CFT regime focus on evidence that it contributes to preventing or prosecuting some crime. But this is not the relevant question when it comes to proportionality. The relevant question is whether those measures are as effective or more effective than less costly and more privacy-preserving alternatives. One conservative estimate holds that AML compliance costs in Europe were "120 times the amount successfully recovered from criminals and exceeded the estimated total of criminal funds (including funds not seized or identified)."

The fact that the current AML/CFT regime is a de facto global standard cannot serve as a sufficient justification either, given that EU fundamental law is perfectly comfortable in rejecting non-European law-enforcement practices (see the CJEU’s decision in Schrems). The travel rule has been unquestioningly imported to EU law from U.S. law (via FATF), where the standards of constitutional protection of privacy are much different than under the EU Charter. This fact would likely be noticed by the Court of Justice in any putative challenge to the TFR or other elements of the AML/CFT regime. 

Here, I only flag the possibility of a general proportionality challenge. Much more work needs to be done to flesh it out.

Conclusion

Due to the political and resource constraints of the EU legislative process, it is possible that the legislative proposals in the financial-regulation package did not receive sufficient legal scrutiny from the perspective of their compatibility with the EU Charter of Fundamental Rights. This hypothesis would explain the presence of seemingly clear violations, such as the indiscriminate five-year data-retention period. Given that none of the proposals has, as yet, been voted into law, making the legislators aware of the problem may help to address at least some of the issues.

Legal arguments about the AML/CFT regime’s incompatibility with the EU Charter should be accompanied by concrete alternative proposals to achieve the goals of preventing and combating serious crime, goals that, according to the best evidence, the current AML/CFT regime pursues ineffectively. We need more regulatory imagination. For example, one part of the solution may be to properly staff and equip government agencies tasked with prosecuting financial crime.

But it’s also possible that the proposals, including the TFR, will be adopted broadly without amendment. In that case, the main recourse available to EU citizens (or to any EU government) will be to challenge the legality of the measures before the Court of Justice.

Fireworks came a bit early this year. Between the Supreme Court’s end-of-term decisions and this week’s January 6th Committee hearings, it wasn’t a week with much antitrust news coming out of either the FTC or Congress. But the Supreme Court made sure to keep things exciting: the opinion in West Virginia v. EPA will reshape the regulatory landscape for years to come, including the world of antitrust.

This week’s headline is the WV v. EPA opinion. Nominally about the EPA’s efforts to regulate coal power plants, the opinion is really about the so-called major questions doctrine (MQD). Summarizing in a sentence a case that will be the subject of hundreds of law review articles and years of clarifying litigation, the MQD says that agencies can’t enact regulations of vast political or economic significance unless Congress clearly delegates them the authority and tools to do so. 

This outcome isn’t surprising – but it is nonetheless a big deal. For some general discussion, you could do worse than listening to Corbin Barthold and Berin Szóka dissecting the opinion in real-time. Focusing specifically on the FTC, commentators anticipating the ruling have argued that the MQD could substantially curtail the FTC’s UMC authority. Now that we have the opinion, that outcome seems likely confirmed.

The contours of the major questions doctrine are unclear. That is one of the most trenchant criticisms of the doctrine. But the Court’s opinion points to several factors beyond a rule merely being of “vast political or economic significance” (which remains the defining characteristic). Claiming new, or only rarely used, regulatory authority suggests a major question, especially if that authority would mark a “transformative expansion” in the agency’s authority. So does power based in vague language or “ancillary provisions” of a statute. So does Congress having “conspicuously and repeatedly declined” to regulate the issue through legislation. All of these factors apply to the FTC using its UMC authority, based on the ancillary rulemaking authority of Section 6(g), to transformatively expand its reach to any number of issues believed to be of FTC interest.

At the same time, those concerned about expansive UMC authority should not be too quick to think the UMC rulemaking project dead. The EPA and many other agencies to which the MQD is likely to apply, such as the FCC, have narrower scope than the FTC. While broad, the EPA’s authority is tailored to specific environmental issues; the FCC’s authority is tailored to specific communications technologies. Arguably, the FTC’s authority is more general than that of other agencies to which the MQD will clearly apply – unfair methods of competition can occur in any part of the economy.

Realistically, however, the prospects of the FTC surviving an MQD challenge if it pushes aggressive use of its UMC authority are slim. The bareness of the Section 6(g) rulemaking authority is challenge enough. But perhaps even more important is the theory underlying WV v. EPA and the MQD. Chief Justice Roberts’s majority opinion invokes both separation-of-powers and legislative-intent concerns. The MQD is about both whether Congress meant to, and whether it was appropriate for it to, delegate broad authority to an agency. It seems clear that, if Congress wants to delegate substantial power to an agency, the Court expects Congress to be very clear about what that power is and how it is to be used. It is not enough to say "EPA, you regulate environmental stuff; FTC, you regulate competition stuff."

Turning now to other news. Can we call AICOA dead yet? Probably not, but the time for Sen. Amy Klobuchar (D-MN) to save her American Innovation and Choice Online Act is running low. In addition to the academics, advocates, and Democratic senators (see last week’s Roundup for those details), social-justice groups have joined the chorus expressing concerns about how AICOA might limit platforms’ ability to engage in content moderation. Alden Abbott has also brought focus to largely overlooked rule-of-law concerns raised by AICOA.

Speaking of other dead things, ADPPA seems to be spinning in its own grave. Late last week Sen. Maria Cantwell (D-WA), chair of the committee the bill would need to go through, said she has no plans to consider the bill in committee – and that Sen. Chuck Schumer (D-NY) has no interest in bringing it to the Senate floor. That sounds pretty dead. But the Court’s Dobbs opinion has made it deader. Over the weekend, a spokesperson for Cantwell said the bill “does not adequately protect against the privacy threats posed by a post-Roe world.”

So, it seems likely the FTC remains the only potential privacy bulwark to which privacy advocates can turn. President Biden is already asking the agency to address Dobbs-related privacy issues. But query: would an FTC effort to develop rules addressing privacy concerns present a major question, given that these are issues of longstanding congressional debate and substantial economic and political importance? (I expect not; but I expect the issue could end up in court.)

Some quick hits, literally. Today one forgets about the CFPB or its director, Rohit Chopra, at their peril. The Chamber of Commerce is trying to change this. ITIF’s Julie Carlson talks about the meteoric rise and fall of Lina Khan. The fall seems premature, but the WV v. EPA opinion has certainly brought the ground closer. It may be a less literal hit, perhaps, but MLB’s antitrust exemption may be in its last innings. And where’s the beef? Price-stabilization legislation is moving through the Senate Ag Committee.

Some parting thoughts? If you insist. Last week we mentioned this week’s Concurrences conference on the Rulemaking Authority of the FTC. It was a great event! Among other things, it introduced Dan Crane’s new, must-read book on the topic, featuring chapters by a who’s who of writers in the field. Several authors have previously contributed to the Truth on the Market symposium on the topic (hey, this post is part of that, too!) – and in the coming week we will have some more contributions from those authors.

Finally, a Friday afternoon read: Last week was Microsoft Internet Explorer’s last as a going concern. What can those concerned about big tech learn from the browser wars? Find out here.

The FTC UMC Roundup, part of the Truth on the Market FTC UMC Symposium, is a weekly roundup of news relating to the Federal Trade Commission’s antitrust and Unfair Methods of Competition authority. If you would like to receive this and other posts relating to these topics, subscribe to the RSS feed here. If you have news items you would like to suggest for inclusion, please mail them to us at ghurwitz@laweconcenter.org and/or kfierro@laweconcenter.org.

Just three weeks after a draft version of the legislation was unveiled by congressional negotiators, the American Data Privacy and Protection Act (ADPPA) is heading to its first legislative markup, set for tomorrow morning before the U.S. House Energy and Commerce Committee’s Consumer Protection and Commerce Subcommittee.

Though the bill’s legislative future remains uncertain, particularly in the U.S. Senate, it would be appropriate to check how the measure compares with, and could potentially interact with, the comprehensive data-privacy regime promulgated by the European Union’s General Data Protection Regulation (GDPR). A preliminary comparison of the two shows that the ADPPA risks adopting some of the GDPR’s flaws, while adding some entirely new problems.

A common misconception about the GDPR is that it imposed a requirement for “cookie consent” pop-ups that mar the experience of European users of the Internet. In fact, this requirement comes from a different and much older piece of EU law, the 2002 ePrivacy Directive. In most circumstances, the GDPR itself does not require express consent for cookies or other common and beneficial mechanisms to keep track of user interactions with a website. Website publishers could likely rely on one of two lawful bases for data processing outlined in Article 6 of the GDPR:

  • data processing is necessary in connection with a contractual relationship with the user, or
  • “processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party” (unless overridden by interests of the data subject).

For its part, the ADPPA generally adopts the “contractual necessity” basis for data processing but excludes the option to collect or process “information identifying an individual’s online activities over time or across third party websites.” The ADPPA instead classifies such information as “sensitive covered data.” It’s difficult to see what benefit users would derive from having to click that they “consent” to features that are clearly necessary for the most basic functionality, such as remaining logged in to a site or adding items to an online shopping cart. But the expected result will be many, many more popup consent queries, like those that already bedevil European users.

Using personal data to create new products

Section 101(a)(1) of the ADPPA expressly allows the use of “covered data” (personal data) to “provide or maintain a specific product or service requested by an individual.” But the legislation is murkier when it comes to the permissible uses of covered data to develop new products. This would only clearly be allowed where each data subject concerned could be asked if they “request” the specific future product. By contrast, under the GDPR, it is clear that a firm can ask for user consent to use their data to develop future products.

Moving beyond Section 101, we can look to the “general exceptions” in Section 209 of the ADPPA, specifically the exception in Section 209(a)(2):

With respect to covered data previously collected in accordance with this Act, notwithstanding this exception, to perform system maintenance, diagnostics, maintain a product or service for which such covered data was collected, conduct internal research or analytics to improve products and services, perform inventory management or network management, or debug or repair errors that impair the functionality of a service or product for which such covered data was collected by the covered entity, except such data shall not be transferred.

While this provision mentions conducting “internal research or analytics to improve products and services,” it also refers to “a product or service for which such covered data was collected.” The concern here is that this could be interpreted as only allowing “research or analytics” in relation to existing products known to the data subject.

The road ends here for personal data that the firm collects itself. Somewhat paradoxically, the firm could more easily make the case for using data obtained from a third party. Under Section 302(b) of the ADPPA, a firm only has to ensure that it is not processing “third party data for a processing purpose inconsistent with the expectations of a reasonable individual.” Such a relatively broad “reasonable expectations” basis is not available for data collected directly by first-party covered entities.

Under the GDPR, aside from the data subject’s consent, the firm also could rely on its own “legitimate interest” as a lawful basis to process user data to develop new products. It is true, however, that due to requirements that the interests of the data controller and the data subject must be appropriately weighed, the “legitimate interest” basis is probably less popular in the EU than alternatives like consent or contractual necessity.

Developing this path in the ADPPA would arguably provide a more sensible basis for data uses like the reuse of data for new product development. This could be superior even to express consent, which faces problems like “consent fatigue.” These are unlikely to be solved by promulgating detailed rules on “affirmative consent,” as proposed in Section 2 of the ADPPA.

Problems with ‘de-identified data’

Another example of significant confusion in the ADPPA’s basic conceptual scheme is the bill’s notion of “de-identified data.” The drafters seem to have been aiming for a partial exemption from the default data-protection regime for datasets that no longer contain personally identifying information, but that are derived from datasets that once did. Instead of providing such an exemption, however, the rules for de-identified data essentially extend the ADPPA’s scope to nonpersonal data, while also creating a whole new set of problems.

The basic problem is that the definition of “de-identified data” in the ADPPA is not limited to data derived from identifiable data. The definition covers: “information that does not identify and is not linked or reasonably linkable to an individual or a device, regardless of whether the information is aggregated.” In other words, it is the converse of “covered data” (personal data): whatever is not “covered data” is “de-identified data.” Even if some data are not personally identifiable and are not a result of a transformation of data that was personally identifiable, they still count as “de-identified data.” If this reading is correct, it creates an absurd result that sweeps all information into the scope of the ADPPA.
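
To see why the definition sweeps so broadly, consider a minimal, purely illustrative sketch. The category names and example records below are hypothetical and are not drawn from the bill beyond the quoted definition; the point is simply that, read literally, any record not reasonably linkable to an individual or device ends up in the “de-identified data” bucket, whatever its origin.

```python
# Illustrative only: a toy classifier for the ADPPA's two categories,
# using the quoted definition that "de-identified data" is simply
# whatever is not linkable to an individual or device.

def classify(record: dict) -> str:
    """Return 'covered data' if the record is reasonably linkable to an
    individual or device; otherwise 'de-identified data' (the literal
    complement, regardless of the record's origin)."""
    if record.get("reasonably_linkable", False):
        return "covered data"
    return "de-identified data"

# A record derived from personal data and then stripped of identifiers.
stripped_purchase_history = {"reasonably_linkable": False, "origin": "personal data"}

# A record that was never personal to begin with (hypothetical example).
weather_sensor_reading = {"reasonably_linkable": False, "origin": "sensor"}

print(classify(stripped_purchase_history))  # de-identified data
print(classify(weather_sensor_reading))     # de-identified data -- also swept into the ADPPA's duties
```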

For the sake of argument, let’s assume that this confusion can be fixed and that the definition of “de-identified data” is limited to data that is:

  1. derived from identifiable data, but
  2. that hold a possibility of re-identification (weaker than “reasonably linkable”) and
  3. are processed by the entity that previously processed the original identifiable data.

Remember that we are talking about data that are not “reasonably linkable to an individual.” Hence, the intent appears to be that the rules on de-identified data would apply to non-personal data that would otherwise not be covered by the ADPPA.

The rationale for this may be that it is difficult, legally and practically, to differentiate between personally identifiable data and data that are not personally identifiable. A good deal of seemingly “anonymous” data may be linked to an individual—e.g., by connecting the dataset at hand with some other dataset.

The case for regulation in an example where a firm clearly dealt with personal data, and then derived some apparently de-identified data from them, may actually be stronger than in the case of a dataset that was never directly derived from personal data. But is that case sufficient to justify the ADPPA’s proposed rules?

The ADPPA imposes several duties on entities dealing with “de-identified data” (that is, all data that are not considered “covered” data):

  1. to take “reasonable measures to ensure that the information cannot, at any point, be used to re-identify any individual or device”;
  2. to publicly commit “in a clear and conspicuous manner—
    1. to process and transfer the information solely in a de-identified form without any reasonable means for re-identification; and
    2. to not attempt to re-identify the information with any individual or device;”
  3. to “contractually obligate[] any person or entity that receives the information from the covered entity to comply with all of the” same rules.

The first duty is superfluous and adds interpretative confusion, given that de-identified data, by definition, are not “reasonably linkable” with individuals.

The second duty — public commitment — unreasonably restricts what can be done with nonpersonal data. Firms may have many legitimate reasons to de-identify data and then to re-identify them later. This provision would effectively bar firms from pursuing data minimization (which results in de-identification) whenever they may at some future point need to link the data with individuals. It seems that the drafters had some very specific (and likely rare) mischief in mind here but ended up prohibiting a vast sphere of innocuous activity.

Note that, for data to become “de-identified data,” they must first be collected and processed as “covered data” in conformity with the ADPPA and then transformed (de-identified) in such a way as to no longer meet the definition of “covered data.” If someone then re-identifies the data, this will again constitute “collection” of “covered data” under the ADPPA. At every point of the process, personally identifiable data is covered by the ADPPA rules on “covered data.”

Finally, the third duty—“share alike” (to “contractually obligate[] any person or entity that receives the information from the covered entity to comply”)—faces much the same problem as the second duty. Under this provision, the only way to preserve a third party’s option to identify the individuals linked to the data will be for that third party to receive the data in personally identifiable form. In other words, the provision makes it impossible to share data in a de-identified form while preserving the possibility of re-identification. One would expect the law to permit sharing data in de-identified form, which would align with the principle of data minimization. What the ADPPA does instead is effectively require that, if re-identification is to remain possible, the data be shared together with identifying information. This is a truly bizarre result, directly contrary to the principle of data minimization.

Conclusion

The basic conceptual structure of the legislation that subcommittee members will take up this week is, to a very significant extent, both confused and confusing. Perhaps in tomorrow’s markup, a more open and detailed discussion of what the drafters were trying to achieve could help to improve the scheme, as it seems that some key provisions of the current draft would lead to absurd results (e.g., those directly contrary to the principle of data minimization).

Given that the GDPR is already a well-known point of reference, including for U.S.-based companies and privacy professionals, the ADPPA may do better to re-use the best features of the GDPR’s conceptual structure while cutting its excesses. Re-inventing the wheel by proposing new concepts did not work well in this ADPPA draft.

Welcome to the FTC UMC Roundup for June 10, 2022. This is a week of headlines! One would be forgiven for assuming that our focus, once again, would be on the American Innovation and Choice Online Act (AICOA). I heard on the radio yesterday that its champion, Sen. Amy Klobuchar (D-MN), has the 60 votes it needs to pass, and we are told the vote will be “quite soon.” Yet that is not our headline this week. So it goes in a busy week of news.

This week’s headline is FTC Chair Lina Khan’s press tour–a clear sign of big things on the horizon. This past week she spoke with the AP, Axios, CNN, The Hill, Politico, Protocol, New York Times, Vox, Wall Street Journal, and Washington Post, and probably more. Almost a year to the day into her term as Chair, it seems she may have something to say? Yes: “There are [sic] a whole set of major policy initiatives that we have underway that we’re expecting will come to fruition over this next year.” 

The Chair’s press tour consistently struck several chords. She emphasized three priorities: merger guidelines and enforcement, regulating non-compete agreements, and privacy and security. In several interviews she discussed the use of both enforcement and rulemaking. It seems clear that a proposal for rules targeting non-compete agreements using the FTC’s unfair methods of competition (UMC) authority is imminent. It also seems likely that these rules will be modest. In several of the interviews Khan emphasized proceeding cautiously with respect to process. This speaks to one of the questions everyone has been asking: will Khan approach UMC rulemaking slowly, using modest initial rules to lay the groundwork for more ambitious future rules but risking the clock on her term as Chair running out before much can be accomplished–or will she instead take a more aggressive approach, for instance by pushing ahead with a slate of proposed rules right out of the gate? We seem to have at least an initial answer: she hopes slow and steady will win the race.

Slow and steady doesn’t mean not aggressive. Khan’s interviews clearly suggest more aggressive merger enforcement moving forward–including potential challenges to mergers that have cleared the HSR review period. While not new news, Khan also made clear her preference to block transactions outright instead of allowing firms to cure potentially problematic parts of proposed deals. And she also discussed potential rulemaking relating to mergers. Perhaps most noteworthy was her discussion of “user privacy and commercial surveillance” in several interviews–including some in which it was unclear whether these concerns sounded in consumer protection or competition. The inclusion of “commercial surveillance” suggests a broader focus than traditional privacy concerns–perhaps including business models or competition in the advertising space.

Another theme was Khan’s blurred distinction between merely enforcing existing law and transforming the FTC. Her view is probably best described as neither and both: technology has transformed the economy and the FTC’s existing law is flexible enough to adapt to those changes. That, surely, will frame the central questions–likely to ultimately be answered by the courts–as the FTC charts a course across this sea of change: whether Congress empowered the FTC to regulate wherever the market took it and, if so, whether such power is too broad for Congress to have given to an agency.

That brings us to Congress. AICOA’s uncertain future remains uncertain. We can say with certainty that the bill has entered the proxy war phase. Supporters of the bill, having already played the “exclude favored industries from the bill” hand, are now targeting leadership directly. And industry still covered by the bill–if you can call a small number of individual firms an industry–is pulling out the lobbying stops, including getting the message out directly to consumers.

If AICOA is to pass, it will do so upon a fragile coalition–at least 10 Republicans will need to cross party lines to support the legislation. Several Republicans seem poised to support the bill today, but will that be true tomorrow? Conservative voices including the Wall Street Journal are urging them not to. Not-so-conservative voices like Mike Masnick also raise concerns about the strange bedfellows needed to make the AICOA dream real. Both sides make the same point: Republican support for the bill comes from a belief that the bill addresses Republican concerns about censorship by BigTech. The Wall Street Journal argues that states are already addressing censorship concerns through narrower legislation that doesn’t risk the harm to innovation that AICOA could bring; Masnick warns Democrats that the Republican belief that AICOA could worsen the content moderation landscape is non-frivolous. 

With Republican support for the bill built on so soft a foundation–clearly not based on antitrust concerns–that support could shift quickly. Indeed, one wonders whether this fragile bipartisan coalition will survive the January 6th Committee hearings that started this week.

Some quick hits before we leave. This was a busy week for the FTC in healthcare. Continuing its focus in recent weeks on pharmacy benefit managers (PBMs), the FTC has now opened a probe of the industry. And the Commission has sued to block multiple hospital mergers in New Jersey and Utah. There were several reminders that Elon Musk’s proposed acquisition of Twitter has passed the HSR review period without challenge–perhaps someone should remind reporters on the Elon beat that that won’t prevent the FTC from challenging the merger? And in case anyone is wondering whether a settlement is on the table for Facebook, Khan has made clear that the FTC will gladly settle with Facebook–Facebook just needs to accept all the FTC’s terms.

A closing note: If you’re reading this on a lazy Friday afternoon in June and could use a good listen during lunch or on the commute home, you could do worse than listening to Richard Pierce, professor and Administrative Law guru, discuss whether administrative law allows the FTC to use rulemaking to change antitrust law.  

The FTC UMC Roundup, part of the Truth on the Market FTC UMC Symposium, is a weekly roundup of news relating to the Federal Trade Commission’s antitrust and Unfair Methods of Competition authority. If you would like to receive this and other posts relating to these topics, subscribe to the RSS feed here. If you have news items you would like to suggest for inclusion, please mail them to us at ghurwitz@laweconcenter.org and/or kfierro@laweconcenter.org.

 

Welcome to the FTC UMC Roundup for June 3, 2022–Memorial Day week. The holiday meant we had a short week, but we still have plenty of news to share. It also means we’re now in meteorological summer, a reminder that the sands of legislative time run quickly through the hourglass. So it’s perhaps unsurprising that things continue to heat up on the legislative front, from antitrust to privacy and even some saber-rattling on remedies. Plus a fair bit of traditional-feeling action coming out of the FTC. Let’s jump in.

At the Top

This week’s headline isn’t quite UMC- or even antitrust-related, but it’s headline-worthy nonetheless: after 14 years as COO of Facebook/Meta, Sheryl Sandberg has decided it’s time to lean her way out of the role. There aren’t obvious lines to read between with this departure–but it nonetheless marks a significant change for the company and comes at a challenging time for it.

On the Hill

Turning to Congress, our first topic is Sen. Amy Klobuchar’s (D-MN) continued efforts to wrangle up enough support for the American Innovation and Choice Online Act (AICOA). The hold-up appears to be on the Democratic side of the aisle. The bill’s Republican co-sponsor, Sen. Josh Hawley (R-Mo.), says of Democratic efforts to rally support that “they don’t think they have the votes.” Also on the topic of AICOA, the International Center for Law and Economics hosted a discussion about the legislation this past week. Lazar Radic offered a recap here, complete with a link to the recording.

Reuters reports that Big Tech is ramping up efforts against AICOA. A spokesperson for Senator Klobuchar responded to a statement released by Amazon by asking “Who do you trust?” Well, Big Tech over Congress by a 2.5-to-1 margin, with a majority of Americans disfavoring increased regulation of Big Tech. The “who do you trust” question was actually focusing on concerns that some small businesses have shared about Amazon. How would AICOA affect small business? Geoff Manne weighs in, discussing the harm that AICOA could bring to the startup and venture capital markets.

AICOA isn’t the only bill making the rounds this week. A bipartisan privacy bill came out of left field, which is also where it seems likely to stay, with Sen. Brian Schatz (D-Hawaii) sending a letter to the Senate Commerce Committee “begging them to pump the brakes” on the bill. What’s the concern? Well, the bill is a compromise–one side agreed to preempt state privacy legislation in exchange for getting a private right of action. Sen. Schatz, likely along with many others, isn’t willing to lose existing state legislation. The bill is likely DOA in this Congress; probably even more DOA post-2022.

Other legislative news includes another bipartisan bill that would streamline permitting for certain tech industries. Ultimately proposed in the interest of supply-chain resilience and on-shoring critical industries, this seems to set the stage for future “left hand vs. right hand” industrial policy.

At the Agencies

While most of this week’s news has been focused on Congress, the FTC and DOJ have been busy as well. Bloomberg reports on the increased attention the FTC is giving to Amazon, including some details about how resources allocated to the investigation have changed and that John Newman is leading the charge within the agency. And there are rumblings that the FTC could still challenge the Amazon-MGM deal, even post-closing. 

DOJ and the FTC have announced a June 14/15 workshop “to explore new approaches to enforcing the antitrust laws in the pharmaceutical industry.” Despite the curious phrasing (there aren’t that many ways to enforce a law!) this event could provide insight into the FTC’s thinking about potential UMC rulemaking. 

Binyamin Applebaum has an interesting NY Times opinion piece arguing that President Biden needs to appoint more judges with antitrust expertise to the bench. The lack of antitrust and regulatory expertise among Biden’s appointees to date is notable. Of course, Applebaum likely has a different sort of “antitrust expertise” in mind than most antitrust experts do. As Brian Albrecht writes in his own National Review op-ed, “Antitrust is Easy (When you Think You Know All the Answers).”

The “we need more judges” argument juxtaposes with AAG Kanter’s recent comments that he wants to bring cases, lots and lots of cases. “If we don’t go to court, then we’re regulators, not enforcers,” he recently commented at a University of Chicago conference. That is his approach to “the need to update and adapt our antitrust enforcement to address new market realities.” It remains to be seen how the courts will respond. Regardless, it is refreshing to see a preference for the antitrust laws to be enforced through the Article III courts.

Closing Notes

If you’re looking for some distraction on your commute home, we have two recommendations this week. The top choice is the Tech Policy Podcast discussion with FTC Commissioner Noah Phillips. And when you’re done with that, Mark Jamison will point you to an AEI discussion with Howard Beales, former FTC Chair Tim Muris, and former FTC Commissioner and Acting Chair Maureen K. Ohlhausen.

The FTC UMC Roundup, part of the Truth on the Market FTC UMC Symposium, is a weekly roundup of news relating to the Federal Trade Commission’s antitrust and Unfair Methods of Competition authority. If you would like to receive this and other posts relating to these topics, subscribe to the RSS feed here. If you have news items you would like to suggest for inclusion, please mail them to us at ghurwitz@laweconcenter.org and/or kfierro@laweconcenter.org.

Welcome to the Truth on the Market FTC UMC Roundup for May 27, 2022. This week we have (Hail Mary?) revisions to Sen. Amy Klobuchar’s (D-Minn.) American Innovation and Choice Online Act, initiatives that can’t decide whether they belong in Congress or the Federal Trade Commission, and yet more commentary on inflation and antitrust, along with a twist ending.

This Week’s Headline

Sen. Klobuchar has shared a revised version of her proposed American Innovation and Choice Online Act. What’s different? Not much. The main change is that several industries—banks and telecom, notably—are excluded from coverage. That was probably an effort to win some Republican votes for the bill. But, headed into the midterms, it appears some congressional Democrats view this more as a poison pill than a good bill—one they don’t think their constituents are willing to swallow.

Back at the FTC, the commission has announced that it will investigate the recent shortage of infant formula. This could focus on both consumer protection and competition issues. The market for infant formula in the United States is both fairly concentrated and also highly regulated. There are lots of interesting issues here (reminder to any academics reading this, we have an open call for papers for research relating to market-structuring regulation). 

The blurry line between FTC and Congress remains blurry. The FTC’s call for comments relating to pharmacy benefit managers (PBMs) closed this week, with more than 500 comments, at the same time that bipartisan legislation relating to PBMs has been introduced. And Sens. Mike Rounds (R-S.D.) and Elizabeth Warren (D-Mass.) want the FTC to investigate price fixing in the beef industry.

Concentrating a bit on big-picture policy issues, the number of friends Larry Summers has in the White House is shrinking faster than the dollar, as he worries about the embrace of “hipster antitrust,” including that the administration’s antitrust policy is driving inflation. On the other side of the inflation-antitrust ledger, economists at the Boston Federal Reserve Bank released a paper arguing that high concentration increases inflation. Among others, ICLE Chief Economist Brian Albrecht calls foul. Still on the inflation beat, it’s no secret that the biggest tech companies hold a lot of cash. Some may wonder, with the cost of holding cash so high, is a buying spree on the horizon? (Answer: not if the FTC keeps holding up mergers!)

A Few Quick Hits

Former FTC Commissioner Josh Wright and former commission staffer Derek Moore reflect on FTC morale. And Howard Beales and former FTC Chair Tim Muris wonder whether the “national nanny” is back on the beat.

It’s consumer protection, not antitrust, news but Twitter has been hit with a $150 million fine for doing bad stuff with user data between 2013 and 2019. Perhaps DuckDuckGo will be up next for the FTC. It turns out that the browser built on promises that it doesn’t track you has a deal with Microsoft to let Microsoft track you. That gives us an excuse to mention the FTC’s call for presentations for PrivacyCon 2022.

In international news, the United Kingdom’s Competition and Markets Authority has opened a second investigation into Google’s AdTech practices. And Shane Tewes of the American Enterprise Institute has a nice discussion with Peter Brown from the European Parliament’s liaison office about American versus European approaches to technology policy.

We close with a twist ending: One of the concerns critics of the FTC’s newfound embrace of its UMC authority have is that expansive, vague authority given to regulators enables a flabby, useless government that is paradoxically too powerful. Which is why it’s interesting to see Matt Stoller of the American Economic Liberties Project, of all people, express that concern. Strange bedfellows indeed!

The FTC UMC Roundup, part of the Truth on the Market FTC UMC Symposium, is a weekly roundup of news relating to the Federal Trade Commission’s antitrust and Unfair Methods of Competition authority. If you would like to receive this and other posts relating to these topics, subscribe to the RSS feed here. If you have news items you would like to suggest for inclusion, please mail them to us at ghurwitz@laweconcenter.org and/or kfierro@laweconcenter.org.

[The following is a guest post from Andrew Mercado, a research assistant at the Mercatus Center at George Mason University and an adjunct professor and research assistant at George Mason’s Antonin Scalia Law School.]

Barry Schwartz’s seminal work “The Paradox of Choice” has received substantial attention since its publication nearly 20 years ago. In it, Schwartz argued that, faced with an ever-increasing plethora of products to choose from, consumers often feel overwhelmed and seek to limit the number of choices they must make.

In today’s online digital economy, a possible response to this problem is for digital platforms to use consumer data to present consumers with a “manageable” array of choices and thereby simplify their product selection. Appropriate “curation” of product-choice options may substantially benefit consumer welfare, provided that government regulators stay out of the way.   

New Research

In a new paper in the American Economic Review, Mark Armstrong and Jidong Zhou—of Oxford and Yale universities, respectively—develop a theoretical framework to understand how companies compete using consumer data. They conclude that consumer, producer, and total welfare all shift when different privacy regimes change the amount of information a company can use to personalize recommendations.

The authors note that, at least in theory, there is an optimal situation that maximizes total welfare (scenario one). This is when a platform can aggregate information on consumers to such a degree that buyers and sellers are perfectly matched, leading to consumers buying their first-best option. While this can result in marginally higher prices, understandably leading to higher welfare for producers, search and mismatch costs are minimized by the platform, leading to a high level of welfare for consumers.

The highest level of aggregate consumer welfare comes when product differentiation is minimized (scenario two), leading to a high number of substitutes and low prices. This, however, comes with some level of mismatch. Since consumers are not matched with any recommendations, search costs are high and introduce some error. Some consumers may have had a higher level of welfare with an alternative product, but do not feel the negative effects of such mismatch because of the low prices. Therefore, consumer welfare is maximized, but producer welfare is significantly lower.

Finally, the authors suggest a “nearly total welfare” optimal solution in suggesting a “top two-best” scheme (scenario three), whereby consumers are shown their top two best options without explicit ranking. This nearly maximizes total welfare, since consumers are shown the best options for them and, even if the best match isn’t chosen, the second-best match is close in terms of welfare.
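
For readers who find a worked example helpful, below is a minimal, purely illustrative simulation of the three scenarios. It is not the Armstrong-Zhou model itself; the valuations, prices, and search cost are invented so that the toy reproduces the qualitative ranking described above (total welfare highest under scenario one, consumer welfare highest under scenario two, and scenario three close to the total-welfare optimum).

```python
# Illustrative only: a toy numerical comparison of the three scenarios
# described above. The valuations, prices, and search cost below are
# invented for illustration; they are not parameters from the paper.
import random

random.seed(0)
N_CONSUMERS, N_PRODUCTS, COST = 10_000, 20, 1.0  # COST = production cost per unit

def simulate(price, picker):
    """Average (consumer surplus, producer surplus) under one regime."""
    cs = ps = 0.0
    for _ in range(N_CONSUMERS):
        values = [random.uniform(0, 10) for _ in range(N_PRODUCTS)]
        match_value, search_cost = picker(values)
        cs += match_value - price - search_cost
        ps += price - COST
    return cs / N_CONSUMERS, ps / N_CONSUMERS

# Scenario one: perfect personalization -- best match, but a high price.
personalized = simulate(8.0, lambda v: (max(v), 0.0))

# Scenario two: no personalization -- a random pick at a low competitive
# price, with a search/mismatch cost borne by the consumer.
data_privacy = simulate(2.0, lambda v: (random.choice(v), 1.0))

# Scenario three: curated top-two list -- the consumer ends up with one of
# the two best matches, at a price between the other two regimes.
curated = simulate(7.5, lambda v: (random.choice(sorted(v)[-2:]), 0.0))

for name, (cs, ps) in [("personalized", personalized),
                       ("data-privacy", data_privacy),
                       ("curated list", curated)]:
    print(f"{name:>13}: consumer {cs:5.2f}  producer {ps:5.2f}  total {cs + ps:5.2f}")
```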

Implications

In cases of platform data aggregation and personalization, scenarios one, two, and three can be represented as different privacy regimes.

Scenario one (a personalized-product regime) is akin to unlimited data gathering, whereby platforms can use as much information as is available to perfectly suggest products based on revealed data. From a competition perspective, interfirm competition will tend to decrease under this regime, since product differentiation will be accentuated, and substitutability will be masked. Since one single product will be shown as the “correct” product, the consumer will not want to shift to a different, welfare-inferior product and firms have incentive to produce ever more specialized products for a relatively higher price. Total welfare under this regime is maximized, with producers using their information to garner a relatively large share of economic surplus. Producers are effectively matched with consumers, and all gains from trade are realized.

Scenario two (a data-privacy regime) is one of near-perfect data privacy, whereby the platform is only able to recommend products based on general information, such as sales trends, new products, or product specifications. Under this regime, competition is maximized, since consumers consider a large pool of goods to be close substitutes. Differences in offered products are downplayed, which has the tendency to reduce prices and increase quality, but at the tradeoff of some consumer-product mismatch. For consumers who want a general product and a low price, this is likely the best option, since prices are low, and competition is high. However, for consumers who want the best product match for their personal use case, they will likely undertake search costs, increasing their opportunity cost of product acquisition and tending toward a total cost closer to the cost under a personalized-product regime.

Scenario three (a curated-list regime) represents defined guardrails surrounding the display of information gathered, along the same lines as the personalized-product regime. Platforms remain able to gather as much information as they desire in order to make a personalized recommendation, but they display an array of products that represent the first two (or three to four, with tighter anti-preference rules) best-choice options. These options are displayed without ranking the products, allowing the consumer to choose from a curated list, rather than a single product. The scenario-three regime has two effects on the market:

  1. It will tend to decrease prices through increased competition. Since firms can know only which consumers to target, not which will choose the product, they have to effectively compete with closely related products.
  2. It will likely spur innovation and increase competition from nascent competitors.

From an innovation perspective, firms will have to find better methods to differentiate themselves from the competition, increasing the probability of a consumer acquiring their product. Also, considering nascent competitors, a new product has an increased chance of being picked when ranked sufficiently high to be included on the consumer’s curated list. In contrast, the probability of acquisition under scenario one’s personalized-product regime is low, since the new product must be a better match than other, existing products. Similarly, under scenario two’s data-privacy regime, there is so much product substitutability in the market that the probability of choosing any one new product is low.

Below is a list of how the regimes stack up:

  • Personalized-Product: Total welfare is maximized, but prices are relatively higher and competition is relatively lower than under a data-privacy regime.
  • Data-Privacy: Consumer welfare and competition are maximized, and prices are theoretically minimized, but at the cost of product mismatch. Consumers will face search costs that are not reflected in the prices paid.
  • Curated-List: Consumer welfare is higher and prices are lower than under a personalized-product regime and competition is lower than under a data-privacy regime, but total welfare is nearly optimal when considering innovation and nascent-competitor effects.

Policy in Context

Applying these theoretical findings to fashion administrable policy prescriptions is understandably difficult. A far easier task is to evaluate the welfare effects of actual and proposed government privacy regulations in the economy. In that light, I briefly assess a recently enacted European data-platform privacy regime and U.S. legislative proposals that would restrict data usage under the guise of bans on “self-preferencing.” I then briefly note the beneficial implications of self-preferencing associated with the two theoretical data-usage scenarios (scenarios one and three) described above (scenario two, data privacy, effectively renders self-preferencing ineffective). 

GDPR

The European Union’s General Data Protection Regulation (GDPR)—among the most ambitious and all-encompassing data-privacy regimes to date—has significant negative ramifications for economic welfare. This regulation is most like the second scenario, whereby data collection and utilization are seriously restricted.

The GDPR diminishes competition through its restrictions on data collection and sharing, which reduce the competitive pressure platforms face. For platforms to gain a complete profile of a consumer for personalization, they cannot only rely on data collected on their platform. To ensure a level of personalization that effectively reduces search costs for consumers, these platforms must be able to acquire data from a range of sources and aggregate that data to create a complete profile. Restrictions on aggregation are what lead to diminished competition online.

The GDPR grants consumers the right to choose both how their data is collected and how it is distributed. Not only do platforms themselves have obligations to ensure consumers’ wishes are met regarding their privacy, but firms that sell data to the platform are obligated to ensure the platform does not infringe consumers’ privacy through aggregation.

This creates a high regulatory burden for both the platform and the data seller and reduces the incentive to transfer data between firms. Since the data seller can be held liable for actions taken by the platform, this significantly increases the price at which the data seller will transfer the data. By increasing the risk of regulatory malfeasance, the cost of data must now incorporate some risk premium, reducing the demand for outside data.

This has the effect of decreasing the quality of personalization and tilting the scales toward larger platforms, which have more robust data-collection practices and are able to leverage economies of scale to absorb high regulatory-enforcement costs. Personalization quality suffers because the platform has an incentive to build a consumption profile based only on activity it directly observes, without considering behavior occurring outside the platform. Additionally, platforms that are already entrenched and have large user bases are better able to manage the regulatory burden of the GDPR. One survey of U.S. companies with more than 500 workers found that 68% planned to spend between $1 million and $10 million in upfront costs to prepare for GDPR compliance, a figure that will likely pale in comparison to long-term compliance costs. For nascent competitors, this outlay of capital represents a significant barrier to entry.

Additionally, as previously discussed, consumers derive some benefit from platforms that can accurately recommend products. If this is the case, then large platforms with vast amounts of accumulated, first-party data will be the consumers’ destination of choice. This will tend to reduce the ability for smaller firms to compete, simply because they do not have access to the same scale of data as the large platforms when data cannot be easily transferred between parties.

Self-Preferencing

Claims of anticompetitive behavior by platforms are abundant (e.g., see here and here), and they often focus on the concept of self-preferencing. Self-preferencing refers to when a company uses its economies of scale, scope, or a combination of the two to offer products at a lower price through an in-house brand. In decrying self-preferencing, many commentators and politicians point to an alleged “unfair advantage” in tech platforms’ ability to leverage data and personalization to drive traffic toward their own products.

It is far from clear, however, that this practice reduces consumer welfare. Indeed, numerous commentaries (e.g., see here and here) circulated since the introduction of anti-preferencing bills in the U.S. Congress (House; Senate) have rejected the notion that self-preferencing is anti-competitive or anti-consumer.

There are good reasons to believe that self-preferencing promotes both competition and consumer welfare. Assume that a company that manufactures or contracts for its own, in-house products can offer them at a marginally lower price for the same relative quality. This decrease in price raises consumer welfare. The in-house brand’s entrance into the market represents a potent competitive threat to firms already producing products, who in turn now have incentive to lower their own prices or raise the quality of their own goods (or both) to maintain their consumer base. This creates even more consumer welfare, since all consumers, not just the ones purchasing the in-house goods, are better off from the entrance of an in-house brand.

It therefore follows that the entrance of an in-house brand and self-preferencing in the data-utilizing regimes discussed above has the potential to enhance consumer welfare.

In general, the use of data analysis on the platform can allow for targeted product entrance into certain markets. If the platform believes it can make a product of similar quality for a lower price, then it will enter that market and consumers will be able to choose a comparable product for a lower price. (If the company does not believe it is able to produce such a product, it will not enter the market with an in-house brand, and consumer welfare will stay the same.) Consumer welfare will further rise as firms producing products that compete against the in-house brand will innovate to compete more effectively.

To be sure, under a personalized-product regime (scenario one), platforms may appear to have an incentive to self-preference to the detriment of consumers. If consumers trust the platform to show the greatest welfare-producing product before the emergence of an in-house brand, the platform may use this consumer trust to its advantage and suggest its own, potentially consumer-welfare-inferior product instead of a competitor’s welfare-superior product. In such a case, consumer welfare may decrease in the face of an in-house brand’s entrance.

The extent of any such welfare loss, however, may be ameliorated (or eliminated entirely) by the platform’s concern that an unexpectedly low level of house-brand product quality will diminish its reputation. Such a reputational loss could come about due to consumer disappointment, plus the efforts of platform rivals to highlight the in-house product’s inferiority. As such, the platform might decide to enhance the quality of its “inferior” in-house offering, or refrain from offering an in-house brand at all.

A curated-list regime (scenario three) is unequivocally consumer-welfare beneficial. Under such a regime, consumers will be shown several more options (a “manageable” number intended to minimize consumer-search costs) than under a personalized-product regime. Consumers can actively compare the offerings from different firms to determine the correct product for their individual use. In this case, there is no incentive to self-preference to the detriment of the consumer, as the consumer is able to make value judgements between the in-house brand and the alternatives.

If the in-house brand is significantly lower in price, but also lower in quality, consumers may not see the two as interchangeable and steer away from the in-house brand. The same follows when the in-house brand is higher in both price and quality. The only instance where the in-house brand has a strong chance of success is when the price is lower than and the quality is greater than competing products. This will tend to increase consumer welfare. Additionally, the entrance of consumer-welfare-superior products into a competitive market will encourage competing firms to innovate and lower prices or raise quality, again increasing consumer welfare for all consumers.

Conclusion

What effects do digital platform-data policies have on consumer welfare? As a matter of theory, if providing an increasing number of product choices does not tend to increase consumer welfare, then do reductions in prices or increases in quality? What about precise targeting of personal-product choices? How about curation—the idea that a consumer raises his or her level of certainty by outsourcing decision-making to a platform that chooses a small set of products for the consumer’s consideration at any given moment? Apart from these theoretical questions, is the current U.S. legal treatment of platform data usage doing a generally good job of promoting consumer welfare? Finally, considering this overview, are new government interventions in platform data policy likely to benefit or harm consumers?

Recently published economic research develops theoretical scenarios that demonstrate how digital platform curation of consumer data may facilitate welfare-enhancing consumer-purchase decisions. At least implicitly, this research should give pause to proponents of major new restrictions of platform data usage.

Furthermore, a review of actual and proposed regulatory restrictions underscores the serious welfare harm of government meddling in digital platform-data usage.   

After the first four years of GDPR, it is clear that there have been significant negative unintended consequences stemming from omnibus privacy regulation. Competition has decreased, regulatory barriers to entry have increased, and consumers are marginally worse off. Since companies are less able and willing to leverage data in their operations and service offerings—due in large part to the risk of hefty fines—they are less able to curate and personalize services to consumers.

Additionally, anti-preferencing bills in the United States threaten to suppress the proper functioning of platform markets and reduce consumer welfare by making the utilization of data in product-market decisions illegal. More research is needed to determine the aggregate welfare effects of such preferencing on platforms, but all early indications point to the fact that consumers are better off when an in-house brand enters the market and increases competition.

Furthermore, current U.S. government policy, which generally allows platforms to use consumer data freely, is good for consumer welfare. Indeed, the consumer-welfare benefits generated by digital platforms, which depend critically on large volumes of data, are enormous. This is documented in a well-reasoned Harvard Business Review article (by an MIT professor and his student) that utilizes online choice experiments based on digital-survey techniques.

The message is clear. Governments should avoid new regulatory meddling in digital platform consumer-data usage practices. Such meddling would harm consumers and undermine the economy.

Though details remain scant (and thus, any final judgment would be premature), initial word on the new Trans-Atlantic Data Privacy Framework agreed to, in principle, by the White House and the European Commission suggests that it could be a workable successor to the Privacy Shield agreement that was invalidated by the Court of Justice of the European Union (CJEU) in 2020.

This new framework agreement marks the third attempt to create a lasting and stable legal regime to permit the transfer of EU citizens’ data to the United States. In the wake of the 2013 revelations by former National Security Agency contractor Edward Snowden about the extent of the United States’ surveillance of foreign nationals, the CJEU struck down (in its 2015 Schrems decision) the then-extant “safe harbor” agreement that had permitted transatlantic data flows. 

In the 2020 Schrems II decision (both cases were brought by Austrian privacy activist Max Schrems), the CJEU similarly invalidated the Privacy Shield, which had served as the safe harbor’s successor agreement. In Schrems II, the court found that U.S. foreign surveillance laws were not strictly proportional to the intelligence community’s needs and that those laws also did not give EU citizens adequate judicial redress.  

This new “Privacy Shield 2.0” agreement, announced during President Joe Biden’s recent trip to Brussels, is intended to address the issues raised in the Schrems II decision. In relevant part, the joint statement from the White House and European Commission asserts that the new framework will: “[s]trengthen the privacy and civil liberties safeguards governing U.S. signals intelligence activities; Establish a new redress mechanism with independent and binding authority; and Enhance its existing rigorous and layered oversight of signals intelligence activities.”

In short, the parties believe that the new framework will ensure that U.S. intelligence gathering is proportional and that there is an effective forum for EU citizens caught up in U.S. intelligence-gathering to vindicate their rights.

As I and my co-authors (my International Center for Law & Economics colleague Mikołaj Barczentewicz and Michael Mandel of the Progressive Policy Institute) detailed in an issue brief last fall, the stakes are huge. While the issue is often framed in terms of social-media use, transatlantic data transfers are implicated in an incredibly large swath of cross-border trade:

According to one estimate, transatlantic trade generates upward of $5.6 trillion in annual commercial sales, of which at least $333 billion is related to digitally enabled services. Some estimates suggest that moderate increases in data-localization requirements would result in a €116 billion reduction in exports from the EU.

The agreement will be implemented on this side of the Atlantic by a forthcoming executive order from the White House, at which point it will be up to EU courts to determine whether the agreement adequately restricts U.S. intelligence activities and protects EU citizens’ rights. For now, however, it appears at a minimum that the White House took the CJEU’s concerns seriously and made the right kind of concessions to reach agreement.

And now, once the framework is finalized, we just have to sit tight and wait for Mr. Schrems’ next case.

There has been a wave of legislative proposals on both sides of the Atlantic that purport to improve consumer choice and the competitiveness of digital markets. In a new working paper published by the Stanford-Vienna Transatlantic Technology Law Forum, I analyzed five such bills: the EU Digital Services Act, the EU Digital Markets Act, and U.S. bills sponsored by Rep. David Cicilline (D-R.I.), Rep. Mary Gay Scanlon (D-Pa.), Sen. Amy Klobuchar (D-Minn.) and Sen. Richard Blumenthal (D-Conn.). I concluded that all those bills would have negative and unaddressed consequences in terms of information privacy and security.

In this post, I present the main points from the working paper regarding two regulatory solutions: (1) mandating interoperability and (2) mandating device neutrality (which opens the possibility of sideloading applications, a special case of interoperability). The full working paper also covers the risks of compulsory data access (by vetted researchers or by authorities).

Interoperability

Interoperability is increasingly presented as a potential solution to some of the alleged problems associated with digital services, and with large online platforms in particular (see, e.g., here and here). For example, interoperability might allow third-party developers to offer different “flavors” of social-media newsfeeds, with varying approaches to content ranking and moderation. In this way, the content-moderation decisions that Facebook or other platforms make might matter less than they do now: Facebook users could choose alternative content moderators that deliver the kind of newsfeed those users expect.

The concept of interoperability is popular not only among thought leaders, but also among legislators. The DMA, as well as the U.S. bills by Rep. Scanlon, Rep. Cicilline, and Sen. Klobuchar, all include interoperability mandates.

At the most basic level, interoperability means a capacity to exchange information between computer systems. Email is an example of an interoperable standard that most of us use today. It is telling that supporters of interoperability mandates use services like email as their model examples. Email (more precisely, the SMTP protocol) originally was designed in a notoriously insecure way. It is a perfect example of the opposite of privacy by design. A good analogy for the levels of privacy and security provided by email, as originally conceived, is that of a postcard message sent without an envelope that passes through many hands before reaching the addressee. Even today, email continues to be a source of security concerns, due to its prioritization of interoperability (see, e.g., here).
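
To see why the postcard analogy is apt, consider the sketch below. It is illustrative only: the hostname and addresses are hypothetical, and it simply uses Python’s standard smtplib module to submit a message over classic SMTP on port 25. Unless the client explicitly negotiates an encrypted channel, the entire dialogue, message body included, crosses the network in cleartext and is readable by every relay that handles it; encryption was bolted on later as an optional upgrade, not built into the design.

    import smtplib
    from email.message import EmailMessage

    # Build a simple message. Nothing in SMTP itself encrypts or authenticates
    # this content; any protection has to be layered on top of the protocol.
    msg = EmailMessage()
    msg["From"] = "alice@example.com"     # hypothetical addresses
    msg["To"] = "bob@example.net"
    msg["Subject"] = "Postcard"
    msg.set_content("Every relay that handles this message can read it.")

    # Classic SMTP on port 25: the session and the message travel in cleartext.
    with smtplib.SMTP("mail.example.com", 25) as server:
        server.send_message(msg)

    # Opportunistic encryption (STARTTLS) is an opt-in afterthought: the client
    # must request the upgrade explicitly, and the server must support it.
    with smtplib.SMTP("mail.example.com", 25) as server:
        server.starttls()
        server.send_message(msg)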

To provide alternative interfaces or moderation services for social-media platforms using currently available technology, third-party developers would need access to much of the platform content that is potentially available to a user. This would include not just content produced by users who explicitly agree to share their data with third parties, but also content—e.g., posts, comments, likes—created by others who may have strong objections to such sharing. It does not require much imagination to see how, without adequate safeguards, mandating this kind of information exchange would inevitably result in something akin to the 2018 Cambridge Analytica data scandal.
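
To make this concrete, consider the minimal sketch below of what a third-party newsfeed “flavor” interface might look like. It is purely hypothetical: no existing platform exposes this interface, and the data fields and function names are my own illustrations. The point it captures is that, whatever ranking the third party applies, its code must receive the candidate posts themselves, including content authored by people who never dealt with that developer at all.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Callable, List

    @dataclass
    class Post:
        author_id: str            # usually someone other than the user requesting the feed
        created_at: datetime
        text: str
        engagement_score: float

    # A third-party "flavor" is, in essence, a ranking function that the platform
    # would run (or expose data to) on a user's behalf.
    NewsfeedFlavor = Callable[[List[Post]], List[Post]]

    def chronological_flavor(posts: List[Post]) -> List[Post]:
        # One alternative flavor: ignore engagement signals entirely.
        return sorted(posts, key=lambda p: p.created_at, reverse=True)

    def low_outrage_flavor(posts: List[Post]) -> List[Post]:
        # Another flavor: demote the most engagement-optimized content.
        return sorted(posts, key=lambda p: p.engagement_score)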

Several constraints must be in place for an interoperability framework to safeguard privacy and security effectively.

First, solutions should be targeted toward real users of digital services, without assuming away some common but inconvenient characteristics. In particular, solutions should not assume unrealistic levels of user interest and technical acumen.

Second, solutions must address the issue of effective enforcement. Even the best information privacy and security laws do not, in and of themselves, solve any problems. Such rules must be followed, which requires addressing the problems of procedure and enforcement. In both the EU and the United States, the current framework and practice of privacy law enforcement offers little confidence that misuses of broadly construed interoperability would be detected and prosecuted, much less that they would be prevented. This is especially true for smaller and “judgment-proof” rulebreakers, including those from foreign jurisdictions.

If service providers are placed under a broad interoperability mandate with non-discrimination provisions (preventing effective vetting of third parties, unilateral denials of access, and so on), then the burden placed on law enforcement will be mammoth. Just one bad actor, perhaps working from Russia or North Korea, could cause immense damage by taking advantage of interoperability mandates to exfiltrate user data or to run a hacking (e.g., phishing) campaign. Such foreign bad actors would, of course, be violating the EU GDPR, but that is unlikely to have any practical significance.

It would not be sufficient to allow (or require) service providers to enforce merely technical filters, such as a check on whether an interoperating third party’s IP address comes from a jurisdiction with sufficient privacy protections. Working around such technical limitations poses no significant difficulty for motivated bad actors.
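
To illustrate, here is a minimal sketch of such a filter and of its structural weakness. Everything in it is hypothetical: the approved-jurisdiction set, the lookup table, and the addresses (drawn from documentation-only IP ranges) are illustrations, and a real deployment would query a geolocation provider rather than a hard-coded dictionary.

    # A naive jurisdiction filter of the kind discussed above.
    APPROVED_JURISDICTIONS = {"EU", "US"}      # illustrative policy, not a legal standard

    # Stand-in for an IP-geolocation service.
    MOCK_GEO_DB = {
        "203.0.113.7": "US",       # documentation-range addresses, purely illustrative
        "198.51.100.9": "Other",
    }

    def lookup_jurisdiction(ip_address: str) -> str:
        return MOCK_GEO_DB.get(ip_address, "Unknown")

    def allow_interoperability_request(ip_address: str) -> bool:
        # The check only ever sees the *apparent* source of the request.
        return lookup_jurisdiction(ip_address) in APPROVED_JURISDICTIONS

    # The weakness is structural rather than a bug: a bad actor who routes
    # traffic through a rented server or VPN exit inside an approved
    # jurisdiction presents an approved address, so the filter passes even
    # though the actor remains out of reach of EU or U.S. enforcement.
    print(allow_interoperability_request("203.0.113.7"))   # True, regardless of who controls that address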

Article 6(1) of the original DMA proposal included some general interoperability provisions applicable to “gatekeepers”—i.e., the largest online platforms. Those interoperability mandates were somewhat limited, applying only to “ancillary services” (e.g., payment or identification services) or requiring only one-way data portability. Even here, however, there may be some risks. For example, users may choose poorly secured identification services and thus become victims of attacks. It is therefore important that gatekeepers not be prevented from protecting their users adequately.

The drafts of the DMA adopted by the European Council and by the European Parliament attempt to address this, but they allow gatekeepers to do only what is “strictly necessary” (Council) or “indispensable” (Parliament). This standard may be too high; it could push gatekeepers to offer weaker security in order to avoid liability for adopting measures that EU institutions and courts might judge to go beyond what is strictly necessary or indispensable.

The more recent DMA proposal from the European Parliament goes significantly beyond the original, mandating full interoperability of “number-independent interpersonal communication services” and of social-networking services. The Parliament’s proposals are good examples of overly broad and irresponsible interoperability mandates. They would cover “any providers” wanting to interconnect with gatekeepers, without adequate vetting. The safeguard proviso mentioning a “high level of security and personal data protection” does not come close to addressing the seriousness of the risks the mandate creates. Instead of facing up to those risks and limiting the mandate in ways that minimize them, the proposal seems simply to expect that the gatekeepers can solve the problems if they only “nerd harder.”

All U.S. bills considered here introduce some interoperability mandates and none of them do so in a way that would effectively safeguard information privacy and security. For example, Rep. Cicilline’s American Choice and Innovation Online Act (ACIOA) would make it unlawful (in Section 2(b)(1)) to:

restrict or impede the capacity of a business user to access or interoperate with the same platform, operating system, hardware and software features that are available to the covered platform operator’s own products, services, or lines of business.

The language of the prohibition in Sen. Klobuchar’s American Innovation and Choice Online Act (AICOA) is similar (also in Section 2(b)(1)). Both ACIOA and AICOA allow for affirmative defenses that a service provider could use if sued under the statute. While those defenses mention privacy and security, they are narrow (“narrowly tailored, could not be achieved through a less discriminatory means, was nonpretextual, and was necessary”) and would not prevent service providers from incurring significant litigation costs. Hence, just like the provisions of the DMA, they would heavily incentivize covered service providers not to adopt the most effective protections of privacy and security.

Device Neutrality (Sideloading)

Article 6(1)(c) of the DMA contains specific provisions about “sideloading”—i.e., allowing the installation of third-party software through app stores other than the one provided by the device manufacturer (e.g., Apple’s App Store for iOS devices). A similar express sideloading provision is included in Sen. Blumenthal’s Open App Markets Act (Section 3(d)(2)). Moreover, the broad interoperability provisions in the other U.S. bills discussed above may also be interpreted to require that sideloading be permitted.

A sideloading mandate aims to give users more choice. It can achieve this, however, only by taking away the option of choosing a device with a “walled garden” approach to privacy and security (such as Apple takes with iOS). Without that option, a sideloading mandate will effectively force users to use whatever alternative app stores particular app developers prefer. App developers would have a strong incentive to set up their own app stores or to move their apps to the stores with the least friction (for developers, not users), which would also mean the least privacy and security scrutiny.

This is not to say that Apple’s app scrutiny is perfect, but it is reasonable for an ordinary user to prefer Apple’s approach because it provides greater security (see, e.g., here and here). Thus, a legislative choice to override the revealed preference of millions of users for a “walled garden” approach should not be made lightly. 

Privacy and security safeguards in the DMA’s sideloading provisions, as amended by the European Council and by the European Parliament, as well as in Sen. Blumenthal’s Open App Markets Act, share the same problem of narrowness as the safeguards discussed above.

There is a more general privacy and security issue here, however, that those safeguards cannot address. The proposed sideloading mandate would prohibit outright a privacy and security-protection model that many users rationally choose today. Even with broader exemptions, this loss will be genuine. It is unclear whether taking away this choice from users is justified.

Conclusion

All the U.S. and EU legislative proposals considered here betray a policy preference for privileging uncertain and speculative competition gains at the expense of a new and clear danger to information privacy and security. The proponents of these (or even stronger) legislative interventions seem much more concerned, for example, that privacy safeguards are “not abused by Apple and Google to protect their respective app store monopoly in the guise of user security” (source) than with the privacy and security risks their own proposals would create.

Given the problems with ensuring effective enforcement of privacy protections (especially with respect to actors from outside the EU, the United States, and other broadly privacy-respecting jurisdictions), the lip service these legislative proposals pay to privacy and security is not much more than that. Policymakers proposing rules that carry significant and entirely predictable privacy and security risks should be expected to offer a far more detailed vision of concrete safeguards and enforcement mechanisms. Such a vision is lacking on both sides of the Atlantic.

I do not want to suggest that interoperability is undesirable. My argument here is focused on legally mandated interoperability. Firms experiment with interoperability all the time—the prevalence of open APIs on the Internet is testament to this. My aim, however, is to highlight that interoperability is complex and exposes firms and their users to potentially large-scale cyber vulnerabilities.

Generalized obligations on firms to open their data, or to create service interoperability, can short-circuit the private-ordering processes that seek out the forms of interoperability and sharing that pass a cost-benefit test. The result will likely be both overinclusive and underinclusive. It would be overinclusive to require all firms in the regulated class to open their services and data broadly to all interested parties, even where doing so would not make sense for privacy, security, or other efficiency reasons. It would be underinclusive in that the broad mandate will necessarily sap regulated firms’ resources and deter them from looking for new, innovative uses that might make sense but fall outside the mandate. The likely result, then, is less security and privacy, more expense, and less innovation.