Archives For Data Privacy & Security

The concept of European “digital sovereignty” has been promoted in recent years both by high officials of the European Union and by EU national governments. Indeed, France made strengthening sovereignty one of the goals of its recent presidency of the EU Council.

The approach taken thus far both by the EU and by national authorities has been not to exclude foreign businesses, but instead to focus on research and development funding for European projects. Unfortunately, there are worrying signs that this more measured approach is beginning to be replaced by ill-conceived moves toward economic protectionism, ostensibly justified by national-security and personal-privacy concerns.

In this context, it is worth reconsidering why Europeans’ interests are best served not by economic isolationism, but by an understanding of sovereignty that capitalizes on alliances with other free democracies.

Protectionism Under the Guise of Cybersecurity

Among the primary worrying signs regarding the EU’s approach to digital sovereignty is the union’s planned official cybersecurity-certification scheme. The European Commission is reportedly pushing for “digital sovereignty” conditions in the scheme, which would include data and corporate-entity localization and ownership requirements. This can be categorized as “hard” data localization in the taxonomy laid out by Peter Swire and DeBrae Kennedy-Mayo of the Georgia Institute of Technology, in that it would prohibit both the transfer of data to other countries and the involvement of foreign capital in processing even data that is not transferred.

The European Cybersecurity Certification Scheme for Cloud Services (EUCS) is being prepared by ENISA, the EU cybersecurity agency. The scheme is supposed to be voluntary at first, but it is expected to become mandatory in the future, at least for some situations (e.g., public procurement). It was not initially billed as an industrial-policy measure and was instead meant to focus on technical security issues. Moreover, ENISA reportedly did not see the need to include such “digital sovereignty” requirements in the certification scheme, perhaps because the agency saw them as insufficiently grounded in genuine cybersecurity needs.

Despite ENISA’s position, the European Commission asked the agency to include the digital-sovereignty requirements. This move has been supported by a coalition of European businesses that hope to benefit from the protectionist nature of the scheme. Somewhat ironically, their official statement called on the European Commission to “not give in to the pressure of the ones who tend to promote their own economic interests.”

The governments of Denmark, Estonia, Greece, Ireland, the Netherlands, Poland, and Sweden expressed “strong concerns” about the Commission’s move. In contrast, Germany called for a political discussion of the certification scheme that would take into account “the economic policy perspective.” In other words, German officials want the EU to consider using the cybersecurity-certification scheme to achieve protectionist goals.

Cybersecurity certification is not the only avenue by which Brussels appears to be pursuing protectionist policies under the guise of cybersecurity concerns. As highlighted in a recent report from the Information Technology & Innovation Foundation, the European Commission and other EU bodies have also been downgrading or excluding U.S.-owned firms from technical standard-setting processes.

Do Security and Privacy Require Protectionism?

As others have discussed at length (in addition to Swire and Kennedy-Mayo, also Theodore Christakis), the evidence supporting cybersecurity and national-security arguments for hard data localization has been, at best, inconclusive. Press reports suggest that ENISA reached a similar conclusion. There may be security reasons to insist upon certain ways of distributing data storage (e.g., across different data centers), but those reasons have little to do with national borders.

In fact, as illustrated by the well-known architectural goal behind the design of the U.S. military computer network that was the precursor to the Internet, security is enhanced by redundant distribution of data and network connections in a geographically dispersed way. The perils of putting “all one’s data eggs” in one basket (one locale, one data center) were amply illustrated when a fire in a data center of the French cloud provider OVH famously brought down millions of websites that were hosted only there. (Notably, OVH is among the most vocal European proponents of hard data localization.)

Moreover, security concerns are clearly far less serious when data is processed by our allies than when it is processed by entities associated with less friendly powers. Whatever concerns there may be about U.S. intelligence collection, it would be detached from reality to suggest that the United States poses a national-security risk to EU countries. This has become even clearer since the beginning of the Russian invasion of Ukraine. Indeed, the strength of the U.S.-EU security relationship has been repeatedly acknowledged by EU and national officials.

Another commonly used justification for data localization is that it is required to protect Europeans’ privacy. The radical version of this position, seemingly increasingly popular among EU data-protection authorities, amounts to a call to block data flows between the EU and the United States. (Most bizarrely, Russia seems to receive more favorable treatment from some European bureaucrats.) The legal argument behind this view is that the United States does not provide sufficient legal safeguards when its officials process the data of foreigners.

The soundness of that view is debated, but what is perhaps more interesting is that similar privacy concerns have also been identified by EU courts with respect to several EU countries. The reaction of those European countries was either to ignore the courts or to be “ruthless in exploiting loopholes” in court rulings. It is thus difficult to take seriously the claims that Europeans’ data is much better safeguarded in their home countries than when it flows through the networks of the EU’s democratic allies, like the United States.

Digital Sovereignty as Industrial Policy

Given the above, the privacy and security arguments are unlikely to be the real decisive factors behind the EU’s push for a more protectionist approach to digital sovereignty, as in the case of cybersecurity certification. In her 2020 State of the Union speech, EU Commission President Ursula von der Leyen stated that Europe “must now lead the way on digital—or it will have to follow the way of others, who are setting these standards for us.”

She continued: “On personalized data—business to consumer—Europe has been too slow and is now dependent on others. This cannot happen with industrial data.” This framing suggests an industrial-policy aim behind the digital-sovereignty agenda. But even in considering Europe’s best interests through the lens of industrial policy, there are reasons to question the manner in which “leading the way on digital” is being implemented.

Limitations on foreign investment in European tech businesses come with significant costs to the European tech ecosystem. Those costs are particularly high in the case of blocking or disincentivizing American investment.

Effect on startups

Early-stage investors such as venture capitalists bring more than just financial capital. They offer expertise and other vital tools to help the businesses in which they invest. It is thus not surprising that, among the best investors, those with significant experience in a given area are well-represented. Due to the successes of the U.S. tech industry, American investors are especially well-positioned to play this role.

In contrast, European investors may lack the needed knowledge and skills. For example, in its report on building “deep tech” companies in Europe, Boston Consulting Group noted that a “substantial majority of executives at deep-tech companies and more than three-quarters of the investors we surveyed believe that European investors do not have a good understanding of what deep tech is.”

More to the point, even where EU players do hold advantages, a cooperative economic and technological system will allow the comparative advantages of both U.S. and EU markets to redound to each other’s benefit. That is to say, of course not all U.S. investment expertise will apply in the EU, but certainly some will. Similarly, there will be EU firms that are positioned to share their expertise in the United States. But there is no ex ante way to know when and where these complementarities will exist, which essentially dooms efforts at centrally planning technological cooperation.

Given the close economic, cultural, and historical ties of the two regions, it makes sense to work together, particularly given the rising international-relations tensions outside of the western sphere. It also makes sense, insofar as the relatively open private-capital-investment environment in the United States is nearly impossible to match, let alone surpass, through government spending.

For example, national-government and EU funding in Europe has thus far ranged from expensive failures (the “Google-killer”) to all-too-predictable bureaucracy-heavy grantmaking, which beneficiaries describe as lacking flexibility, “slow,” “heavily process-oriented,” and expensive for businesses to navigate. As reported by the Financial Times’ Sifted website, the EU’s own startup-investment scheme (the European Innovation Council) backed only one business over more than a year, and it had “delays in payment” that “left many startups short of cash—and some on the brink of going out of business.”

Starting new business ventures is risky, especially for the founders. They risk devoting their time, resources, and reputation to an enterprise that may very well fail. Given this risk of failure, the potential upside needs to be sufficiently high to incentivize founders and early employees to take the gamble. This upside is normally provided by the possibility of selling one’s shares in a business. In BCG’s previously cited report on deep tech in Europe, respondents noted that the European ecosystem lacks “clear exit opportunities”:

Some investors fear being constrained by European sovereignty concerns through vetoes at the state or Europe level or by rules potentially requiring European ownership for deep-tech companies pursuing strategically important technologies. M&A in Europe does not serve as the active off-ramp it provides in the US. From a macroeconomic standpoint, in the current environment, investment and exit valuations may be impaired by inflation or geopolitical tensions.

More broadly, those exit opportunities also factor importantly into funders’ appetite to price the risk of failure in their ventures. Where the upside is sufficiently large, an investor might be willing to experiment in riskier ventures and be suitably motivated to structure investments to deal with such risks. But where the exit opportunities are diminished, it makes much more sense to spend time on safer bets that may provide lower returns, but are less likely to fail. Coupled with the fact that government funding must run through bureaucratic channels, which are inherently risk averse, the overall effect is a less dynamic funding system.

The Central and Eastern Europe (CEE) region is an especially good example of the positive influence of American investment in Europe’s tech ecosystem. According to the state-owned Polish Development Fund and Dealroom.co, in 2019, $0.9 billion of venture-capital investment in CEE came from the United States, $0.5 billion from Europe, and $0.1 billion from the rest of the world.

Direct investment

Technological investment is rarely, if ever, a zero-sum game. U.S. firms that invest in the EU (and vice versa) do not do so as foreign conquerors, but as partners whose own fortunes are intertwined with those of their host country. Consider, for example, Google’s recent PLN 2.7 billion investment in Poland. Far from extractive, that investment will build infrastructure in Poland and will employ an additional 2,500 Poles in the company’s cloud-computing division. This sort of partnership plants the seeds that grow into a native tech ecosystem. The Poles who work in Google’s cloud-computing division today will be among the founders of tomorrow’s innovative startups rooted in Poland.

The funding that accompanies native operations of foreign firms also has a direct impact on local economies and tech ecosystems. More local investment in technology creates demand for education and support roles around that investment. This creates a virtuous circle that ultimately facilitates growth in the local ecosystem. And while this direct investment is important for large countries, in smaller countries, it can be a critical component in stimulating their own participation in the innovation economy. 

According to Crunchbase, out of 2,617 EU-headquartered startups founded since 2010 with total equity funding of at least $10 million, 927 (35%) had at least one founder who previously worked for an American company. For example, two of the three founders of Madrid-based Seedtag (total funding of more than $300 million) worked at Google immediately before starting Seedtag.

It is more difficult to quantify how many early employees of European startups built their experience in American-owned companies, but it is likely to be significant and to become even more so, especially in regions—like Central and Eastern Europe—with significant direct U.S. investment in local talent.

Conclusion

Explicit industrial policy for protectionist ends is—at least, for the time being—regarded as unwise public policy. But this is not to say that countries do not have valid national interests that can be met through more productive channels. While strong data-localization requirements are ultimately counterproductive, particularly among closely allied nations, countries have a legitimate interest in promoting the growth of the technology sector within their borders.

National investment in R&D can yield fruit, particularly when that investment works in tandem with the private sector (see, e.g., the Bayh-Dole Act in the United States). The bottom line, however, is that any intervention should take care to actually promote the ends it seeks. Strong data-localization policies in the EU will not lead to the success of the local tech industry, but will serve to wall the region off from the kind of investment that can make it thrive.

[This post from Jonathan M. Barnett, the Torrey H. Webb Professor of Law at the University of Southern California’s Gould School of Law, is an entry in Truth on the Market’s FTC UMC Rulemaking symposium. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

In its Advance Notice of Proposed Rulemaking (ANPR) on Commercial Surveillance and Data Security, the Federal Trade Commission (FTC) has requested public comment on an unprecedented initiative to promulgate and implement wide-ranging rules concerning the gathering and use of consumer data in digital markets. In this contribution, I will assume, for the sake of argument, that the commission has the legal authority to exercise its purported rulemaking powers for this purpose without a specific legislative mandate (a question as to which I recognize there is great uncertainty, heightened further by the fact that Congress is concurrently considering legislation in the same policy area).

In considering whether to use these powers for the purposes of adopting and implementing privacy-related regulations in digital markets, the commission would be required to undertake a rigorous assessment of the expected costs and benefits of any such regulation. Any such cost-benefit analysis must comprise at least two critical elements that are omitted from, or addressed in highly incomplete form in, the ANPR.

The Hippocratic Oath of Regulatory Intervention

There is a longstanding consensus that regulatory intervention is warranted only if a market failure can be identified with reasonable confidence. This principle is especially relevant in the case of the FTC, which is entrusted with preserving competitive markets and, therefore, should be hesitant about intervening in market transactions without a compelling evidentiary basis. As a corollary to this proposition, it is also widely agreed that any intervention to correct a market failure is warranted only to the extent that it can reasonably be expected to correct that failure at a net social gain.

This prudent approach tracks the “economic effect” analysis that the commission must apply in the rulemaking process contemplated under the Federal Trade Commission Act and the analysis of “projected benefits and … adverse economic effects” of proposed and final rules contemplated by the commission’s rules of practice. Consistent with these requirements, the commission has exhibited a longstanding commitment to thorough cost-benefit analysis. As observed by former Commissioner Julie Brill in 2016, “the FTC conducts its rulemakings with the same level of attention to costs and benefits that is required of other agencies.” Former Commissioner Brill also observed that the “FTC combines our broad mandate to protect consumers with a rigorous, empirical approach to enforcement matters.”

This demanding, fact-based protocol enhances the likelihood that regulatory interventions result in a net improvement relative to the status quo, an uncontroversial goal of any rational public policy. Unfortunately, the ANPR does not make clear that the commission remains committed to this methodology.

Assessing Market Failure in the Use of Consumer Data

To even “get off the ground,” any proposed privacy regulation would be required to identify a market failure arising from a particular use of consumer data. This requires a rigorous and comprehensive assessment of the full range of social costs and benefits that can be reasonably attributed to any such practice.

The ANPR’s Oversights

In contrast to the approach described by former Commissioner Brill, several elements of the ANPR raise significant doubts concerning the current commission’s willingness to assess evidence relevant to the potential necessity of privacy-related regulations in a balanced, rigorous, and comprehensive manner.

First, while the ANPR identifies a plethora of social harms attributable to data-collection practices, it merely acknowledges the possibility that consumers enjoy benefits from such practices “in theory.” This skewed perspective is not empirically serious. Focusing almost entirely on the costs of data collection and dismissing as conjecture any possible gains defies market realities, especially given the fact that (as discussed below) those gains are clearly significant and, in some cases, transformative.

Second, the ANPR’s choice of the normatively charged term “data surveillance” to encompass all uses of consumer data conveys the impression that all data collection through digital services is surreptitious or coerced, whereas (as discussed below) some users may knowingly provide such data to enable certain data-reliant functionalities.

Third, there is no mention in the ANPR that online providers widely provide users with notices concerning certain uses of consumer data and often require users to select among different levels of data collection.

Fourth, the ANPR relies substantially on news websites and non-peer-reviewed publications in the style of policy briefs or advocacy papers, rather than on the empirical social-science research on which the commission has historically based its policy determinations.

This apparent indifference to analytical balance is particularly exhibited in the ANPR’s failure to address the economic gains generated through the use of consumer data in online markets. As was recognized in a 2014 White House report, many valuable digital services could not function effectively without engaging in some significant level of data collection. The examples are numerous and diverse, including traffic-navigation services that rely on data concerning a user’s geographic location (as well as other users’ geographic locations); personalized ad delivery, which relies on data concerning a user’s search history and other disclosed characteristics; and search services, which rely on the ability to use such data to offer search at no charge while offering targeted advertising to paying advertisers.

There are equally clear gains on the “supply” side of the market. Data-collection practices can expand market access by enabling smaller vendors to leverage digital intermediaries to attract consumers that are most likely to purchase those vendors’ goods or services. The commission has recognized this point in the past, observing in a 2014 report:

Data brokers provide the information they compile to clients, who can use it to benefit consumers … [C]onsumers may benefit from increased and innovative product offerings fueled by increased competition from small businesses that are able to connect with consumers that they may not have otherwise been able to reach.

Given the commission’s statutory mission under the FTC Act to protect consumers’ interests and preserve competitive markets, these observations should be of special relevance.

Data Protection v. Data-Reliant Functionality

Data-reliant services yield social gains by substantially lowering transaction costs and, in the process, enabling services that would not otherwise be feasible, with favorable effects for consumers and vendors. This observation does not exclude the possibility that specific uses of consumer data may constitute a potential market failure that merits regulatory scrutiny and possible intervention (assuming there is sufficient legal authority for the relevant agency to undertake any such intervention). That depends on whether the social costs reasonably attributable to a particular use of consumer data exceed the social gains reasonably attributable to that use. This basic principle seems to be recognized by the ANPR, which states that the commission can only deem a practice “unfair” under the FTC Act if “it causes or is likely to cause substantial injury” and “the injury is not outweighed by benefits to consumers or competition.”

In implementing this principle, it is important to keep in mind that a market failure could only arise if the costs attributable to any particular use of consumer data are not internalized by the parties to the relevant transaction. This requires showing either that a particular use of consumer data imposes harms on third parties (a plausible scenario in circumstances implicating risks to data security) or that consumers are not aware of, or do not adequately assess or foresee, the costs they incur as a result of such use (a plausible scenario in circumstances implicating risks to consumer data). For the sake of brevity, I will focus on the latter scenario.

Many scholars have taken the view that consumers do not meaningfully read privacy notices or consider privacy risks, although the academic literature has also recognized efforts by private entities to develop notice methodologies that can improve consumers’ ability to do so. Even accepting this view, however, it does not necessarily follow (as the ANPR appears to assume) that a more thorough assessment of privacy risks would inevitably lead consumers to elect higher levels of data privacy even where that would degrade functionality or require paying a positive price for certain services. That is a tradeoff that will vary across consumers. It is therefore difficult to predict and easy to get wrong.

As the ANPR indirectly acknowledges in questions 26 and 40, interventions that bar certain uses of consumer data may therefore harm consumers by compelling the modification, positive pricing, or removal from the market of popular data-reliant services. For this reason, some scholars and commentators have favored the informed-consent approach that provides users with the option to bar or limit certain uses of their data. This approach minimizes error costs since it avoids overestimating consumer preferences for privacy. Unlike a flat prohibition of certain uses of consumer data, it also can reflect differences in those preferences across consumers. The ANPR appears to dismiss this concern, asking in question 75 whether certain practices should be made illegal “irrespective of whether consumers consent to them” (my emphasis added).

Addressing the still-uncertain body of evidence concerning the tradeoff between privacy protections on the one hand and data-reliant functionalities on the other (as well as the still-unresolved extent to which users can meaningfully make that tradeoff) lies outside the scope of this discussion. However, the critical observation is that any determination of market failure concerning any particular use of consumer data must identify the costs (and specifically, identify non-internalized costs) attributable to any such use and then offset those costs against the gains attributable to that use.

This balancing analysis is critical. As the commission recognized in a 2015 report, it is essential to safeguard consumer privacy without suppressing the economic gains that arise from data-reliant services that can benefit consumers and vendors alike. This even-handed approach is largely absent from the ANPR—which, as noted above, focuses almost entirely on costs while largely overlooking the gains associated with the uses of consumer data in online markets. This suggests a one-sided approach to privacy regulation that is incompatible with the cost-benefit analysis that the commission recognizes it must follow in the rulemaking process.

Private-Ordering Approaches to Consumer-Data Regulation

Suppose that a rigorous and balanced cost-benefit analysis determines that a particular use of consumer data would likely yield social costs that exceed social gains. It would still remain to be determined whether and how a regulator should intervene to yield a net social gain. As regulators make this determination, it is critical that they consider the full range of possible mechanisms to address a particular market failure in the use of consumer data.

Consistent with this approach, the FTC Act specifically requires that the commission specify in an ANPR “possible regulatory alternatives under consideration,” a requirement that is replicated at each subsequent stage of the rulemaking process, as provided in the rules of practice. The range of alternatives should include the possibility of taking no action, if no feasible intervention can be identified that would likely yield a net gain.

In selecting among those alternatives, it is imperative that the commission consider the possibility of unnecessary or overly burdensome rules that could impede the efficient development and supply of data-reliant services, either degrading the quality or raising the price of those services. In the past, the commission has emphasized this concern, stating in 2011 that “[t]he FTC actively looks for means to reduce burdens while preserving the effectiveness of a rule.”

This consideration (which appears to be acknowledged in question 24 of the ANPR) is of special importance to privacy-related regulation, given that the estimated annual costs to the U.S. economy (as calculated by the Information Technology and Innovation Foundation) of compliance with the most extensive proposed forms of privacy-related regulations would exceed $100 billion. Those costs would be especially burdensome for smaller entities, effectively raising entry barriers and reducing competition in online markets (a concern that appears to be acknowledged in question 27 of the ANPR).

Given the exceptional breadth of the rules that the ANPR appears to contemplate—covering an ambitious range of activities that would typically be the subject of a landmark piece of federal legislation, rather than administrative rulemaking—it is not clear that the commission has seriously considered this vital point of concern.

In the event that the FTC does move forward with any of these proposed rulemakings (which would be required to rest on a factually supported finding of market failure), it would confront a range of possible interventions in markets for consumer data. That range is typically viewed as being bounded, on the least-interventionist side, by notice and consent requirements to facilitate informed user choice, and on the most interventionist side, by prohibitions that specifically bar certain uses of consumer data.

This is well-traveled ground within the academic and policy literature, and the relative advantages and disadvantages of each regulatory approach are well-known (and differ depending on the type of consumer data and other factors). Within the scope of this contribution, I wish to address an alternative regulatory approach that lies outside this conventional range of policy options.

Bottom-Up v. Top-Down Regulation

Any cost-benefit analysis concerning potential interventions to modify or bar a particular use of consumer data, or to mandate notice-and-consent requirements in connection with any such use, must contemplate not only government-implemented solutions but also market-implemented solutions, including hybrid mechanisms in which government action facilitates or complements market-implemented solutions.

This is not a merely theoretical proposal (and is referenced indirectly in questions 36, 51, and 87 of the ANPR). As I have discussed in previously published research, the U.S. economy has a long-established record of having adopted, largely without government intervention, collective solutions to the information asymmetries that can threaten the efficient operation of consumer goods and services markets.

Examples abound: Underwriters Laboratories (UL), which establishes product-safety standards in hundreds of markets; large accounting firms, which confirm compliance with Generally Accepted Accounting Principles (GAAP), which are in turn established and updated by the Financial Accounting Standards Board, a private entity subject to oversight by the Securities and Exchange Commission; and intermediaries in other markets, such as consumer credit, business credit, insurance carriers, bond issuers, and content ratings in the entertainment and gaming industries. Collectively, these markets encompass thousands of providers, hundreds of millions of customers, and billions of dollars in value.

A collective solution is often necessary to resolve information asymmetries efficiently because establishing an industrywide standard of product or service quality, together with a trusted mechanism for showing compliance with that standard, generates gains that cannot be fully internalized by any single provider.

Jurisdictions outside the United States have tended to address this collective-action problem through the top-down imposition of standards by government mandate and enforcement by regulatory agencies, as illustrated by the jurisdictions referenced by the ANPR that have imposed restrictions on the use of consumer data through direct regulatory intervention. By contrast, the U.S. economy has tended to favor the bottom-up development of voluntary standards, accompanied by certification and audit services, all accomplished by a mix of industry groups and third-party intermediaries. In certain markets, this may be a preferred model to address the information asymmetries between vendors and customers that are the key sources of potential market failure in the use of consumer data.

Privately organized initiatives to set quality standards and monitor compliance benefit the market by supplying a reliable standard that reduces information asymmetries and transaction costs between consumers and vendors. This, in turn, yields economic gains in the form of increased output, since consumers have reduced uncertainty concerning product quality. These quality standards are generally implemented through certification marks (for example, the “UL” certification mark) or ranking mechanisms (for example, consumer-credit or business-credit scores), which induce adoption and compliance through the opportunity to accrue reputational goodwill that, in turn, translates into economic gains.

These market-implemented voluntary mechanisms are a far less costly means to reduce information asymmetries in consumer-goods markets than regulatory interventions, which require significant investments of public funds in rulemaking, detection, investigation, enforcement, and adjudication activities.

Hybrid Policy Approaches

Private-ordering solutions to collective-action failures in markets that suffer from information asymmetries can sometimes benefit from targeted regulatory action, resulting in a hybrid policy approach. In particular, regulators can sometimes perform two supplemental functions in this context.

First, regulators can require that providers in certain markets comply with (or can provide a liability safe harbor for providers that comply with) the quality standards developed by private intermediaries that have developed track records of efficiently establishing those standards and reliably confirming compliance. This mechanism is anticipated by the ANPR, which asks in question 51 whether the commission should “require firms to certify that their commercial surveillance practices meet clear standards concerning collection, use, retention, transfer, or monetization of consumer data” and further asks whether those standards should be set by “the Commission, a third-party organization, or some other entity.”

Other regulatory agencies already follow this model. For example, federal and state regulatory agencies in the fields of health care and education rely on accreditation by designated private entities for purposes of assessing compliance with applicable licensing requirements.

Second, regulators can supervise and review the quality standards implemented, adjusted, and enforced by private intermediaries. This is illustrated by the example of securities markets, in which the major exchanges institute and enforce certain governance, disclosure, and reporting requirements for listed companies but are subject to regulatory oversight by the SEC, which must approve all exchange rules and amendments. Similarly, major accounting firms monitor compliance by public companies with GAAP but must register with, and are subject to oversight by, the Public Company Accounting Oversight Board (PCAOB), a nonprofit entity subject to SEC oversight.

These types of hybrid mechanisms shift to private intermediaries most of the costs involved in developing, updating, and enforcing quality standards (in this context, standards for the use of consumer data) and harness private intermediaries’ expertise, capacities, and incentives to execute these functions efficiently and rapidly, while using targeted forms of regulatory oversight as a complementary policy tool.

Conclusion

Certain uses of consumer data in digital markets may impose net social harms that can be mitigated through appropriately crafted regulation. Assuming, for the sake of argument, that the commission has the legal power to enact regulation to address such harms (again, a point as to which there is great doubt), any specific steps must be grounded in rigorous and balanced cost-benefit analysis.

As a matter of law and sound public policy, it is imperative that the commission meaningfully consider the full range of reliable evidence to identify any potential market failures in the use of consumer data and how to formulate rules to rectify or mitigate such failures at a net social gain. Given the extent to which business models in digital environments rely on the use of consumer data, and the substantial value those business models confer on consumers and businesses, the potential “error costs” of regulatory overreach are high. It is therefore critical to engage in a thorough balancing of costs and gains concerning any such use.

Privacy regulation is a complex and economically consequential policy area that demands careful diagnosis and targeted remedies grounded in analysis and evidence, rather than sweeping interventions accompanied by rhetoric and anecdote.

[This post is an entry in Truth on the Market’s FTC UMC Rulemaking symposium. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

The Federal Trade Commission’s (FTC) Aug. 22 Advance Notice of Proposed Rulemaking on Commercial Surveillance and Data Security (ANPRM) is breathtaking in its scope. For an overview, see this Aug. 11 FTC press release.

In their dissenting statements opposing the ANPRM’s release, Commissioners Noah Phillips and Christine Wilson expertly lay bare the notice’s serious deficiencies. Phillips’ dissent stresses that the ANPRM illegitimately arrogates to the FTC legislative power that properly belongs to Congress:

[The [A]NPRM] recast[s] the Commission as a legislature, with virtually limitless rulemaking authority where personal data are concerned. It contemplates banning or regulating conduct the Commission has never once identified as unfair or deceptive. At the same time, the ANPR virtually ignores the privacy and security concerns that have animated our [FTC] enforcement regime for decades. … [As such, the ANPRM] is the first step in a plan to go beyond the Commission’s remit and outside its experience to issue rules that fundamentally alter the internet economy without a clear congressional mandate. That’s not “democratizing” the FTC or using all “the tools in the FTC’s toolbox.” It’s a naked power grab.

Wilson’s complementary dissent critically notes that the 2021 changes to the FTC rules of practice governing consumer-protection rulemaking decrease opportunities for public input and vest significant authority solely with the FTC chair. She also echoes Phillips’ overarching concern with FTC overreach (footnote citations omitted):

Many practices discussed in this ANPRM are presented as clearly deceptive or unfair despite the fact that they stretch far beyond practices with which we are familiar, given our extensive law enforcement experience. Indeed, the ANPRM wanders far afield of areas for which we have clear evidence of a widespread pattern of unfair or deceptive practices. … [R]egulatory and enforcement overreach increasingly has drawn sharp criticism from courts. Recent Supreme Court decisions indicate FTC rulemaking overreach likely will not fare well when subjected to judicial review.

Phillips and Wilson’s warnings are fully warranted. The ANPRM contemplates a possible Magnuson-Moss rulemaking pursuant to Section 18 of the FTC Act,[1] which authorizes the commission to promulgate rules dealing with “unfair or deceptive acts or practices.” The questions that the ANPRM highlights center primarily on concerns of unfairness.[2] Any unfairness-related rulemaking provisions eventually adopted by the commission will have to satisfy a strict statutory cost-benefit test that defines “unfair” acts, found in Section 5(n) of the FTC Act. As explained below, the FTC will be hard-pressed to justify addressing most of the ANPRM’s concerns in Section 5(n) cost-benefit terms.

Discussion

The requirements imposed by Section 5(n) cost-benefit analysis

Section 5(n) codifies the meaning of unfair practices, and thereby constrains the FTC’s application of rulemakings covering such practices. Section 5(n) states:

The Commission shall have no authority … to declare unlawful an act or practice on the grounds that such an act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.

In other words, a practice may be condemned as unfair only if it causes or is likely to cause “(1) substantial injury to consumers (2) which is not reasonably avoidable by consumers themselves and (3) not outweighed by countervailing benefits to consumers or to competition.”

This is a demanding standard. (For scholarly analyses of the standard’s legal and economic implications authored by former top FTC officials, see here, here, and here.)

First, the FTC must demonstrate that a practice imposes a great deal of harm on consumers, which they could not readily have avoided. This requires detailed analysis of the actual effects of a particular practice, not mere theoretical musings about possible harms that may (or may not) flow from such practice. Actual effects analysis, of course, must be based on empiricism: consideration of hard facts.

Second, assuming that this formidable hurdle is overcome, the FTC must then acknowledge and weigh countervailing welfare benefits that might flow from such a practice. In addition to direct consumer-welfare benefits, other benefits include “benefits to competition.” Those may include business efficiencies that reduce a firm’s costs, because such efficiencies are a driver of vigorous competition and, thus, of long-term consumer welfare. As the Organisation for Economic Co-operation and Development has explained (see OECD Background Note on Efficiencies, 2012, at 14), dynamic and transactional business efficiencies are particularly important in driving welfare enhancement.

In sum, under Section 5(n), the FTC must show actual, fact-based, substantial harm to consumers that they could not have escaped, acting reasonably. The commission must also demonstrate that such harm is not outweighed by consumer and (procompetitive) business-efficiency benefits. What’s more, Section 5(n) makes clear that the FTC cannot “pull a rabbit out of a hat” and interject other “public policy” considerations as key factors in the rulemaking  calculus (“[s]uch [other] public policy considerations may not serve as a primary basis for … [a] determination [of unfairness]”).

It ineluctably follows as a matter of law that a Section 18 FTC rulemaking sounding in unfairness must be based on hard empirical cost-benefit assessments, which require data grubbing and detailed evidence-based economic analysis. Mere anecdotal stories of theoretical harm to some consumers that is alleged to have resulted from a practice in certain instances will not suffice.

As such, if an unfairness-based FTC rulemaking fails to adhere to the cost-benefit framework of Section 5(n), it inevitably will be struck down by the courts as beyond the FTC’s statutory authority. This conclusion is buttressed by the tenor of the Supreme Court’s unanimous 2021 opinion in AMG Capital v. FTC, which rejected the FTC’s claim that its statutory injunctive authority included the ability to obtain monetary relief for harmed consumers (see my discussion of this case here).

The ANPRM and Section 5(n)

Regrettably, the tone of the questions posed in the ANPRM indicates a lack of consideration for the constraints imposed by Section 5(n). Accordingly, any future rulemaking that sought to establish “remedies” for many of the theorized abuses found in the ANPRM would stand very little chance of being upheld in litigation.

The Aug. 11 FTC press release cited previously addresses several broad topical categories: harms to consumers; harms to children; regulations; automated systems; discrimination; consumer consent; notice, transparency, and disclosure; remedies; and obsolescence. These categories are chock-full of questions that imply the FTC may consider restrictions on business conduct that go far beyond the scope of the commission’s authority under Section 5(n). (The questions are notably silent about the potential consumer benefits and procompetitive efficiencies that may arise from the business practices called into question here.)

A few of the many questions set forth under just four of these topical listings (harms to consumers, harms to children, regulations, and discrimination) are highlighted below, to provide a flavor of the statutory overreach that characterizes all aspects of the ANPRM. Many other examples could be cited. (Phillips’ dissenting statement provides a cogent and critical evaluation of ANPRM questions that embody such overreach.) Furthermore, although there is a short discussion of “costs and benefits” in the ANPRM press release, it is wholly inadequate to the task.

Under the category “harms to consumers,” the ANPRM press release focuses on harm from “lax data security or surveillance practices.” It asks whether FTC enforcement has “adequately addressed indirect pecuniary harms, including potential physical harms, psychological harms, reputational injuries, and unwanted intrusions.” The press release suggests that a rule might consider addressing harms to “different kinds of consumers (e.g., young people, workers, franchisees, small businesses, women, victims of stalking or domestic violence, racial minorities, the elderly) in different sectors (e.g., health, finance, employment) or in different segments or ‘stacks’ of the internet economy.”

These laundry lists invite, at best, anecdotal public responses alleging examples of perceived “harm” falling into the specified categories. Little or no light is likely to be shed on the measurement of such harm, nor on the potential beneficial effects to some consumers from the practices complained of (for example, better targeted ads benefiting certain consumers). As such, a sound Section 5(n) assessment would be infeasible.

Under “harms to children,” the press release suggests possibly extending the limitations of the FTC-administered Children’s Online Privacy Protection Act (COPPA) to older teenagers, thereby in effect rewriting COPPA and usurping the role of Congress (a clear statutory overreach). The press release also asks “[s]hould new rules set out clear limits on personalized advertising to children and teenagers irrespective of parental consent?” It is hard (if not impossible) to understand how this form of overreach, which would displace the supervisory rights of parents (thereby imposing impossible-to-measure harms on them), could be shoe-horned into a defensible Section 5(n) cost-benefit assessment.

Under “regulations,” the press release asks whether “new rules [should] require businesses to implement administrative, technical, and physical data security measures, including encryption techniques, to protect against risks to the security, confidentiality, or integrity of covered data?” Such new regulatory strictures (whose benefits to some consumers appear speculative) would interfere significantly in internal business processes. Specifically, they could substantially diminish the efficiency of business-security measures, diminish business incentives to innovate (for example, in encryption), and reduce dynamic competition among businesses.

Consumers also would be harmed by a related slowdown in innovation. Those costs undoubtedly would be high but hard, if not impossible, to measure. The FTC also asks whether a rule should limit “companies’ collection, use, and retention of consumer data.” This requirement, which would seemingly bypass consumers’ decisions to make their data available, would interfere with companies’ ability to use such data to improve business offerings and thereby enhance consumers’ experiences. Justifying new requirements such as these under Section 5(n) would be well-nigh impossible.

The category “discrimination” is especially problematic. In addressing “algorithmic discrimination,” the ANPRM press release asks whether the FTC should “consider new trade regulation rules that bar or somehow limit the deployment of any system that produces discrimination, irrespective of the data or processes on which those outcomes are based.” In addition, the press release asks “if the Commission [should] consider harms to other underserved groups that current law does not recognize as protected from discrimination (e.g., unhoused people or residents of rural communities)?”

The FTC cites no statutory warrant for the authority to combat such forms of “discrimination.” It is not a civil-rights agency. It clearly is not authorized to issue anti-discrimination rules dealing with “groups that current law does not recognize as protected from discrimination.” Any such rules, if issued, would be summarily struck down in no uncertain terms by the judiciary, even without regard to Section 5(n).

In addition, given the fact that “economic discrimination” often is efficient (and procompetitive) and may be beneficial to consumer welfare (see, for example, here), more limited economic anti-discrimination rules almost certainly would not pass muster under the Section 5(n) cost-benefit framework.     

Finally, while the ANPRM press release does contain a very short section entitled “costs and benefits,” that section lacks any specific reference to the required Section 5(n) evaluation framework. Phillips’ dissent points out that the ANPRM “simply fail[s] to provide the detail necessary for commenters to prepare constructive responses” on cost-benefit analysis. He stresses that the broad nature of the requests for commenters’ views on costs and benefits renders the inquiry “not conducive to stakeholders submitting data and analysis that can be compared and considered in the context of a specific rule. … Without specific questions about [the costs and benefits of] business practices and potential regulations, the Commission cannot hope for tailored responses providing a full picture of particular practices.”

In other words, the ANPRM does not provide the guidance needed to prompt the sorts of responses that might assist the FTC in carrying out an adequate Section 5(n) cost-benefit analysis.

Conclusion

The FTC would face almost certain defeat in court if it promulgated a broad rule addressing many of the perceived unfairness-based “ills” alluded to in the ANPRM. Moreover, although its requirements would (I believe) not come into effect, such a rule nevertheless would impose major economic costs on society.

Prior to final judicial resolution of its status, the rule would disincentivize businesses from engaging in a variety of data-related practices that enhance business efficiency and benefit many consumers. Furthermore, the FTC resources devoted to developing and defending the rule would not be applied to alternative welfare-enhancing FTC activities—a substantial opportunity cost.

The FTC should take heed of these realities and opt not to carry out a rulemaking based on the ANPRM. It should instead devote its scarce consumer-protection resources to prosecuting hard-core consumer fraud and deception—and, perhaps, to launching empirical studies into the economic-welfare effects of data security and commercial surveillance practices. Such studies, if carried out, should focus on dispassionate economic analysis and avoid policy preconceptions. (For example, studies involving digital platforms should take note of the existing economic literature, such as a paper indicating that digital platforms have generated enormous consumer-welfare benefits not accounted for in gross domestic product.)

One can only hope that a majority of FTC commissioners will apply common sense and realize that far-flung rulemaking exercises lacking in statutory support are bad for the rule of law, bad for the commission’s reputation, bad for the economy, and bad for American consumers.


[1] The FTC states specifically that it “is issuing this ANPR[M] pursuant to Section 18 of the Federal Trade Commission Act”.

[2] Deceptive practices that might be addressed in a Section 18 trade regulation rule would be subject to the “FTC Policy Statement on Deception,” which states that “the Commission will find deception if there is a representation, omission or practice that is likely to mislead the consumer acting reasonably in the circumstances, to the consumer’s detriment.” A court reviewing an FTC Section 18 rule focused on “deceptive acts or practices” undoubtedly would consult this Statement, although it is not clear, in light of recent jurisprudential trends, that the court would defer to the Statement’s analysis in rendering an opinion. In any event, questions of deception, which focus on acts or practices that mislead consumers, would in all likelihood have little relevance to the evaluation of any rule that might be promulgated in light of the ANPRM.    

It’s been a busy summer, and promises to be a busier fall. So the UMC Roundup is on hiatus this week.

But because the news doesn’t stop even when we do, we’re using this week’s Roundup to announce a call for submissions relating to the FTC’s ANPR on Commercial Surveillance and Data Security. Submissions relating to various aspects of the ANPR will be considered for publication as part of our ongoing FTC UMC Symposium. We have already previously offered some discussion of the ANPR on Truth on the Market, here and here.

Posts should substantively engage with the ANPR and will generally run between 1,800 and 4,000 words. We are interested in all topics and perspectives. Given that this is the UMC symposium, we are particularly interested in submissions that explore the competition aspects of the ANPR, including the mysterious Footnote 47 and the procedural and substantive overlaps between the FTC’s UDAP and UMC authorities that run throughout the ANPR.

Submissions should be sent to Keith Fierro (kfierro@laweconcenter.org). To maximize the likelihood that we will publish your submission, we encourage potential authors to submit a brief explanation of the proposed topic prior to writing. Because selected submissions will be published as part of the ongoing UMC Symposium, we anticipate beginning to publish selected submissions immediately and on a rolling basis. For full consideration, contributions should be submitted prior to Sept. 8, 2022.

The FTC UMC Roundup, part of the Truth on the Market FTC UMC Symposium, is a weekly roundup of news relating to the Federal Trade Commission’s antitrust and Unfair Methods of Competition authority. If you would like to receive this and other posts relating to these topics, subscribe to the RSS feed here. If you have news items you would like to suggest for inclusion, please mail them to us at ghurwitz@laweconcenter.org and/or kfierro@laweconcenter.org.

[TOTM: This guest post from Svetlana S. Gans and Natalie Hausknecht of Gibson Dunn is part of the Truth on the Market FTC UMC Symposium. If you would like to receive this and other posts relating to these topics, subscribe to the RSS feed here. If you have news items you would like to suggest for inclusion, please mail them to us at ghurwitz@laweconcenter.org and/or kfierro@laweconcenter.org.]

The Federal Trade Commission (FTC) launched one of the most ambitious rulemakings in agency history Aug. 11, with its 3-2 vote to initiate an Advance Notice of Proposed Rulemaking (ANPRM) on commercial surveillance and data security. The divided vote, which broke down along partisan lines, stands in stark contrast to recent bipartisan efforts on Capitol Hill, particularly on the comprehensive American Data Privacy and Protection Act (ADPPA).

Although the rulemaking purports to pursue a new “privacy and data security” regime, it targets far more than consumer privacy. The ANPRM lays out a sweeping project to rethink the regulatory landscape governing nearly every facet of the U.S. internet economy, from advertising to anti-discrimination law, and even to labor relations. Any entity that uses the internet (even for internal purposes) is likely to be affected by this latest FTC action, and public participation in the proposed rulemaking will be important to ensure the agency gets it right.

Summary of the ANPRM  

The vague scope of the FTC’s latest ANPRM begins at its title: “Commercial Surveillance and Data Security” Rulemaking. The announcement states the FTC intends to explore rules “cracking down” on the “business of collecting, analyzing, and profiting from information about people.” The ANPRM then defines the scope of “commercial surveillance” to include virtually any data activity. For example, the ANPRM explains that it includes practices used “to set prices, curate newsfeeds, serve advertisements, and conduct research on people’s behavior, among other things.” The ANPRM also goes on to say that it is concerned about practices “outside of the retail consumer setting” that the agency traditionally regulates. Indeed, the ANPRM defines “consumer” to include “businesses and workers, not just individuals who buy or exchange data for retail goods and services.”

Unlike the bipartisan ADPPA, the ANPRM also takes aim at the “consent” model that the FTC has long advocated to ensure consumers make informed choices about their data online. It claims that “consumers may become resigned to” data practices and “have little to no actual control over what happens to their information.” It also suggests that consumers “do not generally understand” data practices, such that their permission could not be “meaningful”—making express consumer consent to data practices “irrelevant.”

The ANPRM further lists a disparate set of additional FTC concerns, from “pernicious dark pattern practices” to “lax data security practices” to “sophisticated digital advertising systems” to “stalking apps,” “cyber bullying, cyberstalking, and the distribution of child sexual abuse material,” and the use of “social media” among “kids and teens.” It “finally” wraps up with a reference to “growing reliance on automated systems” that may create “new forms and mechanisms for discrimination” in areas like housing, employment, and healthcare. The agency’s stated concern with these automated systems is their apparent “disparate outcomes” “even when automated systems consider only unprotected consumer traits.”

Having set out these concerns, the ANPRM seeks to justify a new rulemaking via a list of what it describes as “decades” of “consumer data privacy and security” enforcement actions. The rulemaking then requests that the public answer 95 questions, covering many different legal and factual issues. For example, the agency requests the public weigh in on the practices “companies use to surveil consumers,” intangible and unmeasurable “harms” created by such practices, the most harmful practices affecting children and teens, techniques that “manipulate consumers into prolonging online activity,” how the commission should balance costs and benefits from any regulation, biometric data practices, algorithmic errors and disparate impacts, the viability of consumer consent, the opacity of “consumer surveillance practices,” and even potential remedies the agency should consider.  

Commissioner Statements in Support of the ANPRM

Every Democratic commissioner issued a separate supporting statement. Chair Lina Khan’s statement justified the rulemaking on the grounds that the FTC is the “de facto law enforcer in this domain.” She also doubled down on the decision to address not only consumer privacy, but issues affecting all “opportunities in our economy and society, as well as core civil liberties and civil rights,” and described being “especially eager to build a record” related to: the limits of “notice and consent” frameworks, as opposed to withdrawing permission for data collection “in the first place”; how to navigate “information asymmetries” with companies; how to address certain “business models” “premised on” persistent tracking; discrimination in automated processes; and workplace surveillance.

Commissioner Rebecca Kelly Slaughter’s longer statement more explicitly attacked the agency’s “notice-and-consent regime” as having “failed to protect users.” She expressed hope that the new rules would take on biometric or location tracking, algorithmic decision-making, and lax data security practices as “long overdue.” Commissioner Slaughter further brushed aside concerns that the rulemaking was inappropriate while Congress considered comprehensive privacy legislation, asserting that the magnitude of the rulemaking was a reason to proceed, not to shy away. She also expressed interest in data-minimization specifications, discriminatory algorithms, and issues affecting kids and teens.

Commissioner Alvaro Bedoya’s short statement likewise expressed support for acting. However, he noted the public comment period would help the agency “discern whether and how to proceed.” Like his colleagues, he identified his particular interests: “emerging discrimination issues”; the mental health of kids and teens; the protection of non-English-speaking communities; and biometric data. On the pending privacy legislation, he noted that:

[ADPPA] is the strongest privacy bill that has ever been this close to passing. I hope it does pass. I hope it passes soon…. This ANPRM will not interfere with that effort. I want to be clear: Should the ADPPA pass, I will not vote for any rule that overlaps with it.

Commissioner Statements Opposed to the ANPRM

Both Republican commissioners published dissents. Commissioner Christine S. Wilson’s dissent urged deference to Congress as it considers a comprehensive privacy law. Yet she also expressed broader concerns about the FTC’s recent changes to its Section 18 rulemaking process, which “decrease opportunities for public input and vest significant authority for the rulemaking proceedings solely with the Chair,” and about the unjustified targeting of practices not subject to prior enforcement action. Notably, Commissioner Wilson also worried the rulemaking was unlikely to survive judicial scrutiny, indicating that Chair Khan’s statements give her “no basis to believe that she will seek to ensure that proposed rule provisions fit within the Congressionally circumscribed jurisdiction of the FTC.”

Commissioner Noah Phillips’ dissent criticized the ANPRM for failing to provide “notice of anything” and thus stripping the public of its participation rights. He argued that the ANPRM’s “myriad” questions appear to be a “mechanism to fish for legal theories that might justify outlandish regulatory ambition outside our jurisdiction.” He further noted that the rulemaking positions the FTC as a legislature to regulate in areas outside of its expertise (e.g., labor law) with potentially disastrous economic costs that it is ill-equipped to understand.

Commissioner Phillips further argued the ANPRM attacks disparate practices based on an “amalgam of cases concerning very different business models and conduct” that cannot show the prevalence of misconduct required for Section 18 rulemaking. He also criticized the FTC for abandoning its own informed-consent model based on paternalistic musings about individuals’ ability to decide for themselves. And finally, he criticized the FTC’s apparent overreach in claiming the mantle of “civil rights enforcer” when Congress never gave it explicit authority to declare discrimination or disparate impacts unlawful in this space.

Implications for Regulated Entities and Others Concerned with Potential Agency Overreach

The sheer breadth of the ANPRM demands the avid attention of potentially regulated entities or those concerned with the FTC’s aggressive rulemaking agenda. The public should seek to meaningfully participate in the rulemaking process to ensure the FTC considers a broad array of viewpoints and has before it the facts necessary to properly define the scope of its own authority and the consequences of any proposed privacy regulation. For example, the FTC may issue a notice of proposed rulemaking defining acts or practices as unfair or deceptive “only where it has reason to believe that the unfair or deceptive acts or practices which are the subject of the proposed rulemaking are prevalent” (emphasis added).

15 U.S. Code § 57a also states that the FTC may make a determination that unfair or deceptive acts or practices are prevalent only if:  “(A) it has issued cease and desist orders regarding such acts or practices, or (B) any other information available to the Commission indicates a widespread pattern of unfair or deceptive acts or practices.” That means that, under the Magnuson-Moss Section 18 rulemaking that the FTC must use here, the agency must show (1) the prevalence of the practices (2) how they are unfair or deceptive, and (3) the economic effect of the rule, including on small businesses and consumers. Any final regulatory analysis also must assess the rule’s costs and benefits and why it was chosen over alternatives. On each count, effective advocacy supported by empirical and sound economic analysis by the public may prove dispositive.

The FTC may have a particularly difficult time meeting this burden of proof with many of the innocuous (and currently permitted) practices identified in the ANPRM. For example, features of modern online commerce like automated decision-making are part of the engine that has powered a decade of innovation, lowered logistical and opportunity costs, and opened up amazing new possibilities for small businesses seeking to serve local consumers and their communities. Commissioner Wilson makes this point well:

Many practices discussed in this ANPRM are presented as clearly deceptive or unfair despite the fact that they stretch far beyond practices with which we are familiar, given our extensive law enforcement experience. Indeed, the ANPRM wanders far afield of areas for which we have clear evidence of a widespread pattern of unfair or deceptive practices. 

The FTC also may be setting itself on an imminent collision course with the “major questions” doctrine, in particular. On the last day of its term this year, the Supreme Court handed down West Virginia v. Environmental Protection Agency, which applied the “major questions doctrine” to rule that the EPA can’t base its controversial Clean Power Plan on a novel interpretation of a relatively obscure provision of the Clean Air Act. An agency rule of such vast “economic and political significance,” Chief Justice John Roberts wrote, requires “clear congressional authorization.” (See “The FTC Heads for Legal Trouble” by Svetlana Gans and Eugene Scalia.) Parties are likely to argue the same holds true here with regard to the FTC’s potential regulatory extension into areas like anti-discrimination and labor law. If the FTC remains on this aggressive course, any final privacy rulemaking could also be a tempting target for a reinvigorated nondelegation doctrine.  

Some members of Congress also may question the wisdom of the ANPRM venturing into the privacy realm at all right now, a point advanced by several of the commissioners. Shortly after the FTC’s announcement, House Energy and Commerce Committee Chairman Frank Pallone Jr. (D-N.J.) stated:

I appreciate the FTC’s effort to use the tools it has to protect consumers, but Congress has a responsibility to pass comprehensive federal privacy legislation to better equip the agency, and others, to protect consumers to the greatest extent.

Sen. Roger Wicker (R-Miss.), the ranking member on the Senate Commerce Committee and a leading GOP supporter of the bipartisan legislation, likewise said that the FTC’s move helps “underscore the urgency for the House to bring [ADPPA]  to the floor and for the Senate Commerce Committee to advance it through committee.”  

The FTC’s ANPRM will likely have broad implications for the U.S. economy. Stakeholders can participate in the rulemaking in several ways, including registering by Aug. 31 to speak at the FTC’s Sept. 8 public forum. Stakeholders should also consider submitting public comments and empirical evidence within 60 days of the ANPRM’s publication in the Federal Register, and insist that the FTC hold informal hearings as required under the Magnuson-Moss Act.

While the FTC is rightfully the nation’s top consumer cop, an advance notice of this scope demands active public awareness and participation to ensure the agency gets it right.

 

Having earlier passed through subcommittee, the American Data Privacy and Protection Act (ADPPA) has now been cleared for floor consideration by the U.S. House Energy and Commerce Committee. Before the markup, we noted that the ADPPA mimics some of the worst flaws found in the European Union’s General Data Protection Regulation (GDPR), while creating new problems that the GDPR had avoided. Alas, the amended version of the legislation approved by the committee not only failed to correct those flaws, but in some cases it actually undid some of the welcome corrections that had been made to the original discussion draft.

Is Targeted Advertising ‘Strictly Necessary’?

The ADPPA’s original discussion draft classified “information identifying an individual’s online activities over time or across third party websites” in the broader category of “sensitive covered data,” which could be collected or processed only with a consumer’s expression of affirmative consent (“cookie consent”). Perhaps noticing the questionable utility of such a rule, the bill’s sponsors removed “individual’s online activities” from the definition of “sensitive covered data” in the version of the ADPPA that was ultimately introduced.

The manager’s amendment from Energy and Commerce Committee Chairman Frank Pallone (D-N.J.) reverted that change and “individual’s online activities” are once again deemed to be “sensitive covered data.” However, the marked-up version of the ADPPA doesn’t require express consent to collect sensitive covered data. In fact, it seems not to consider the possibility of user consent; firms will instead be asked to prove that their collection of sensitive data was a “strict necessity.”

The new rule for sensitive data—in Section 102(2)—is that collecting or processing such data is allowed “where such collection or processing is strictly necessary to provide or maintain a specific product or service requested by the individual to whom the covered data pertains, or is strictly necessary to effect a purpose enumerated” in Section 101(b) (though with exceptions—notably for first-party advertising and targeted advertising).

This raises the question of whether, e.g., the use of targeted advertising based on a user’s online activities is “strictly necessary” to provide or maintain Facebook’s social network. Even if the courts eventually decide, in some cases, that it is necessary, we can expect a good deal of litigation on this point. This litigation risk will impose significant burdens on providers of ad-supported online services. Moreover, it would effectively invite judges to make business decisions, a role for which they are profoundly ill-suited.

Given that the ADPPA includes the “right to opt-out of targeted advertising” (Section 204(c)) and a special targeted-advertising “permissible purpose” (Section 101(b)(17)), it must be possible for businesses to engage in targeted advertising. And if it is possible, then collecting and processing the information needed for targeted advertising—including information on an “individual’s online activities,” e.g., unique identifiers (Section 2(39))—must be capable of being “strictly necessary to provide or maintain a specific product or service requested by the individual.” (Alternatively, it could instead be strictly necessary for one of the other permissible purposes in Section 101(b), but none of them appears to apply to collecting data for the purpose of targeted advertising.)

The ADPPA itself thus provides for the possibility of targeted advertising. Therefore, there should be no reason for legal ambiguity about when collecting “individual’s online activities” is “strictly necessary to provide or maintain a specific product or service requested by the individual.” Do we want judges or other government officials to decide which ad-supported services “strictly” require targeted advertising? Choosing business models for private enterprises is hardly an appropriate role for the government. The easiest way out of this conundrum would be simply to drop the ill-considered extension of “sensitive covered data” and revert to the narrower definition in the ADPPA version that was initially introduced.

Developing New Products and Services

As noted previously, the original ADPPA discussion draft allowed first-party use of personal data to “provide or maintain a specific product or service requested by an individual” (Section 101(a)(1)). What about using the data to develop new products and services? Can a business even request user consent for that? Under the GDPR, that is possible. Under the ADPPA, it may not be.

The general limitation on data use (“provide or maintain a specific product or service requested by an individual”) was retained from the original ADPPA discussion draft in the version approved by the committee. As originally introduced, the bill included an exception that could have partially addressed the concern in Section 101(b)(2) (emphasis added):

With respect to covered data previously collected in accordance with this Act, notwithstanding this exception, to process such data as necessary to perform system maintenance or diagnostics, to maintain a product or service for which such data was collected, to conduct internal research or analytics, to improve a product or service for which such data was collected …

Arguably, developing new products and services largely involves “internal research or analytics,” which would be covered under this exception. If the business later wanted to invite users of an old service to use a new service, the business could contact them based on a separate exception for first-party marketing and advertising (Section 101(b)(11) of the introduced bill).

This welcome development was reversed in the manager’s amendment. The new text of the exception (now Section 101(b)(2)(C)) is narrower in a key way (emphasis added): “to conduct internal research or analytics to improve a product or service for which such data was collected.” Hence, it still looks like businesses will find it difficult to use first-party data to develop new products or services.

‘De-Identified Data’ Remains Unclear

Our earlier analysis noted significant confusion in the ADPPA’s concept of “de-identified data.” Neither the introduced version nor the markup amendments addressed those concerns, so it seems worthwhile to repeat and update the criticism here. The drafters seemed to be aiming for a partial exemption from the default data-protection regime for datasets that no longer contain personally identifying information, but that are derived from datasets that once did. Instead of providing such an exemption, however, the rules for de-identified data essentially extend the ADPPA’s scope to nonpersonal data, while also creating a whole new set of problems.

The basic problem is that the definition of “de-identified data” in the ADPPA is not limited to data derived from identifiable data. In the marked-up version, the definition covers: “information that does not identify and is not linked or reasonably linkable to a distinct individual or a device, regardless of whether the information is aggregated.” In other words, it is the converse of “covered data” (personal data): whatever is not “covered data” is “de-identified data.” Even if some data are not personally identifiable and are not a result of a transformation of data that was personally identifiable, they still count as “de-identified data.” If this reading is correct, it creates an absurd result that sweeps all information into the scope of the ADPPA.
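To see the structure of the problem, consider a minimal sketch (in Python, with predicates of my own devising rather than statutory text) of the literal reading described above: “de-identified data” is defined simply as the complement of “covered data,” so even data that never had any connection to a person falls within it.

```python
# Illustrative sketch only; the predicates are stand-ins for the ADPPA definitions,
# not statutory language.

def is_covered_data(record: dict) -> bool:
    """Roughly: data that identifies, or is reasonably linkable to, an individual or device."""
    return record.get("identifies_individual_or_device", False) or record.get("reasonably_linkable", False)

def is_de_identified_data(record: dict) -> bool:
    """Literal reading of the marked-up definition: simply whatever is not 'covered data'."""
    return not is_covered_data(record)

# Weather readings were never derived from personal data, yet on this reading they
# count as "de-identified data" and would trigger the Section 2(12) duties.
weather_reading = {"city": "Brussels", "temp_c": 21.5}
assert is_de_identified_data(weather_reading)  # the "absurd result" described above
```

If the definition were instead limited to data actually derived from identifiable data, the check would need a further element (provenance), which is roughly what the hypothetical fix discussed below assumes.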

For the sake of argument, let’s assume that this confusion can be fixed and that the definition of “de-identified data” is limited to data that:

  1. are derived from identifiable data, but
  2. hold a possibility of re-identification (weaker than “reasonably linkable”), and
  3. are processed by the entity that previously processed the original identifiable data.

Remember that we are talking about data that are not “reasonably linkable to an individual.” Hence, the intent appears to be that the rules on de-identified data would apply to nonpersonal data that would otherwise not be covered by the ADPPA.

The rationale for this may be that it is difficult, legally and practically, to differentiate between personally identifiable data and data that are not personally identifiable. A good deal of seemingly “anonymous” data may be linked to an individual—e.g., by connecting the dataset at hand with some other dataset.

The case for regulation in an example where a firm clearly dealt with personal data, and then derived some apparently de-identified data from them, may actually be stronger than in the case of a dataset that was never directly derived from personal data. But is that case sufficient to justify the ADPPA’s proposed rules?

The ADPPA imposes several duties on entities dealing with “de-identified data” in Section 2(12) of the marked-up version:

  1. To take “reasonable technical measures to ensure that the information cannot, at any point, be used to re-identify any individual or device that identifies or is linked or reasonably linkable to an individual”;
  2. To publicly commit “in a clear and conspicuous manner—
    1. to process and transfer the information solely in a de-identified form without any reasonable means for re-identification; and
    2. to not attempt to re-identify the information with any individual or device that identifies or is linked or reasonably linkable to an individual;”
  3. To “contractually obligate[] any person or entity that receives the information from the covered entity or service provider” to comply with all of the same rules and to include such an obligation “in all subsequent instances for which the data may be received.”

The first duty is superfluous and adds interpretative confusion, given that de-identified data, by definition, are not “reasonably linkable” with individuals.

The second duty — public commitment — unreasonably restricts what can be done with nonpersonal data. Firms may have many legitimate reasons to de-identify data and then to re-identify them later. This provision would effectively prohibit firms from attempts at data minimization (resulting in de-identification) if those firms may at any point in the future need to link the data with individuals. It seems that the drafters had some very specific (and likely rare) mischief in mind here, but ended up prohibiting a vast sphere of innocuous activity.

Note that, for data to become “de-identified data,” they must first be collected and processed as “covered data” in conformity with the ADPPA and then transformed (de-identified) in such a way as to no longer meet the definition of “covered data.” If someone then re-identifies the data, this will again constitute “collection” of “covered data” under the ADPPA. At every point of the process, personally identifiable data is covered by the ADPPA rules on “covered data.”

Finally, the third duty—“share alike” (to “contractually obligate[] any person or entity that receives the information from the covered entity to comply”)—faces a very similar problem as the second duty. Under this provision, the only way to preserve the option for a third party to identify the individuals linked to the data will be for the third party to receive the data in a personally identifiable form. In other words, this provision makes it impossible to share data in a de-identified form while preserving the possibility of re-identification.

Logically speaking, we would have expected a possibility to share the data in a de-identified form; this would align with the principle of data minimization. What the ADPPA does instead is to effectively impose a duty to share de-identified personal data together with identifying information. This is a truly bizarre result, directly contrary to the principle of data minimization.

Fundamental Issues with Enforcement

One of the most important problems with the ADPPA is its enforcement provisions. Most notably, the private right of action creates pernicious incentives for excessive litigation by providing for both compensatory damages and open-ended injunctive relief. Small businesses have a right to cure before damages can be sought, but many larger firms are not given a similar entitlement. Given such open-ended provisions as whether using web-browsing behavior is “strictly necessary” to improve a product or service, the litigation incentives become obvious. At the very least, there should be a general opportunity to cure, particularly given the broad restrictions placed on essentially all data use.

The bill also creates multiple overlapping power centers for enforcement (as we have previously noted):

The bill carves out numerous categories of state law that would be excluded from pre-emption… as well as several specific state laws that would be explicitly excluded, including Illinois’ Genetic Information Privacy Act and elements of the California Consumer Privacy Act. These broad carve-outs practically ensure that ADPPA will not create a uniform and workable system, and could potentially render the entire pre-emption section a dead letter. As written, it offers the worst of both worlds: a very strict federal baseline that also permits states to experiment with additional data-privacy laws.

Unfortunately, the marked-up version appears to double down on these problems. For example, the bill pre-empts the Federal Communication Commission (FCC) from enforcing sections 222, 338(i), and 631 of the Communications Act, which pertain to privacy and data security. An amendment was offered that would have pre-empted the FCC from enforcing any provisions of the Communications Act (e.g., sections 201 and 202) for data-security and privacy purposes, but it was withdrawn. Keeping two federal regulators on the beat for a single subject area creates an inefficient regime. The FCC should be completely pre-empted from regulating privacy issues for covered entities.

The amended bill also includes an ambiguous provision that appears to serve as a partial carveout for enforcement by the California Privacy Protection Agency (CPPA). Some members of the California delegation—notably, committee members Anna Eshoo and Doris Matsui (both D-Calif.)—have expressed concern that the bill would pre-empt California’s own California Privacy Rights Act. A proposed amendment by Eshoo to clarify that the bill was merely a federal “floor” and that state laws may go beyond ADPPA’s requirements failed in a 48-8 roll call vote. However, the marked-up version of the legislation does explicitly specify that the CPPA “may enforce this Act, in the same manner, it would otherwise enforce the California Consumer Privacy Act.” How courts might interpret this language should the CPPA seek to enforce provisions of the CCPA that otherwise conflict with the ADPPA is unclear, thus magnifying the problem of compliance with multiple regulators.

Conclusion

As originally conceived, the basic conceptual structure of the ADPPA was, to a very significant extent, both confused and confusing. Not much, if anything, has since improved—especially in the marked-up version, which regressed the ADPPA to some of the notably bad features of the original discussion draft. The rules on de-identified data are also very puzzling: their effect contradicts the basic principle of data minimization that the ADPPA purports to uphold. Those examples strongly suggest that the ADPPA is still far from being a properly considered candidate for comprehensive federal privacy legislation.

European Union lawmakers appear close to finalizing a number of legislative proposals that aim to reform the EU’s financial-regulation framework in response to the rise of cryptocurrencies. Prominent within the package are new anti-money laundering and “countering the financing of terrorism” rules (AML/CFT), including an extension of the so-called “travel rule.” The travel rule, which currently applies to wire transfers managed by global banks, would be extended to require crypto-asset service providers to similarly collect and make available details about the originators and beneficiaries of crypto-asset transfers.

This legislative process proceeded with unusual haste in recent months, which partially explains why legal objections to the proposals have not been adequately addressed. The resulting legislation is fundamentally flawed to such an extent that some of its key features are clearly invalid under EU primary (treaty) law and liable to be struck down by the Court of Justice of the European Union (CJEU). 

In this post, I will offer a brief overview of some of the concerns, which I also discuss in this recent Twitter thread. I focus primarily on the travel rule, which—in the light of EU primary law—constitutes a broad and indiscriminate surveillance regime for personal data. This characterization also applies to most of AML/CFT.

The CJEU, the EU’s highest court, established a number of conditions that such legally mandated invasions of privacy must satisfy in order to be valid under EU primary law (the EU Charter of Fundamental Rights). The legal consequences of invalidity are illustrated well by the Digital Rights Ireland judgment, in which the CJEU struck down an entire piece of EU legislation (the Data Retention Directive). Alternatively, the CJEU could decide to interpret EU law as if it complied with primary law, even if that is contrary to the text.

The Travel Rule in the Transfer of Funds Regulation

The EU travel rule is currently contained in the 2015 Wire Transfer Regulation (WTR). But at the end of June, EU legislators reached a likely final deal on its replacement, the Transfer of Funds Regulation (TFR; see the original proposal from July 2021). I focus here on the TFR, but much of the argument also applies to the older WTR now in force. 

The TFR imposes obligations on payment-system providers and providers of crypto-asset transfers (referred to here, collectively, as “service providers”) to collect, retain, transfer to other service providers, and—in some cases—report to state authorities:

…information on payers and payees, accompanying transfers of funds, in any currency, and the information on originators and beneficiaries, accompanying transfers of crypto-assets, for the purposes of preventing, detecting and investigating money laundering and terrorist financing, where at least one of the payment or crypto-asset service providers involved in the transfer of funds or crypto-assets is established in the Union. (Article 1 TFR)

The TFR’s scope extends to money transfers between bank accounts or other payment accounts, as well as transfers of crypto assets other than peer-to-peer transfers without the involvement of a service provider (Article 2 TFR). Hence, the scope of the TFR includes, but is not limited to, all those who send or receive bank transfers. This constitutes the vast majority of adult EU residents.

The information that service providers are obligated to collect and retain (under Articles 4, 10, 14, and 21 TFR) includes data that allow for the identification of both sides of a transfer of funds (the parties’ names, as well as the address, country, official personal document number, customer identification number, or the sender’s date and place of birth) and for linking their identity with the (payment or crypto-asset) account number or crypto-asset wallet address. The TFR also obligates service providers to collect and retain additional data to verify the accuracy of the identifying information “on the basis of documents, data or information obtained from a reliable and independent source” (Articles 4(4), 7(3), 14(5), 16(2) TFR).
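To make the breadth of these obligations concrete, the following is a rough sketch of the kind of record a service provider would need to attach to every transfer; the field names are my own illustrative labels for the categories listed in the TFR, not terms drawn from the regulation itself.

```python
# Illustrative data structure only; field names are hypothetical approximations of the
# originator/beneficiary information categories described in Articles 4, 10, 14, and 21 TFR.
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class PartyRecord:
    name: str
    account_or_wallet: str                     # payment-account number or crypto-asset wallet address
    address: Optional[str] = None              # plus at least one further identifier for the originator:
    country: Optional[str] = None
    official_document_number: Optional[str] = None
    customer_id: Optional[str] = None
    date_and_place_of_birth: Optional[str] = None

@dataclass
class TransferRecord:
    originator: PartyRecord
    beneficiary: PartyRecord
    verification_evidence: List[str] = field(default_factory=list)  # e.g., ID copies or utility bills used for verification
    retention_years: int = 5                   # indiscriminate five-year retention (Article 21 TFR)
```

A record of this kind would be created and retained for essentially every bank or crypto-asset transfer touching the EU, regardless of any suspicion of wrongdoing.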

The scope of the obligation to collect and retain verification data is vague and is likely to lead service providers to require their customers to provide copies of passports, national ID documents, bank or payment-account statements, and utility bills, as is the case under the WTR and the 5th AML Directive. Such data is overwhelmingly likely to go beyond information on the civil identity of customers and will often, if not almost always, allow inferring even sensitive personal data about the customer.

The data-collection and retention obligations in the TFR are general and indiscriminate. No distinction is made in the TFR’s data-collection and retention provisions based on the likelihood of a connection with criminal activity, except for verification data in the case of transfers of funds (an exception not applicable to crypto assets). Even the distinction in the case of verification data for transfers of funds (“has reasonable grounds for suspecting money laundering or terrorist financing”) arguably lacks the precision required under CJEU case law.

Analogies with the CJEU’s Passenger Name Records Decision

In late June, following its established approach in similar cases, the CJEU gave its judgment in the Ligue des droits humains case, which challenged the EU and Belgian regimes on passenger name records (PNR). The CJEU decided there that the applicable EU law, the PNR Directive, is valid under EU primary law. But it reached that result by interpreting some of the directive’s provisions in ways contrary to their express language and by deciding that some national legal rules implementing the directive are invalid. Some features of the PNR regime that were challenged in that case are strikingly similar to those of the TFR regime.

First, just like the TFR, the PNR rules imposed a five-year data-retention period for the data of all passengers, even where there is no “objective evidence capable of establishing a risk that relates to terrorist offences or serious crime having an objective link, even if only an indirect one, with those passengers’ air travel.” The court decided that this was a disproportionate restriction of the rights to privacy and to the protection of personal data under Articles 7-8 of the EU Charter of Fundamental Rights. Instead of invalidating the relevant article of the PNR Directive, the CJEU reinterpreted it as if it only allowed for five-year retention in cases where there is evidence of a relevant connection to criminality.

Applying analogous reasoning to the TFR, which imposes an indiscriminate five-year data-retention period in its Article 21, the conclusion must be that this TFR provision is invalid under Articles 7-8 of the charter. Article 21 TFR may, at minimum, need to be recast to apply only to that transaction data where there is “objective evidence capable of establishing a risk” that it is connected to serious crime.

The court also considered the issue of government access to data that has already been collected. Under the CJEU’s established interpretation of the EU Charter, “it is essential that access to retained data by the competent authorities be subject to a prior review carried out either by a court or by an independent administrative body.” In the PNR regime, at least some countries (such as Belgium) assigned this role to their “passenger information units” (PIUs). The court noted that a PIU is “an authority competent for the prevention, detection, investigation and prosecution of terrorist offences and of serious crime, and that its staff members may be agents seconded from the competent authorities” (e.g., from police or intelligence authorities). But according to the court:

That requirement of independence means that that authority must be a third party in relation to the authority which requests access to the data, in order that the former is able to carry out the review, free from any external influence. In particular, in the criminal field, the requirement of independence entails that the said authority, first, should not be involved in the conduct of the criminal investigation in question and, secondly, must have a neutral stance vis-a-vis the parties to the criminal proceedings …

The CJEU decided that PIUs do not satisfy this requirement of independence and, as such, cannot decide on government access to the retained data.

The TFR (especially its Article 19 on provision of information) does not provide for prior independent review of access to retained data. To the extent that such a review is conducted by Financial Intelligence Units (FIUs) under the AML Directive, concerns very similar to those raised about PIUs under the PNR regime arise. While Article 32 of the AML Directive requires FIUs to be independent, that doesn’t necessarily mean that they are independent in the ways required of the authority that will decide access to retained data under Articles 7-8 of the EU Charter. For example, the AML Directive does not preclude the possibility of seconding public prosecutors, police, or intelligence officers to FIUs.

It is worth noting that none of the conclusions reached by the CJEU in the PNR case are novel; they are well-grounded in established precedent. 

A General Proportionality Argument

Setting aside specific analogies with previous cases, the TFR clearly has not been accompanied by a more general and fundamental reflection on the proportionality of its basic scheme in the light of the EU Charter. A pressing question is whether the TFR’s far-reaching restrictions of the rights established in Articles 7-8 of the EU Charter (and perhaps other rights, like freedom of expression in Article 11) are strictly necessary and proportionate. 

Arguably, the AML/CFT regime—including the travel rule—is significantly more costly and more rights-restricting than potential alternatives. The basic problem is that there is no reliable data on the relative effectiveness of measures like the travel rule. Defenders of the current AML/CFT regime focus on evidence that it contributes to preventing or prosecuting some crime. But this is not the relevant question when it comes to proportionality. The relevant question is whether those measures are as effective or more effective than less costly and more privacy-preserving alternatives. One conservative estimate holds that AML compliance costs in Europe were “120 times the amount successfully recovered from criminals” and exceeded the estimated total of criminal funds (including funds not seized or identified).

The fact that the current AML/CFT regime is a de facto global standard cannot serve as a sufficient justification either, given that EU fundamental law is perfectly comfortable in rejecting non-European law-enforcement practices (see the CJEU’s decision in Schrems). The travel rule has been unquestioningly imported to EU law from U.S. law (via FATF), where the standards of constitutional protection of privacy are much different than under the EU Charter. This fact would likely be noticed by the Court of Justice in any putative challenge to the TFR or other elements of the AML/CFT regime. 

Here, I only flag the possibility of a general proportionality challenge. Much more work needs to be done to flesh it out.

Conclusion

Due to the political and resource constraints of the EU legislative process, it is possible that the legislative proposals in the financial-regulation package did not receive sufficient legal scrutiny from the perspective of their compatibility with the EU Charter of Fundamental Rights. This hypothesis would explain the presence of seemingly clear violations, such as the indiscriminate five-year data-retention period. Given that none of the proposals has, as yet, been voted into law, making the legislators aware of the problem may help to address at least some of the issues.

Legal arguments about the AML/CFT regime’s incompatibility with the EU Charter should be accompanied by concrete alternative proposals to achieve the goals of preventing and combating serious crime that, according to the best evidence, the current AML/CFT regime does ineffectively. We need more regulatory imagination. For example, one part of the solution may be to properly staff and equip government agencies tasked with prosecuting financial crime.

But it’s also possible that the proposals, including the TFR, will be adopted broadly without amendment. In that case, the main recourse available to EU citizens (or to any EU government) will be to challenge the legality of the measures before the Court of Justice.

Just three weeks after a draft version of the legislation was unveiled by congressional negotiators, the American Data Privacy and Protection Act (ADPPA) is heading to its first legislative markup, set for tomorrow morning before the U.S. House Energy and Commerce Committee’s Consumer Protection and Commerce Subcommittee.

Though the bill’s legislative future remains uncertain, particularly in the U.S. Senate, it would be appropriate to check how the measure compares with, and could potentially interact with, the comprehensive data-privacy regime promulgated by the European Union’s General Data Protection Regulation (GDPR). A preliminary comparison of the two shows that the ADPPA risks adopting some of the GDPR’s flaws, while adding some entirely new problems.

A common misconception about the GDPR is that it imposed a requirement for “cookie consent” pop-ups that mar the experience of European users of the Internet. In fact, this requirement comes from a different and much older piece of EU law, the 2002 ePrivacy Directive. In most circumstances, the GDPR itself does not require express consent for cookies or other common and beneficial mechanisms to keep track of user interactions with a website. Website publishers could likely rely on one of two lawful bases for data processing outlined in Article 6 of the GDPR:

  • data processing is necessary in connection with a contractual relationship with the user, or
  • “processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party” (unless overridden by interests of the data subject).

For its part, the ADPPA generally adopts the “contractual necessity” basis for data processing but excludes the option to collect or process “information identifying an individual’s online activities over time or across third party websites.” The ADPPA instead classifies such information as “sensitive covered data.” It’s difficult to see what benefit users would derive from having to click that they “consent” to features that are clearly necessary for the most basic functionality, such as remaining logged in to a site or adding items to an online shopping cart. But the expected result will be many, many more popup consent queries, like those that already bedevil European users.

Using personal data to create new products

Section 101(a)(1) of the ADPPA expressly allows the use of “covered data” (personal data) to “provide or maintain a specific product or service requested by an individual.” But the legislation is murkier when it comes to the permissible uses of covered data to develop new products. This would only clearly be allowed where each data subject concerned could be asked if they “request” the specific future product. By contrast, under the GDPR, it is clear that a firm can ask for user consent to use their data to develop future products.

Moving beyond Section 101, we can look to the “general exceptions” in Section 209 of the ADPPA, specifically the exception in Section 209(a)(2):

With respect to covered data previously collected in accordance with this Act, notwithstanding this exception, to perform system maintenance, diagnostics, maintain a product or service for which such covered data was collected, conduct internal research or analytics to improve products and services, perform inventory management or network management, or debug or repair errors that impair the functionality of a service or product for which such covered data was collected by the covered entity, except such data shall not be transferred.

While this provision mentions conducting “internal research or analytics to improve products and services,” it also refers to “a product or service for which such covered data was collected.” The concern here is that this could be interpreted as only allowing “research or analytics” in relation to existing products known to the data subject.

The road ends here for personal data that the firm collects itself. Somewhat paradoxically, the firm could more easily make the case for using data obtained from a third party. Under Section 302(b) of the ADPPA, a firm only has to ensure that it is not processing “third party data for a processing purpose inconsistent with the expectations of a reasonable individual.” Such a relatively broad “reasonable expectations” basis is not available for data collected directly by first-party covered entities.

Under the GDPR, aside from the data subject’s consent, the firm also could rely on its own “legitimate interest” as a lawful basis to process user data to develop new products. It is true, however, that due to requirements that the interests of the data controller and the data subject must be appropriately weighed, the “legitimate interest” basis is probably less popular in the EU than alternatives like consent or contractual necessity.

Developing this path in the ADPPA would arguably provide a more sensible basis for data uses like the reuse of data for new product development. This could be superior even to express consent, which faces problems like “consent fatigue.” These are unlikely to be solved by promulgating detailed rules on “affirmative consent,” as proposed in Section 2 of the ADPPA.

Problems with ‘de-identified data’

Another example of significant confusion in the ADPPA’s basic conceptual scheme is the bill’s notion of “de-identified data.” The drafters seemed to be aiming for a partial exemption from the default data-protection regime for datasets that no longer contain personally identifying information, but that are derived from datasets that once did. Instead of providing such an exemption, however, the rules for de-identified data essentially extend the ADPPA’s scope to nonpersonal data, while also creating a whole new set of problems.

The basic problem is that the definition of “de-identified data” in the ADPPA is not limited to data derived from identifiable data. The definition covers: “information that does not identify and is not linked or reasonably linkable to an individual or a device, regardless of whether the information is aggregated.” In other words, it is the converse of “covered data” (personal data): whatever is not “covered data” is “de-identified data.” Even if some data are not personally identifiable and are not a result of a transformation of data that was personally identifiable, they still count as “de-identified data.” If this reading is correct, it creates an absurd result that sweeps all information into the scope of the ADPPA.

For the sake of argument, let’s assume that this confusion can be fixed and that the definition of “de-identified data” is limited to data that:

  1. are derived from identifiable data, but
  2. hold a possibility of re-identification (weaker than “reasonably linkable”), and
  3. are processed by the entity that previously processed the original identifiable data.

Remember that we are talking about data that are not “reasonably linkable to an individual.” Hence, the intent appears to be that the rules on de-identified data would apply to non-personal data that would otherwise not be covered by the ADPPA.

The rationale for this may be that it is difficult, legally and practically, to differentiate between personally identifiable data and data that are not personally identifiable. A good deal of seemingly “anonymous” data may be linked to an individual—e.g., by connecting the dataset at hand with some other dataset.

The case for regulation in an example where a firm clearly dealt with personal data, and then derived some apparently de-identified data from them, may actually be stronger than in the case of a dataset that was never directly derived from personal data. But is that case sufficient to justify the ADPPA’s proposed rules?

The ADPPA imposes several duties on entities dealing with “de-identified data” (that is, all data that are not considered “covered” data):

  1. to take “reasonable measures to ensure that the information cannot, at any point, be used to re-identify any individual or device”;
  2. to publicly commit “in a clear and conspicuous manner—
    1. to process and transfer the information solely in a de-identified form without any reasonable means for re-identification; and
    2. to not attempt to re-identify the information with any individual or device;”
  3. to “contractually obligate[] any person or entity that receives the information from the covered entity to comply with all of the” same rules.

The first duty is superfluous and adds interpretative confusion, given that de-identified data, by definition, are not “reasonably linkable” with individuals.

The second duty — public commitment — unreasonably restricts what can be done with nonpersonal data. Firms may have many legitimate reasons to de-identify data and then to re-identify them later. This provision would effectively prohibit firms from effective attempts at data minimization (resulting in de-identification) if those firms may at any point in the future need to link the data with individuals. It seems that the drafters had some very specific (and likely rare) mischief in mind here but ended up prohibiting a vast sphere of innocuous activity.

Note that, for data to become “de-identified data,” they must first be collected and processed as “covered data” in conformity with the ADPPA and then transformed (de-identified) in such a way as to no longer meet the definition of “covered data.” If someone then re-identifies the data, this will again constitute “collection” of “covered data” under the ADPPA. At every point of the process, personally identifiable data is covered by the ADPPA rules on “covered data.”

Finally, the third duty—“share alike” (to “contractually obligate[] any person or entity that receives the information from the covered entity to comply”)—faces a very similar problem as the second duty. Under this provision, the only way to preserve the option for a third party to identify the individuals linked to the data will be for the third party to receive the data in a personally identifiable form. In other words, this provision makes it impossible to share data in a de-identified form while preserving the possibility of re-identification. Logically speaking, we would have expected a possibility to share the data in a de-identified form; this would align with the principle of data minimization. What the ADPPA does instead is effectively to impose a duty to share de-identified personal data together with identifying information. This is a truly bizarre result, directly contrary to the principle of data minimization.

Conclusion

The basic conceptual structure of the legislation that subcommittee members will take up this week is, to a very significant extent, both confused and confusing. Perhaps in tomorrow’s markup, a more open and detailed discussion of what the drafters were trying to achieve could help to improve the scheme, as it seems that some key provisions of the current draft would lead to absurd results (e.g., those directly contrary to the principle of data minimization).

Given that the GDPR is already a well-known point of reference, including for U.S.-based companies and privacy professionals, the ADPPA may do better to re-use the best features of the GDPR’s conceptual structure while cutting its excesses. Re-inventing the wheel by proposing new concepts did not work well in this ADPPA draft.

The European Union’s Digital Markets Act (DMA) has been finalized in principle, although some legislative details are still being negotiated. Alas, our earlier worries about user privacy still have not been addressed adequately.

The key rules to examine are the DMA’s interoperability mandates. The most recent DMA text introduced a potentially very risky new kind of compulsory interoperability “of number-independent interpersonal communications services” (e.g., for services like WhatsApp). However, this obligation comes with a commendable safeguard in the form of an equivalence standard: interoperability cannot lower the current level of user security. Unfortunately, the DMA’s other interoperability provisions lack similar security safeguards.

The lack of serious consideration of security issues is perhaps best illustrated by how the DMA might actually preclude makers of web browsers from protecting their users from some of the most common criminal attacks, like phishing.

Key privacy concern: interoperability mandates

The original DMA proposal included several interoperability and data-portability obligations regarding the “core platform services” of platforms designated as “gatekeepers”—i.e., the largest online platforms. Those provisions were changed considerably during the legislative process. Among its other provisions, the most recent (May 11, 2022) version of the DMA includes:

  1. a prohibition on restricting users—“technically or otherwise”—from switching among and subscribing to software and services “accessed using the core platform services of the gatekeeper” (Art 6(6));
  2. an obligation for gatekeepers to allow interoperability with their operating system or virtual assistant (Art 6(7)); and
  3. an obligation “on interoperability of number-independent interpersonal communications services” (Art 7).

To varying degrees, these provisions attempt to safeguard privacy and security interests, but the first two do so in a clearly inadequate way.

First, the Article 6(6) prohibition on restricting users from using third-party software or services “accessed using the core platform services of the gatekeeper” notably applies to web services (web content) that a user can access through the gatekeeper’s web browser (e.g., Safari for iOS). (Web browsers are defined as core platform services in Art 2(2) DMA.)

Given that web content is typically not installed in the operating system, but accessed through a browser (i.e., likely “accessed using a core platform service of the gatekeeper”), the earlier “side-loading” provisions (Article 6(4), which is discussed further below) would not apply here. This leads to what appears to be a significant oversight: gatekeepers would be almost completely disabled from protecting their users when those users access the Internet through web browsers, one of the most significant channels of privacy and security risk.

The Federal Bureau of Investigation (FBI) has identified “phishing” as one of the three top cybercrime types, based on the number of victim complaints. A successful phishing attack normally involves a user accessing a website that is impersonating a service the user trusts (e.g., an email account or corporate login). Browser developers can prevent some such attacks, e.g., by keeping “block lists” of websites known to be malicious and warning about, or even preventing, access to such sites. Prohibiting platforms from restricting their users’ access to third-party services would also prohibit this vital cybersecurity practice.
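To illustrate the practice at stake, here is a minimal sketch (not any real browser’s implementation; the domains and function names are invented for illustration) of the kind of blocklist check described above, which warns about or blocks navigation to known-malicious sites.

```python
# Minimal illustration of a browser-style phishing blocklist check.
# The blocklist below is a hypothetical, hard-coded stand-in; real browsers rely on
# continuously updated reputation services.
from urllib.parse import urlparse

KNOWN_PHISHING_DOMAINS = {"examp1e-login.com", "secure-paypa1-update.net"}

def navigation_allowed(url: str) -> bool:
    """Return False (i.e., warn or block) when the destination host is on the blocklist."""
    host = urlparse(url).hostname or ""
    return host not in KNOWN_PHISHING_DOMAINS

print(navigation_allowed("https://examp1e-login.com/reset"))  # False: the browser would intervene
print(navigation_allowed("https://example.com/"))             # True: navigation proceeds
```

Read literally, Art 6(6), reinforced by the Art 13(6) anti-circumvention rule discussed below, could treat exactly this kind of intervention as impermissibly “restricting” users from accessing a third-party service.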

Under Art 6(4), in the case of installed third-party software, the gatekeepers can take:

…measures to ensure that third party software applications or software application stores do not endanger the integrity of the hardware or operating system provided by the gatekeeper, provided that such measures go no further than is strictly necessary and proportionate and are duly justified by the gatekeeper.

The gatekeepers can also apply:

measures and settings other than default settings, enabling end users to effectively protect security in relation to third party software applications or software application stores, provided that such measures and settings go no further than is strictly necessary and proportionate and are duly justified by the gatekeeper.

None of those safeguards, insufficient as they are—see the discussion below of Art 6(7)—are present in Art 6(6). Worse still is that the anti-circumvention rule in Art 13(6) applies here, prohibiting gatekeepers from offering “choices to the end-user in a non-neutral manner.” That is precisely what a web-browser developer does when warning users of security risks or when blocking access to websites known to be malicious—e.g., to protect users from phishing attacks.

This concern is not addressed by the general provision in Art 8(1) requiring the gatekeepers to ensure “that the implementation” of the measures under the DMA complies with the General Data Protection Regulation (GDPR), as well as “legislation on cyber security, consumer protection, product safety.”

The first concern is that this would not allow the gatekeepers to offer a higher standard of user protection than that required by the arguably weak or overly vague existing legislation. Also, given that the DMA’s rules (including future delegated legislation) are likely to be more specific—in the sense of constituting lex specialis—than EU rules on privacy and security, establishing a coherent legal interpretation that would allow gatekeepers to protect their users is likely to be unnecessarily difficult.

Second, the obligation from Art 6(7) for gatekeepers to allow interoperability with their operating system or virtual assistant only includes the first kind of a safeguard from Art 6(4), concerning the risk of compromising “the integrity of the operating system, virtual assistant or software features provided by the gatekeeper.” However, the risks from which service providers aim to protect users are by no means limited to system “integrity.” A user may be a victim of, e.g., a phishing attack that does not explicitly compromise the integrity of the software they used.

Moreover, as in Art 6(4), there is a problem with the “strictly necessary and proportionate” qualification. This standard may be too high and may push gatekeepers to offer laxer security in order to avoid liability for adopting measures that the European Commission and the courts would judge as going beyond what is strictly necessary or indispensable.

The relevant recitals from the DMA preamble, instead of aiding in interpretation, add more confusion. The most notorious example is in recital 50, which states that gatekeepers “should be prevented from implementing” measures that are “strictly necessary and proportionate” to effectively protect user security “as a default setting or as pre-installation.” What possible justification can there be for prohibiting providers from setting a “strictly necessary” security measure as a default? We can hope that this manifestly bizarre provision will be corrected in the final text, together with the other issues identified above.

Finally, there is the obligation “on interoperability of number-independent interpersonal communications services” from Art 7. Here, the DMA takes a different and much better approach to safeguarding user privacy and security. Art 7(3) states that:

The level of security, including the end-to-end encryption, where applicable, that the gatekeeper provides to its own end users shall be preserved across the interoperable services.

There may be some concern that the Commission or the courts will not treat this rule with sufficient seriousness. Ensuring that user security is not compromised by interoperability may take a long time and may require excluding many third-party services that had hoped to benefit from this DMA rule. Nonetheless, EU policymakers should resist watering down the standard of equivalence in security levels, even if it renders Art 7 a dead letter for the foreseeable future.

It is also worth noting that there will be no presumption of user opt-in to any interoperability scheme (Art 7(7)-(8)), which means that third-party service providers will not be able to simply “onboard” all users from a gatekeeper’s service without their explicit consent. This is to be commended.

Conclusion

Despite some improvements (the equivalence standard in Art 7(3) DMA), the current DMA language still betrays, as I noted previously, “a policy preference for privileging uncertain and speculative competition gains at the cost of introducing new and clear dangers to information privacy and security.” Jane Bambauer of the University of Arizona Law School came to similar conclusions in her analysis of the DMA, in which she warned:

EU lawmakers should be aware that the DMA is dramatically increasing the risk that data will be mishandled. Nevertheless, even though a new scandal from the DMA’s data interoperability requirement is entirely predictable, I suspect EU regulators will evade public criticism and claim that the gatekeeping platforms are morally and financially responsible.

The DMA’s text is not yet entirely finalized. It may still be possible to extend the approach adopted in Article 7(3) to other privacy-threatening rules, especially in Article 6. Such a requirement, that third-party service providers offer at least the same level of security as the gatekeepers, is eminently reasonable and is likely what users themselves would expect. Of course, there is always a risk that a safeguard of this kind will be effectively nullified in administrative or judicial practice, but this may not be very likely, given the importance that EU courts typically attach to privacy.

Banco Central do Brasil (BCB), Brazil’s central bank, launched a new real-time payment (RTP) system in November 2020 called Pix. Evangelists at the central bank hoped that Pix would offer a low-cost alternative to existing payments systems and would entice some of the country’s tens of millions of unbanked and underbanked adults into the banking system.

A recent review of Pix, published by the Bank for International Settlements, claims that the payment system has achieved these goals and that it is a model for other jurisdictions. The BIS review, however, appears to view Pix through rose-tinted spectacles. This is perhaps not surprising, given that the lead author runs the division of the central bank that developed Pix. In a critique published this week, I suggest that, when seen in full color, Pix looks a lot less pretty.

Among other things, the BIS review misconstrues the economics of payment networks. By ignoring the two-sided nature of such networks, the authors claim erroneously that payment cards impose a net economic cost. In fact, the evidence shows that payment cards generate net benefits. One study put their value added to the Brazilian economy at 0.17% of GDP.

The report also obscures the costs of the Pix system and fails to explain that, whereas private payment systems must recover their full operational cost, Pix appears to benefit from both direct and indirect subsidies. The direct subsidies come from the BCB, which incurred substantial costs in developing and promoting Pix and, unlike other central banks such as the U.S. Federal Reserve, is not required to recover all operational costs. Indirect subsidies come from the banks and other payment-service providers (PSPs), many of which have been forced by the BCB to provide Pix to their clients, even though doing so cannibalizes their other payment systems, including interchange fees earned from payment cards. 

Moreover, the BIS review mischaracterizes the role of interchange fees, which are often used to encourage participation in the payment-card network. In the case of debit cards, this often includes covering some or all of the operational costs of bank accounts. The availability of “free” bank accounts with relatively low deposit requirements offers customers incentives to open and maintain accounts. 

While the report notes that Pix has “signed up” 67% of adult Brazilians, it fails to mention that most of these were automatically enrolled by their banks, the majority of which were required by the BCB to adopt Pix. It also fails to mention that 33% of adult Brazilians have not “signed up” to Pix, nor that a recent survey found that more than 20% of adult Brazilians remain unbanked or underbanked, nor that the main reason given for not having a bank account was the cost of such accounts. Moreover, by diverting payments away from debit cards, Pix has reduced interchange fees and thereby reduced the ability of banks and other PSPs to subsidize bank accounts, which might otherwise have increased financial inclusion.  

The BIS review falsely asserts that “Big Tech” payment networks are able to establish and maintain market power. In reality, tech firms operate in highly competitive markets and have little to no market power in payment networks. Nonetheless, the report uses this claim regarding Big Tech’s alleged market power to justify imposing restrictions on the WhatsApp payment system. The irony, of course, is that by moving to prohibit the WhatsApp payment service shortly before the rollout of Pix, the BCB unfairly inhibited competition, effectively giving Pix a monopoly on RTP with the full support of the government. 

In acting as both a supplier of a payment service and the regulator of payment service providers, the BCB has a massive conflict of interest. Indeed, the BIS itself has recommended that, in cases where such conflicts might exist, it is good practice to ensure that the regulator is clearly separated from the supplier. Pix, in contrast, was developed and promoted by the same part of the bank as the payments regulator. 

Finally, the BIS report also fails to address significant security issues associated with Pix, including a dramatic rise in the number of “lightning kidnappings” in which hostages were forced to send funds to Pix addresses. 

[The following is a guest post from Andrew Mercado, a research assistant at the Mercatus Center at George Mason University and an adjunct professor and research assistant at George Mason’s Antonin Scalia Law School.]

Barry Schwartz’s seminal work “The Paradox of Choice” has received substantial attention since its publication nearly 20 years ago. In it, Schwartz argued that, faced with an ever-increasing plethora of products to choose from, consumers often feel overwhelmed and seek to limit the number of choices they must make.

In today’s online digital economy, a possible response to this problem is for digital platforms to use consumer data to present consumers with a “manageable” array of choices and thereby simplify their product selection. Appropriate “curation” of product-choice options may substantially benefit consumer welfare, provided that government regulators stay out of the way.   

New Research

In a new paper in the American Economic Review, Mark Armstrong and Jidong Zhou—of Oxford and Yale universities, respectively—develop a theoretical framework to understand how companies compete using consumer data. They conclude that different privacy regimes, by changing the amount of information a company can use to personalize recommendations, have real effects on consumer, producer, and total welfare.

The authors note that, at least in theory, there is an optimal situation that maximizes total welfare (scenario one). This is when a platform can aggregate information on consumers to such a degree that buyers and sellers are perfectly matched, leading to consumers buying their first-best option. While this can result in marginally higher prices, understandably leading to higher welfare for producers, search and mismatch costs are minimized by the platform, leading to a high level of welfare for consumers.

The highest level of aggregate consumer welfare comes when product differentiation is minimized (scenario two), leading to a high number of substitutes and low prices. This, however, comes with some level of mismatch. Since consumers are not matched with any recommendations, search costs are high and introduce some error. Some consumers may have had a higher level of welfare with an alternative product, but do not feel the negative effects of such mismatch because of the low prices. Therefore, consumer welfare is maximized, but producer welfare is significantly lower.

Finally, the authors propose a nearly welfare-optimal solution: a “top two-best” scheme (scenario three), whereby consumers are shown their two best options without explicit ranking. This nearly maximizes total welfare, since consumers are shown the best options for them and, even if the best match isn’t chosen, the second-best match is close in terms of welfare.

Implications

In cases of platform data aggregation and personalization, scenarios one, two, and three can be represented as different privacy regimes.

Scenario one (a personalized-product regime) is akin to unlimited data gathering, whereby platforms can use as much information as is available to perfectly suggest products based on revealed data. From a competition perspective, interfirm competition will tend to decrease under this regime, since product differentiation will be accentuated and substitutability will be masked. Since a single product will be shown as the “correct” product, the consumer will not want to shift to a different, welfare-inferior product, and firms have an incentive to produce ever-more-specialized products at relatively higher prices. Total welfare under this regime is maximized, with producers using their information to garner a relatively large share of economic surplus. Producers are effectively matched with consumers, and all gains from trade are realized.

Scenario two (a data-privacy regime) is one of near-perfect data privacy, whereby the platform is only able to recommend products based on general information, such as sales trends, new products, or product specifications. Under this regime, competition is maximized, since consumers consider a large pool of goods to be close substitutes. Differences in offered products are downplayed, which tends to reduce prices and increase quality, but at the cost of some consumer-product mismatch. For consumers who want a general product and a low price, this is likely the best option, since prices are low and competition is high. Consumers who want the best product match for their personal use case, however, will likely incur search costs, raising their total cost of product acquisition toward the cost under a personalized-product regime.

Scenario three (a curated-list regime) places defined guardrails on how the gathered information is displayed, while data collection itself works along the same lines as under the personalized-product regime. Platforms remain able to gather as much information as they desire in order to make a personalized recommendation, but they display an array of products representing the two (or, with tighter anti-preference rules, three to four) best-choice options. These options are displayed without ranking, allowing the consumer to choose from a curated list rather than a single product. The scenario-three regime has two effects on the market:

  1. It will tend to decrease prices through increased competition. Since firms can know only which consumers to target, not which consumers will ultimately choose their product, they must compete head-to-head with closely related products.
  2. It will likely spur innovation and increase competition from nascent competitors.

From an innovation perspective, firms will have to find better ways to differentiate themselves from the competition, increasing the probability that consumers acquire their products. As for nascent competitors, a new product has an increased chance of being picked when it ranks high enough to be included on the consumer’s curated list. In contrast, the probability of acquisition under scenario one’s personalized-product regime is low, since the new product must be a better match than other, existing products. Similarly, under scenario two’s data-privacy regime, there is so much product substitutability in the market that the probability of choosing any one new product is low.

Below is a list of how the regimes stack up:

  • Personalized-Product: Total welfare is maximized, but prices are relatively higher and competition is relatively lower than under a data-privacy regime.
  • Data-Privacy: Consumer welfare and competition are maximized, and prices are theoretically minimized, but at the cost of product mismatch. Consumers will face search costs that are not reflected in the prices paid.
  • Curated-List: Consumer welfare is higher and prices are lower than under a personalized-product regime and competition is lower than under a data-privacy regime, but total welfare is nearly optimal when considering innovation and nascent-competitor effects.
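
The welfare ranking in that list can be illustrated with a toy simulation. The sketch below is not Armstrong and Zhou’s model; the prices, the number of products, and the uniform match values are all invented for illustration. It simply shows how the three display rules trade off match quality (and thus total welfare) against the prices consumers pay:

```python
# Toy simulation (illustrative only; not the Armstrong-Zhou model).
# Consumers have random match values over products; we compare three
# display rules and report average consumer surplus and price paid.
import random

random.seed(0)
N_CONSUMERS, N_PRODUCTS = 10_000, 20

def simulate(regime: str) -> tuple[float, float]:
    surplus_total, price_total = 0.0, 0.0
    for _ in range(N_CONSUMERS):
        values = [random.uniform(0, 1) for _ in range(N_PRODUCTS)]
        if regime == "personalized":       # single best match, higher assumed price
            price = 0.60
            choice = max(values)
        elif regime == "privacy":          # untargeted assortment, low assumed price
            price = 0.30
            shown = random.sample(values, 4)
            choice = max(shown)            # consumer picks the best of what is shown
        else:                              # "curated": top two shown, mid price
            price = 0.45
            top_two = sorted(values)[-2:]
            choice = random.choice(top_two)  # unranked, so either may be picked
        surplus_total += choice - price
        price_total += price
    return surplus_total / N_CONSUMERS, price_total / N_CONSUMERS

for regime in ("personalized", "privacy", "curated"):
    cs, p = simulate(regime)
    print(f"{regime:>12}: avg consumer surplus {cs:.3f} at avg price {p:.3f}")
```

With these particular (assumed) numbers, the simulation reproduces the qualitative pattern above: the personalized rule yields the best matches but the lowest consumer surplus at its higher price, the privacy rule yields the highest consumer surplus but the worst matches, and the curated rule comes close to the personalized rule on match quality while leaving more surplus with consumers.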

Policy in Context

Applying these theoretical findings to fashion administrable policy prescriptions is understandably difficult. A far easier task is to evaluate the welfare effects of actual and proposed government privacy regulations in the economy. In that light, I briefly assess a recently enacted European data-platform privacy regime and U.S. legislative proposals that would restrict data usage under the guise of bans on “self-preferencing.” I then briefly note the beneficial implications of self-preferencing associated with the two theoretical data-usage scenarios (scenarios one and three) described above (scenario two, data privacy, effectively renders self-preferencing ineffective). 

GDPR

The European Union’s General Data Protection Regulation (GDPR)—among the most ambitious and all-encompassing data-privacy regimes to date—has significant negative ramifications for economic welfare. This regulation is most like the second scenario, whereby data collection and utilization are seriously restricted.

The GDPR diminishes competition through its restrictions on data collection and sharing, which reduce the competitive pressure platforms face. To gain a complete profile of a consumer for personalization, platforms cannot rely only on the data they collect themselves. To ensure a level of personalization that effectively reduces search costs for consumers, these platforms must be able to acquire data from a range of sources and aggregate that data to create a complete profile. Restrictions on aggregation are what lead to diminished competition online.

The GDPR grants consumers the right to choose both how their data is collected and how it is distributed. Not only do platforms themselves have obligations to ensure consumers’ wishes are met regarding their privacy, but firms that sell data to the platform are obligated to ensure the platform does not infringe consumers’ privacy through aggregation.

This creates a high regulatory burden for both the platform and the data seller and reduces the incentive to transfer data between firms. Since the data seller can be held liable for actions taken by the platform, this significantly increases the price at which the data seller will transfer the data. Because the seller now bears this regulatory risk, the price of data must incorporate a risk premium, reducing the demand for outside data.

This has the effect of decreasing the quality of personalization and tilting the scales toward larger platforms, which have more robust data-collection practices and are able to leverage economies of scale to absorb high regulatory-enforcement costs. The quality of personalization decreases because the platform has an incentive to build a consumption profile based only on activity it directly observes, without considering behavior occurring outside of the platform. Additionally, those platforms that are already entrenched and have large user bases are better able to manage the regulatory burden of the GDPR. One survey of U.S. companies with more than 500 workers found that 68% planned to spend between $1 million and $10 million in upfront costs to prepare for GDPR compliance, a figure that will likely pale in comparison to long-term compliance costs. For nascent competitors, this outlay of capital represents a significant barrier to entry.

Additionally, as previously discussed, consumers derive some benefit from platforms that can accurately recommend products. If this is the case, then large platforms with vast amounts of accumulated, first-party data will be the consumers’ destination of choice. This will tend to reduce the ability for smaller firms to compete, simply because they do not have access to the same scale of data as the large platforms when data cannot be easily transferred between parties.

Self-Preferencing

Claims of anticompetitive behavior by platforms are abundant (e.g., see here and here), and they often focus on the concept of self-preferencing. Self-preferencing refers to when a company uses its economies of scale, scope, or a combination of the two to offer products at a lower price through an in-house brand. In decrying self-preferencing, many commentators and politicians point to an alleged “unfair advantage” in tech platforms’ ability to leverage data and personalization to drive traffic toward their own products.

It is far from clear, however, that this practice reduces consumer welfare. Indeed, numerous commentaries (e.g., see here and here) circulated since the introduction of anti-preferencing bills in the U.S. Congress (House; Senate) have rejected the notion that self-preferencing is anti-competitive or anti-consumer.

There are good reasons to believe that self-preferencing promotes both competition and consumer welfare. Assume that a company that manufactures or contracts for its own, in-house products can offer them at a marginally lower price for the same relative quality. This decrease in price raises consumer welfare. The in-house brand’s entrance into the market represents a potent competitive threat to firms already producing products, which in turn now have an incentive to lower their own prices, raise the quality of their goods, or both, in order to maintain their consumer base. This creates even more consumer welfare, since all consumers, not just the ones purchasing the in-house goods, are better off from the entrance of an in-house brand.

It therefore follows that the entrance of an in-house brand and self-preferencing in the data-utilizing regimes discussed above has the potential to enhance consumer welfare.

In general, the use of data analysis on the platform can allow for targeted product entrance into certain markets. If the platform believes it can make a product of similar quality for a lower price, then it will enter that market and consumers will be able to choose a comparable product for a lower price. (If the company does not believe it is able to produce such a product, it will not enter the market with an in-house brand, and consumer welfare will stay the same.) Consumer welfare will further rise as firms producing products that compete against the in-house brand will innovate to compete more effectively.

To be sure, under a personalized-product regime (scenario one), platforms may appear to have an incentive to self-preference to the detriment of consumers. If consumers trust the platform to show the greatest welfare-producing product before the emergence of an in-house brand, the platform may use this consumer trust to its advantage and suggest its own, potentially consumer-welfare-inferior product instead of a competitor’s welfare-superior product. In such a case, consumer welfare may decrease in the face of an in-house brand’s entrance.

The extent of any such welfare loss, however, may be ameliorated (or eliminated entirely) by the platform’s concern that an unexpectedly low level of house-brand product quality will diminish its reputation. Such a reputational loss could come about due to consumer disappointment, plus the efforts of platform rivals to highlight the in-house product’s inferiority. As such, the platform might decide to enhance the quality of its “inferior” in-house offering, or refrain from offering an in-house brand at all.

A curated-list regime (scenario three) is unequivocally consumer-welfare beneficial. Under such a regime, consumers will be shown several more options (a “manageable” number intended to minimize consumer-search costs) than under a personalized-product regime. Consumers can actively compare the offerings from different firms to determine the correct product for their individual use. In this case, there is no incentive to self-preference to the detriment of the consumer, as the consumer is able to make value judgements between the in-house brand and the alternatives.

If the in-house brand is significantly lower in price, but also lower in quality, consumers may not see the two as interchangeable and steer away from the in-house brand. The same follows when the in-house brand is higher in both price and quality. The only instance where the in-house brand has a strong chance of success is when the price is lower than and the quality is greater than competing products. This will tend to increase consumer welfare. Additionally, the entrance of consumer-welfare-superior products into a competitive market will encourage competing firms to innovate and lower prices or raise quality, again increasing consumer welfare for all consumers.
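
That logic can be seen in a stylized sketch (invented prices and qualities, and a simple linear utility that is only an assumption): an in-house product that dominates its rivals on both price and quality is chosen no matter how heavily a given consumer weights quality, whereas a product that merely trades quality for price wins only with price-sensitive consumers, which is precisely the value judgment a curated list leaves to the consumer.

```python
# Toy illustration of consumer choice from a curated list (invented numbers).
# Each consumer weighs quality differently; the in-house brand reliably wins
# only when it dominates rivals on BOTH price and quality.
Product = tuple[str, float, float]  # (name, price, quality)

def best_choice(products: list[Product], quality_weight: float) -> str:
    # Simple linear utility: weighted quality minus price.
    return max(products, key=lambda p: quality_weight * p[2] - p[1])[0]

rivals: list[Product] = [("rival_A", 10.0, 7.0), ("rival_B", 12.0, 8.0)]

for in_house in [("in_house_dominant", 9.0, 8.5),   # cheaper AND better than rivals
                 ("in_house_tradeoff", 8.0, 5.5)]:  # cheaper but lower quality
    picks = [best_choice(rivals + [in_house], w) for w in (0.5, 1.0, 1.5, 2.0)]
    print(in_house[0], "->", picks)
```

With these invented figures, the dominant in-house product is picked at every quality weight, while the trade-off product is picked only by the more price-sensitive consumers (the lower weights), with quality-focused consumers staying with the rivals.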

Conclusion

What effects do digital platform-data policies have on consumer welfare? As a matter of theory, if providing an increasing number of product choices does not tend to increase consumer welfare, then do reductions in prices or increases in quality? What about precise targeting of personal-product choices? How about curation—the idea that a consumer raises his or her level of certainty by outsourcing decision-making to a platform that chooses a small set of products for the consumer’s consideration at any given moment? Apart from these theoretical questions, is the current U.S. legal treatment of platform data usage doing a generally good job of promoting consumer welfare? Finally, considering this overview, are new government interventions in platform data policy likely to benefit or harm consumers?

Recently published economic research develops theoretical scenarios that demonstrate how digital platform curation of consumer data may facilitate welfare-enhancing consumer-purchase decisions. At least implicitly, this research should give pause to proponents of major new restrictions of platform data usage.

Furthermore, a review of actual and proposed regulatory restrictions underscores the serious welfare harm of government meddling in digital platform-data usage.   

After the first four years of GDPR, it is clear that there have been significant negative unintended consequences stemming from omnibus privacy regulation. Competition has decreased, regulatory barriers to entry have increased, and consumers are marginally worse off. Since companies are less able and willing to leverage data in their operations and service offerings—due in large part to the risk of hefty fines—they are less able to curate and personalize services to consumers.

Additionally, anti-preferencing bills in the United States threaten to suppress the proper functioning of platform markets and reduce consumer welfare by making the utilization of data in product-market decisions illegal. More research is needed to determine the aggregate welfare effects of such preferencing on platforms, but all early indications point to the fact that consumers are better off when an in-house brand enters the market and increases competition.

Furthermore, current U.S. government policy, which generally allows platforms to use consumer data freely, is good for consumer welfare. Indeed, the consumer-welfare benefits generated by digital platforms, which depend critically on large volumes of data, are enormous. This is documented in a well-reasoned Harvard Business Review article (by an MIT professor and his student) that utilizes online choice experiments based on digital-survey techniques.

The message is clear. Governments should avoid new regulatory meddling in digital platform consumer-data usage practices. Such meddling would harm consumers and undermine the economy.

The Federal Trade Commission (FTC) is at it again, threatening new sorts of regulatory interventions in the legitimate welfare-enhancing activities of businesses—this time in the realm of data collection by firms.

Discussion

In an April 11 speech at the International Association of Privacy Professionals’ Global Privacy Summit, FTC Chair Lina Khan set forth a litany of harms associated with companies’ data-acquisition practices. Certainly, fraud and deception with respect to the use of personal data has the potential to cause serious harm to consumers and is the legitimate target of FTC enforcement activity. At the same time, the FTC should take into account the substantial benefits that private-sector data collection may bestow on the public (see, for example, here, here, and here) in order to formulate economically beneficial law-enforcement protocols.

Chair Khan’s speech, however, paid virtually no attention to the beneficial side of data collection. To the contrary, after highlighting specific harmful data practices, Khan then waxed philosophical in condemning private data-collection activities (citations omitted):

Beyond these specific harms, the data practices of today’s surveillance economy can create and exacerbate deep asymmetries of information—exacerbating, in turn, imbalances of power. As numerous scholars have noted, businesses’ access to and control over such vast troves of granular data on individuals can give those firms enormous power to predict, influence, and control human behavior. In other words, what’s at stake with these business practices is not just one’s subjective preference for privacy, but—over the long term—one’s freedom, dignity, and equal participation in our economy and society.

Even if one accepts that private-sector data practices have such transcendent social implications, are the FTC’s philosopher kings ideally equipped to devise optimal policies that promote “freedom, dignity, and equal participation in our economy and society”? Color me skeptical. (Indeed, one could argue that the true transcendent threat to society from fast-growing data collection comes not from businesses but, rather, from the government, which, unlike private businesses, holds a legal monopoly on the right to use or authorize the use of force. This question is, however, beyond the scope of my comments.)

Chair Khan turned from these highfalutin musings to a more prosaic and practical description of her plans for “adapting the commission’s existing authority to address and rectify unlawful data practices.” She stressed “focusing on firms whose business practices cause widespread harm”; “assessing data practices through both a consumer protection and competition lens”; and “designing effective remedies that are informed by the business strategies that specific markets favor and reward.” These suggestions are not inherently problematic, but they need to be fleshed out in far greater detail. For example, there are potentially major consumer-protection risks posed by applying antitrust to “big data” problems (see here, here and here, for example).

Khan ended her presentation by inviting us “to consider how we might need to update our [FTC] approach further yet.” Her suggested “updates” raise significant problems.

First, she stated that the FTC “is considering initiating a rulemaking to address commercial surveillance and lax data security practices.” Even assuming such a rulemaking could withstand legal scrutiny (its best shot would be to frame it as a consumer protection rule, not a competition rule), it would pose additional serious concerns. One-size-fits-all rules prevent consideration of possible economic efficiencies associated with specific data-security and surveillance practices. Thus, some beneficial practices would be wrongly condemned. Such rules would also likely deter firms from experimenting and innovating in ways that could have led to improved practices. In both cases, consumer welfare would suffer.

Second, Khan asserted “the need to reassess the frameworks we presently use to assess unlawful conduct. Specifically, I am concerned that present market realities may render the ‘notice and consent’ paradigm outdated and insufficient.” Accordingly, she recommended that “we should approach data privacy and security protections by considering substantive limits rather than just procedural protections, which tend to create process requirements while sidestepping more fundamental questions about whether certain types of data collection should be permitted in the first place.”  

In support of this startling observation, Khan approvingly cites Daniel Solove’s article “The Myth of the Privacy Paradox,” which claims that “[t]he fact that people trade their privacy for products or services does not mean that these transactions are desirable in their current form. … [T]he mere fact that people make a tradeoff doesn’t mean that the tradeoff is fair, legitimate, or justifiable.”

Khan provides no economic justification for a data-collection ban. The implication that the FTC would consider banning certain types of otherwise legal data collection is at odds with free-market principles and would have disastrous economic consequences for both consumers and producers. It strikes at voluntary exchange, a basic principle of market economics that benefits transactors and enables markets to thrive.

Businesses monetize information provided by consumers to offer a host of goods and services that satisfy consumer interests. This is particularly true in the case of digital platforms. Preventing the voluntary transfer of data from consumers to producers based on arbitrary government concerns about “fairness” (for example) would strike at firms’ ability to monetize data and thereby generate additional consumer and producer surplus. The arbitrary destruction of such potential economic value by government fiat would be the essence of “unfairness.”

In particular, the consumer welfare benefits generated by digital platforms, which depend critically on large volumes of data, are enormous. As Erik Brynjolfsson of the Massachusetts Institute of Technology and his student Avinash Collis explained in a December 2019 article in the Harvard Business Review, such benefits far exceed those measured by conventional GDP. Online choice experiments based on digital-survey techniques enabled the authors “to estimate the consumer surplus for a great variety of goods, including free ones that are missing from GDP statistics.” Brynjolfsson and Collis found, for example, that U.S. consumers derived $231 billion in value from Facebook since its inception in 2004. Furthermore:

[O]ur estimates indicate that the [Facebook] platform generates a median consumer surplus of about $500 per person annually in the United States, and at least that much for users in Europe. In contrast, average revenue per user is only around $140 per year in United States and $44 per year in Europe. In other words, Facebook operates one of the most advanced advertising platforms, yet its ad revenues represent only a fraction of the total consumer surplus it generates. This reinforces research by NYU Stern School’s Michael Spence and Stanford’s Bruce Owen that shows that advertising revenues and consumer surplus are not always correlated: People can get a lot of value from content that doesn’t generate much advertising, such as Wikipedia or email. So it is a mistake to use advertising revenues as a substitute for consumer surplus…

In a similar vein, the authors found that various user-fee-based digital services yield consumer surplus five to ten times what users paid to access them. What’s more:

The effect of consumer surplus is even stronger when you look at categories of digital goods. We conducted studies to measure it for the most popular categories in the United States and found that search is the most valued category (with a median valuation of more than $17,000 a year), followed by email and maps. These categories do not have comparable off-line substitutes, and many people consider them essential for work and everyday life. When we asked participants how much they would need to be compensated to give up an entire category of digital goods, we found that the amount was higher than the sum of the value of individual applications in it. That makes sense, since goods within a category are often substitutes for one another.

In sum, the authors found:

To put the economic contributions of digital goods in perspective, we find that including the consumer surplus value of just one digital good—Facebook—in GDP would have added an average of 0.11 percentage points a year to U.S. GDP growth from 2004 through 2017. During this period, GDP rose by an average of 1.83% a year. Clearly, GDP has been substantially underestimated over that time.
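
As a back-of-the-envelope check on what these figures imply (my arithmetic, not the authors’), the surplus-to-revenue ratio for U.S. Facebook users and the cumulative effect of an extra 0.11 percentage points of annual growth over 2004–2017 work out roughly as follows:

```python
# Back-of-the-envelope arithmetic based on the figures quoted above
# (illustrative; the underlying estimates are Brynjolfsson and Collis's).
us_surplus_per_user = 500    # median consumer surplus, USD per year
us_revenue_per_user = 140    # average ad revenue per user, USD per year
print(f"Surplus-to-revenue ratio: {us_surplus_per_user / us_revenue_per_user:.1f}x")

# Cumulative effect of adding 0.11 percentage points to annual GDP growth
# over roughly 14 years (2004-2017), starting from measured growth of 1.83%/year.
years = 14
measured = (1 + 0.0183) ** years
adjusted = (1 + 0.0183 + 0.0011) ** years
print(f"Extra cumulative measured output over {years} years: "
      f"{(adjusted / measured - 1) * 100:.1f}%")
```

With those inputs, the median U.S. user’s surplus is roughly 3.6 times Facebook’s per-user ad revenue, and the extra 0.11 points compound to about 1.5% of additional measured output over the period.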

Although far from definitive, this research illustrates how a digital-services model, based on voluntary data transfer and accumulation, has brought about enormous economic welfare benefits. Accordingly, FTC efforts to tamper with such a success story on abstruse philosophical grounds not only would be unwarranted, but would be economically disastrous. 

Conclusion

The FTC clearly plans to focus on “abuses” in private-sector data collection and usage. In so doing, it should home in on those practices that impose clear harm on consumers, particularly in the areas of deception and fraud. It is not, however, the FTC’s role to restructure data-collection activities by regulatory fiat, through far-reaching inflexible rules and, worst of all, through efforts to ban collection of “inappropriate” information.

Such extreme actions would predictably impose substantial harm on consumers and producers. They would also slow innovation in platform practices and retard efficient welfare-generating business initiatives tied to the availability of broad collections of data. Eventually, the courts would likely strike down most harmful FTC data-related enforcement and regulatory initiatives, but substantial welfare losses (including harm due to a chilling effect on efficient business conduct) would be borne by firms and consumers in the interim. In short, the enforcement “updates” Khan recommends would reduce economic welfare—the opposite of what (one assumes) is intended.

For these reasons, the FTC should reject the chair’s overly expansive “updates.” It should instead make use of technologists, economists, and empirical research to unearth and combat economically harmful data practices. In doing so, the commission should pay attention to cost-benefit analysis and error-cost minimization. One can only hope that Khan’s fellow commissioners promptly endorse this eminently reasonable approach.