A White House administration typically announces major new antitrust initiatives in the fall and spring, and this year is no exception. Senior Biden administration officials kicked off the fall season at Fordham Law School (more on that below) by shedding additional light on their plans to expand the accepted scope of antitrust enforcement.

Their aggressive enforcement statements draw headlines, but will the administration’s neo-Brandeisians actually notch enforcement successes? The prospects are cloudy, to say the least.

The U.S. Justice Department (DOJ) has lost some cartel cases in court this year (when was the last time that happened?) and, on Sept. 19, a federal judge rejected the DOJ’s attempt to enjoin UnitedHealth’s $13.8 billion bid for Change Healthcare. The Federal Trade Commission (FTC) recently lost two merger challenges before its in-house administrative law judge. It now faces a challenge to its administrative-enforcement processes before the U.S. Supreme Court (the Axon case, to be argued in November).

(Incidentally, on the other side of the Atlantic, the European Commission has faced some obstacles itself. Despite its recent Google victory, the Commission has effectively lost two abuse-of-dominance cases this year—the Intel and Qualcomm matters—before the European General Court.)

So, are the U.S. antitrust agencies chastened? Will they now go back to basics? Far from it. They are enthusiastically announcing plans to charge ahead, asserting theories of antitrust violations that have not been taken seriously for decades, if ever. Whether this turns out to be wise enforcement policy remains to be seen, but color me highly skeptical. Let’s take a quick look at some of the big enforcement-policy ideas that are being floated.

Fordham Law’s Antitrust Conference

Admiral David Farragut’s order “Damn the torpedoes, full speed ahead!” was key to the Union Navy’s August 1864 victory in the Battle of Mobile Bay, a decisive Civil War clash. Perhaps inspired by this display of risk-taking, the heads of the two federal antitrust agencies—DOJ Assistant Attorney General (AAG) Jonathan Kanter and FTC Chair Lina Khan—took a “damn the economics, full speed ahead” attitude in remarks at the Sept. 16 session of Fordham Law School’s 49th Annual Conference on International Antitrust Law and Policy. Special Assistant to the President Tim Wu was also on hand and emphasized the “all of government” approach to competition policy adopted by the Biden administration.

In his remarks, AAG Kanter seemed to endorse a “monopoly broth” argument in decrying the current “Whac-a-Mole” approach to monopolization cases. The intent may be to lessen the burden of proof of anticompetitive effects, or to bring together a string of actions taken jointly as evidence of a Section 2 violation. Such an approach, however, carries a serious risk that efficiency-seeking actions will be mistaken for exclusionary tactics and incorrectly included in the broth. (Notably, the U.S. Court of Appeals for the D.C. Circuit’s 2001 Microsoft opinion avoided the monopoly-broth problem by separately discussing specific company actions and weighing them on their individual merits, not as part of a general course of conduct.)

Kanter also recommended going beyond “our horizontal and vertical framework” in merger assessments, despite the fact that vertical mergers (involving complements) are far less likely to be anticompetitive than horizontal mergers (involving substitutes).

Finally, and perhaps most problematically, Kanter endorsed the American Innovation and Choice Online Act (AICOA), citing the protection it would afford “would-be competitors” (but what about consumers?). In so doing, the AAG ignored the fact that AICOA would prohibit welfare-enhancing business conduct and could be harmfully construed to ban mere harm to rivals (see, for example, Stanford professor Doug Melamed’s trenchant critique).

Chair Khan’s presentation, which called for a far-reaching “course correction” in U.S. antitrust, was bolder and more alarming still. She announced plans for a new FTC Act Section 5 “unfair methods of competition” (UMC) policy statement centered on bringing “standalone” cases not reachable under the antitrust laws. Such cases would not consider any potential efficiencies and would not be subject to the rule of reason. Endorsing that approach amounts to an admission that economic analysis will not play a serious role in future FTC UMC assessments (a posture that likely will cause FTC filings to be viewed skeptically by federal judges).

In noting the imminent release of new joint DOJ-FTC merger guidelines, Khan implied that they would be animated by an anti-merger philosophy. She cited “[l]awmakers’ skepticism of mergers” and congressional rejection “of economic debits and credits” in merger law. Khan thus asserted that prior agency merger guidance had departed from the law. I doubt, however, that many courts will be swayed by this “economics-free” anti-merger revisionism.

Tim Wu’s remarks closing the Fordham conference had a “big picture” orientation. In an interview with GW Law’s Bill Kovacic, Wu briefly described the Biden administration’s “whole of government” approach, embodied in President Joe Biden’s July 2021 Executive Order on Promoting Competition in the American Economy. While the order’s notion of breaking down existing barriers to competition across the American economy is eminently sound, many of those barriers are caused by government restrictions (not business practices) that are not even alluded to in the order.

Moreover, in many respects, the order seeks to reregulate industries, misdiagnosing many phenomena as business abuses that actually represent efficient free-market practices (as explained by Howard Beales and Mark Jamison in a Sept. 12 Mercatus Center webinar that I moderated). In reality, the order may prove to be on net harmful, rather than beneficial, to competition.

Conclusion

What is one to make of the enforcement officials’ bold interventionist screeds? What seems to be missing in their presentations is a dose of humility and pragmatism, as well as appreciation for consumer welfare (scarcely mentioned in the agency heads’ presentations). It is beyond strange to see agencies that are having problems winning cases under conventional legal theories floating novel far-reaching initiatives that lack a sound economics foundation.

It is also amazing to observe the downplaying of consumer welfare by agency heads, given that, since 1979 (in Reiter v. Sonotone), the U.S. Supreme Court has described antitrust as a “consumer welfare prescription.” Unless there is fundamental change in the makeup of the federal judiciary (and, in particular, the Supreme Court) in the very near future, the new unconventional theories are likely to fail—and fail badly—when tested in court. 

Bringing new sorts of cases to test enforcement boundaries is, of course, an entirely defensible role for U.S. antitrust leadership. But can the same thing be said for bringing “non-boundary” cases based on theories that would have been deemed far beyond the pale by both Republican and Democratic officials just a few years ago? Buckle up: it looks as if we are going to find out. 

The practice of so-called “self-preferencing” has come to embody the zeitgeist of competition policy for digital markets, as legislative initiatives are undertaken in jurisdictions around the world that seek, in various ways, to constrain large digital platforms from granting favorable treatment to their own goods and services. The core concern cited by policymakers is that gatekeepers may abuse their dual role—as both an intermediary and a trader operating on the platform—to pursue a strategy of biased intermediation that entrenches their power in core markets (defensive leveraging) and extends it to associated markets (offensive leveraging).

In addition to active interventions by lawmakers, self-preferencing has also emerged as a new theory of harm before European courts and antitrust authorities. Should antitrust enforcers be allowed to pursue such a theory, they would gain significant leeway to bypass the legal standards and evidentiary burdens traditionally required to prove that a given business practice is anticompetitive. This should be of particular concern, given the broad range of practices and types of exclusionary behavior that could be characterized as self-preferencing—only some of which may, in some specific contexts, include exploitative or anticompetitive elements.

In a new working paper for the International Center for Law & Economics (ICLE), I provide an overview of the relevant traditional antitrust theories of harm, as well as the emerging case law, to analyze whether and to what extent self-preferencing should be considered a new standalone offense under EU competition law. The experience to date in European case law suggests that courts have been able to address platforms’ self-preferencing practices under existing theories of harm, and that self-preferencing may not be sufficiently novel to constitute a standalone theory of harm.

European Case Law on Self-Preferencing

Practices by digital platforms that might be deemed self-preferencing first garnered significant attention from European competition enforcers with the European Commission’s Google Shopping investigation, which examined whether the search engine’s results pages positioned and displayed its own comparison-shopping service more favorably than the websites of rival comparison-shopping services. According to the Commission’s findings, Google’s conduct fell outside the scope of competition on the merits and could have the effect of extending Google’s dominant position in the national markets for general Internet search into adjacent national markets for comparison-shopping services, in addition to protecting Google’s dominance in its core search market.

Rather than explicitly posit that self-preferencing (a term the Commission did not use) constituted a new theory of harm, the Google Shopping ruling described the conduct as belonging to the well-known category of “leveraging.” The Commission therefore did not need to promulgate a new legal test, as it held that the conduct fell under a well-established form of abuse. The case did, however, spur debate over whether the legal tests the Commission did apply effectively imposed on Google a principle of equal treatment of rival comparison-shopping services.

But it should be noted that conduct similar to that alleged in the Google Shopping investigation actually came before the High Court of England and Wales several months earlier, in a dispute between Google and Streetmap. At issue in that case were the favorable search results Google granted to its own maps, rather than to competing online maps. The UK Court held, however, that the complaint was more appropriately characterized as an allegation of discrimination; it further found that Google’s conduct did not constitute anticompetitive foreclosure. A similar result was reached in May 2020 by the Amsterdam Court of Appeal in the Funda case.

Conversely, in June 2021, the French Competition Authority (AdlC) followed the European Commission in investigating Google’s practices in the digital-advertising sector. Like the Commission, the AdlC did not explicitly refer to self-preferencing, instead describing the conduct as “favoring.”

Given this background and the proliferation of approaches taken by courts and enforcers to address similar conduct, there was significant anticipation for the judgment that the European General Court would ultimately render in the appeal of the Google Shopping ruling. While the General Court upheld the Commission’s decision, it framed self-preferencing as a discriminatory abuse. Further, the Court outlined four criteria that differentiated Google’s self-preferencing from competition on the merits.

Specifically, the Court highlighted the “universal vocation” of Google’s search engine—that it is open to all users and designed to index results containing any possible content; the “superdominant” position that Google holds in the market for general Internet search; the high barriers to entry in the market for general search services; and what the Court deemed Google’s “abnormal” conduct—behaving in a way that defied expectations, given a search engine’s business model, and that changed after the company launched its comparison-shopping service.

While the precise contours of what the Court might consider discriminatory abuse aren’t yet clear, the decision’s listed criteria appear to be narrow in scope. This stands at odds with the much broader application of self-preferencing as a standalone abuse, both by the European Commission itself and by some national competition authorities (NCAs).

Indeed, just a few weeks after the General Court’s ruling, the Italian Competition Authority (AGCM) handed down a mammoth fine against Amazon over preferential treatment granted to third-party sellers who use the company’s own logistics and delivery services. Rather than reflecting the qualified set of criteria laid out by the General Court, the Italian decision was clearly inspired by the Commission’s approach in Google Shopping. Where the Commission described self-preferencing as a new form of leveraging abuse, AGCM characterized Amazon’s practices as tying.

Self-preferencing has also been raised as a potential abuse in the context of data and information practices. In November 2020, the European Commission sent Amazon a statement of objections detailing its preliminary view that the company had infringed antitrust rules by making systematic use of non-public business data, gathered from independent retailers who sell on Amazon’s marketplace, to advantage the company’s own retail business. (Amazon responded with a set of commitments currently under review by the Commission.)

Both the Commission and the U.K. Competition and Markets Authority have lodged similar allegations against Facebook over data gathered from advertisers and then used to compete with those advertisers in markets in which Facebook is active, such as classified ads. The Commission’s antitrust proceeding against Apple over its App Store rules likewise highlights concerns that the company may use its platform position to obtain valuable data about the activities and offers of its competitors, while competing developers may be denied access to important customer data.

These enforcement actions brought by NCAs and the Commission appear at odds with the more bounded criteria set out by the General Court in Google Shopping, and raise tremendous uncertainty regarding the scope and definition of the alleged new theory of harm.

Self-Preferencing, Platform Neutrality, and the Limits of Antitrust Law

The growing tendency to invoke self-preferencing as a standalone theory of antitrust harm could serve two significant goals for European competition enforcers. As mentioned earlier, it offers a convenient shortcut that could allow enforcers to skip the legal standards and evidentiary burdens traditionally required to prove anticompetitive behavior. Moreover, it can function, in practice, as a means to impose a neutrality regime on digital gatekeepers, with the aims of both ensuring a level playing field among competitors and neutralizing the potential conflicts of interests implicated by dual-mode intermediation.

The dual roles performed by some platforms continue to fuel the never-ending debate over vertical integration, as well as related concerns that, by giving preferential treatment to its own products and services, an integrated provider may leverage its dominance in one market into related markets. From this perspective, self-preferencing is an inevitable byproduct of the emergence of ecosystems.

However, as the Australian Competition and Consumer Commission has recognized, self-preferencing conduct is “often benign.” Furthermore, the total value generated by an ecosystem depends on the activities of independent complementors. Those activities are not completely under the platform’s control, although the platform is required to establish and maintain the governance structures regulating access to and interactions around that ecosystem.

Given this reality, a complete ban on self-preferencing may call the very existence of ecosystems into question, challenging their design and monetization strategies. Preferential treatment can take many different forms with many different potential effects, all stemming from platforms’ many different business models. This counsels for a differentiated, case-by-case, and effects-based approach to assessing the alleged competitive harms of self-preferencing.

Antitrust law does not impose on platforms a general duty to ensure neutrality by sharing their competitive advantages with rivals. Moreover, possessing a competitive advantage does not automatically equal an anticompetitive effect. As the European Court of Justice recently stated in Servizio Elettrico Nazionale, competition law is not intended to protect the competitive structure of the market, but rather to protect consumer welfare. Accordingly, not every exclusionary effect is detrimental to competition. Distinctions must be drawn between foreclosure and anticompetitive foreclosure, as only the latter may be penalized under antitrust.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

Philip K. Dick’s novella “The Minority Report” describes a futuristic world without crime. This state of the world is achieved thanks to the visions of three mutants—so-called “precogs”—who predict crimes before they occur, thereby enabling law enforcement to incarcerate people for crimes they were going to commit.

This utopia unravels when the protagonist—the head of the police Precrime division, who is himself predicted to commit a murder—learns that the precogs often produce “minority reports”: i.e., visions of the future that differ from one another. The existence of these alternate potential futures undermines the very foundations of Precrime. For every crime that is averted, an innocent person may be convicted of a crime they were not going to commit.

You might be wondering what any of this has to do with antitrust and last week’s Truth on the Market symposium on Antitrust’s Uncertain Future. Given the recent adoption of the European Union’s Digital Markets Act (DMA) and the prospect that Congress could soon vote on the American Innovation and Choice Online Act (AICOA), we asked contributors to write short pieces describing what the future might look like—for better or worse—under these digital-market regulations, or in their absence.

The resulting blog posts offer a “minority report” of sorts. Together, they dispel the myth that these regulations would necessarily give rise to a brighter future of intensified competition, innovation, and improved online services. To the contrary, our contributors cautioned—albeit with varying degrees of severity—that these regulations create risks that policymakers should not ignore.

The Majority Report

If policymakers like European Commissioner for Competition Margrethe Vestager, Federal Trade Commission Chair Lina Khan, and Sen. Amy Klobuchar (D-Minn.) are to be believed, a combination of tougher regulations and heightened antitrust enforcement is the only way to revitalize competition in digital markets. As Klobuchar argues on her website:

To ensure our future economic prosperity, America must confront its monopoly power problem and restore competitive markets. … [W]e must update our antitrust laws for the twenty-first century to protect the competitive markets that are the lifeblood of our economy.

Speaking of the recently passed DMA, Vestager suggested the regulation could spark an economic boom, drawing parallels with the Renaissance:

The work we put into preserving and strengthening our Single Market will equip us with the means to show the world that our path based on open trade and fair competition is truly better. After all, Bruges did not become great by conquest and ruthless occupation. It became great through commerce and industry.

Several antitrust scholars have been similarly bullish about the likely benefits of such regulations. For instance, Fiona Scott Morton, Steven Salop, and David Dinielli write that:

It is an appropriate expression of democracy for Congress to enact pro-competitive statutes to maintain the vibrancy of the online economy and allow for continued innovation that benefits non-platform businesses as well as end users.

In short, there is a widespread belief that such regulations would make the online world more competitive and innovative, to the benefit of consumers.

The Minority Reports

To varying degrees, the responses to our symposium suggest proponents of such regulations may be falling prey to what Harold Demsetz called “the nirvana fallacy”: comparing imperfect real-world markets against an idealized regulatory alternative, and wrongly assuming that the resulting enforcement would be costless and painless for consumers.

Even the symposium’s pieces belonging to the literary realms of sci-fi and poetry shed a powerful light on the deep-seated problems that underlie contemporary efforts to make online industries “more contestable and fair.” As several scholars highlighted, such regulations may prevent firms from designing new and improved products, or from maintaining existing ones. Among my favorite passages was this excerpt from Daniel Crane’s fictional piece about a software engineer in Helsinki trying to integrate restaurant and hotel ratings into a vertical search engine:

“We’ve been watching how you’re coding the new walking tour search vertical. It seems that you are designing it to give preference to restaurants, cafés, and hotels that have been highly rated by the Tourism Board.”

“Yes, that’s right. Restaurants, cafés, and hotels that have been rated by the Tourism Board are cleaner, safer, and more convenient. That’s why they have been rated.”

“But you are forgetting that the Tourism Board is one of our investors. This will be considered self-preferencing.”

Along similar lines, Thom Lambert observed that:

Even if a covered platform could establish that a challenged practice would maintain or substantially enhance the platform’s core functionality, it would also have to prove that the conduct was “narrowly tailored” and “reasonably necessary” to achieve the desired end, and, for many behaviors, the “le[ast] discriminatory means” of doing so. That is a remarkably heavy burden…. It is likely, then, that AICOA would break existing products and services and discourage future innovation.

Several of our contributors voiced fears that bans on self-preferencing would prevent platforms from acquiring startups that complement their core businesses, thus making it harder to launch new services and deterring startup investment. For instance, in my alternate history post, I argued that such bans might have prevented Google’s purchase of Android, thus reducing competition in the mobile phone industry.

A second important objection was that self-preferencing bans are hard to apply consistently. Policymakers would notably have to draw lines between the different components that make up an economic good. As Ramsi Woodcock wrote in a poem:

You: The meaning of component,
We can always redefine.
From batteries to molecules,
We can draw most any line.

This lack of legal certainty will prove hard to resolve. Geoffrey Manne noted that regulatory guidelines were unlikely to be helpful in this regard:

Indeed, while laws are sometimes purposefully vague—operating as standards rather than prescriptive rules—to allow for more flexibility, the concepts introduced by AICOA don’t even offer any cognizable standards suitable for fine-tuning.

Alden Abbott was similarly concerned about the vague language that underpins AICOA:

There is, however, one inescapable reality—as night follows day, passage of AICOA would usher in an extended period of costly litigation over the meaning of a host of AICOA terms. … The history of antitrust illustrates the difficulties inherent in clarifying the meaning of novel federal statutory language. It was not until 21 years after passage of the Sherman Antitrust Act that the Supreme Court held that Section 1 of the act’s prohibition on contracts, combinations, and conspiracies “in restraint of trade” only covered unreasonable restraints of trade.

Our contributors also argued that bans on self-preferencing and interoperability mandates might be detrimental to users’ online experience. Lazar Radic and Friso Bostoen both wrote pieces taking readers through a typical day in worlds where self-preferencing is prohibited. Neither was particularly utopian. In his satirical piece, Radic imagined an online shopping experience where all products are given equal display:

“Time to do my part,” I sigh. My eyes—trained by years of practice—dart from left to right and from right to left, carefully scrutinizing each coffee capsule on offer for an equal number of seconds. … After 13 brands and at least as many flavors, I select the platform’s own brand, “Basic”… and then answer a series of questions to make sure I have actually given competitors’ products fair consideration.

Closer to the world we live in, Friso Bostoen described how going through a succession of choice screens—a likely outcome of regulations such as AICOA and the DMA—would be tiresome for consumers:

A new fee structure… God, save me from having to tap ‘learn more’ to find out what that means. I’ve had to learn more about the app ecosystem than is good for me already.

Finally, our symposium highlighted several other ways in which poorly designed online regulations may harm consumers. Stephen Dnes concluded that mandatory data-sharing regimes will deter companies from producing valuable data in the first place. Julie Carlson argued that prohibiting platforms from preferencing their own goods would disproportionately harm low-income consumers. And Aurelien Portuese surmised that, if passed into law, AICOA would dampen firms’ incentives to invest in new services. Last, but not least, in a co-authored piece, Filip Lubinski and Lazar Radic joked that self-preferencing bans could be extended to the offline world:

The success of AICOA has opened our eyes to an even more ancient and perverse evil: self-preferencing in offline markets. It revealed to us that—for centuries, if not millennia—companies in various industries—from togas to wine, from cosmetics to insurance—had, in fact, always preferred their own initiatives over those of their rivals!

The Problems of Online Precrime

Online regulations like AICOA and the DMA mark a radical shift from existing antitrust laws. They move competition policy from a paradigm of ex post enforcement, based upon a detailed case-by-case analysis of effects, to one of ex ante prohibitions.

Despite obvious and superficial differences, there are clear parallels between this new paradigm and the world of “The Minority Report”: firms would be punished for behavior that has not yet transpired or is not proven to harm consumers.

This might be fine if we knew for certain that the prohibited conduct would harm consumers (i.e., if there were no “minority reports,” to use our previous analogy). But every entry in our symposium suggests things are not that simple. There are a wide range of outcomes and potential harms associated with the regulation of digital markets. This calls for a more calibrated approach to digital-competition policy, as opposed to the precrime of AICOA and the DMA.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

Things are heating up in the antitrust world. There is considerable pressure to pass the American Innovation and Choice Online Act (AICOA) before the congressional recess in August—a short legislative window before members of Congress shift their focus almost entirely to campaigning for the mid-term elections. While it would not be impossible to advance the bill after the August recess, it would be a steep uphill climb.

But whether it passes or not, some of the damage from AICOA may already be done. The bill has moved the antitrust dialogue in a direction that will harm innovation and consumers. In this post, I will first explain AICOA’s fundamental flaws. Next, I discuss the negative impact that the legislation is likely to have if passed, even if courts and agencies do not aggressively enforce its provisions. Finally, I show how AICOA has already provided an intellectual victory for the approach articulated in the European Union (EU)’s Digital Markets Act (DMA). It has built momentum for a dystopian regulatory framework to break up and break into U.S. superstar firms designated as “gatekeepers” at the expense of innovation and consumers.

The Unseen of AICOA

AICOA’s drafters argue that, once passed, it will deliver numerous economic benefits. Sen. Amy Klobuchar (D-Minn.)—the bill’s main sponsor—has stated that it will “ensure small businesses and entrepreneurs still have the opportunity to succeed in the digital marketplace. This bill will do just that while also providing consumers with the benefit of greater choice online.”

Section 3 of the bill would provide “business users” of the designated “covered platforms” with a wide range of entitlements. This includes preventing the covered platform from offering any services or products that a business user could provide (the so-called “self-preferencing” prohibition); allowing a business user access to the covered platform’s proprietary data; and an entitlement for business users to have “preferred placement” on a covered platform without having to use any of that platform’s services.

These entitlements would provide non-platform businesses what are effectively claims on the platform’s proprietary assets, notwithstanding the covered platform’s own investments to collect data, create services, and invent products—in short, the platform’s innovative efforts. As such, AICOA is redistributive legislation that creates the conditions for unfair competition in the name of “fair” and “open” competition. It treats the behavior of “covered platforms” differently than identical behavior by their competitors, without considering the deterrent effect such a framework will have on consumers and innovation. Thus, AICOA offers rent-seeking rivals a formidable avenue to reap considerable benefits at the expense of innovators, thanks to the weaponization of antitrust to subvert, not improve, competition.

In mandating that covered platforms make their data and proprietary assets freely available to “business users” and rivals, AICOA undermines the underpinnings of free markets in pursuit of the misguided goal of “open markets.” The inevitable result will be the tragedy of the commons. If covered platforms cannot benefit from their entrepreneurial endeavors, the law no longer encourages innovation. As Joseph Schumpeter seminally predicted: “perfect competition implies free entry into every industry … But perfectly free entry into a new field may make it impossible to enter it at all.”

To illustrate, if business users can freely access, say, a special status on the covered platforms’ ancillary services without having to use any of the covered platform’s services (as required under Section 3(a)(5)), then platforms are disincentivized from inventing zero-priced services, since they cannot cross-monetize these services with existing services. Similarly, if, under Section 3(a)(1) of the bill, business users can stop covered platforms from pre-installing or preferencing an app whenever they happen to offer a similar app, then covered platforms will be discouraged from investing in or creating new apps. Thus, the bill would generate a considerable deterrent effect for covered platforms to invest, invent, and innovate.

AICOA’s most detrimental consequences may not be immediately apparent; they could instead manifest in larger and broader downstream impacts that will be difficult to undo. As the 19th-century French economist Frédéric Bastiat wrote: “a law gives birth not only to an effect but to a series of effects. Of these effects, the first only is immediate; it manifests itself simultaneously with its cause—it is seen. The others unfold in succession—they are not seen: it is well for us, if they are foreseen … it follows that the bad economist pursues a small present good, which will be followed by a great evil to come, while the true economist pursues a great good to come,—at the risk of a small present evil.”

To paraphrase Bastiat, AICOA offers ill-intentioned rivals a “small present good” (unconditional access to the platforms’ proprietary assets) while society suffers the loss of a greater good (the incentives to innovate and the welfare gains to consumers). The logic is akin to that of advocates for abolishing intellectual-property rights: the immediate (and seen) gain is obvious, namely wider dissemination of innovation at a lower price, while the subsequent (and unseen) evil remains opaque, as the destruction of the institutional premises for innovation will generate considerable long-term innovation costs.

Fundamentally, AICOA weakens the benefits of scale by pursuing vertical disintegration of the covered platforms to the benefit of short-term static competition. In the long term, however, the bill would dampen dynamic competition, ultimately harming consumer welfare and the capacity for innovation. The measure’s opportunity costs will prevent covered platforms’ innovations from benefiting other business users or consumers. They personify the “unseen,” as Bastiat put it: “[they are] always in the shadow, and who, personifying what is not seen, [are] an essential element of the problem. [They make] us understand how absurd it is to see a profit in destruction.”

The costs could well amount to hundreds of billions of dollars for the U.S. economy, even before accounting for the costs of deterred innovation. The unseen is costly, the seen is cheap.

A New Robinson-Patman Act?

Most antitrust laws are terse, vague, and old: The Sherman Act of 1890, the Federal Trade Commission Act, and the Clayton Act of 1914 deal largely in generalities, with considerable deference for courts to elaborate in a common-law tradition on the specificities of what “restraints of trade,” “monopolization,” or “unfair methods of competition” mean.

In 1936, Congress passed the Robinson-Patman Act, designed to protect competitors from the then-disruptive competition of large firms who—thanks to scale and practices such as price differentiation—upended traditional incumbents to the benefit of consumers. Passed after “Congress made no factual investigation of its own, and ignored evidence that conflicted with accepted rhetoric,” the law prohibits price differentials that would benefit buyers, and ultimately consumers, in order to blunt the vigorous competition coming from more efficient, more productive firms. Indeed, under the Robinson-Patman Act, manufacturers cannot give a bigger discount to a distributor who would pass these savings onto consumers, even if the distributor performs extra services relative to others.

Former President Gerald Ford declared in 1975 that the Robinson-Patman Act “is a leading example of [a law] which restrain[s] competition and den[ies] buyers substantial savings. … It discourages both large and small firms from cutting prices, making it harder for them to expand into new markets and pass on to customers the cost-savings on large orders.” Despite this, calls to amend or repeal the Robinson-Patman Act—supported by, among others, competition scholars like Herbert Hovenkamp and Robert Bork—have failed.

In the 1983 Abbott decision, Justice Lewis Powell wrote: “The Robinson-Patman Act has been widely criticized, both for its effects and for the policies that it seeks to promote. Although Congress is aware of these criticisms, the Act has remained in effect for almost half a century.”

Nonetheless, the act’s enforcement dwindled, thanks to wise restraint by the antitrust agencies and the courts. While it is seldom enforced today, the act continues to create considerable legal uncertainty, as it raises regulatory risks for companies whose behavior may conflict with its provisions. Indeed, many of the same so-called “neo-Brandeisians” who support passage of AICOA also advocate reinvigorating Robinson-Patman. More specifically, the new FTC majority has signaled that it is eager to revitalize Robinson-Patman, even though the law protects less efficient competitors. In other words, the Robinson-Patman Act is a zombie law: dead, but still moving.

Even if the antitrust agencies and courts ultimately exercise the same regulatory and judicial restraint toward AICOA that they have toward Robinson-Patman, the legal uncertainty engendered by the law’s existence will act as a powerful deterrent to the disruptive competition that dynamically benefits consumers and innovation. In short, as with the Robinson-Patman Act, antitrust agencies and courts will either enforce AICOA, generating the law’s adverse effects on consumers and innovation, or refrain from enforcing it, in which case the legal uncertainty will produce unseen, harmful effects on innovation and consumers.

For instance, the bill’s prohibition on “self-preferencing” in Section 3(a)(1) will prevent covered platforms from offering consumers new products and services that happen to compete with incumbents’ products and services. Self-preferencing often is a pro-competitive, pro-efficiency practice that companies widely adopt—a reality that AICOA seems to ignore.

Would AICOA prevent, e.g., Apple from offering a bundled subscription to Apple One, which includes Apple Music, so that the company can effectively compete with incumbents like Spotify? As with Robinson-Patman, antitrust agencies and courts will have to choose whether to enforce a productivity-decreasing law, or to ignore congressional intent but, in the process, generate significant legal uncertainties.

Judge Bork once wrote that Robinson-Patman was “antitrust’s least glorious hour” because, rather than improving competition and innovation, it reduced competition from firms that happened to be more productive, innovative, and efficient than their rivals. The law infamously protected inefficient competitors rather than competition. But from a legislative-history perspective, AICOA may be antitrust’s new “least glorious hour.” If adopted, it will adversely affect innovation and consumers, as opportunistic rivals will be able to prevent cost-saving practices by the covered platforms.

As with Robinson-Patman, calls to amend or repeal AICOA may follow its passage. But the Robinson-Patman Act illustrates the path dependency of bad antitrust laws. However costly and damaging, AICOA would likely stay in place, with regular calls for either stronger or weaker enforcement, depending on whether momentum shifts toward populist antitrust or toward antitrust more consistent with dynamic competition.

Victory of the Brussels Effect

The future of AICOA does not bode well for markets, either from a historical perspective or from a comparative-law perspective. The EU’s DMA similarly targets a few large tech platforms, but it is broader, harsher, and swifter. In the competition between these two examples of self-inflicted techlash, AICOA will pale in comparison with the DMA. Covered platforms will be forced to align with the DMA’s obligations and prohibitions.

Consequently, AICOA is a victory for the DMA and for the Brussels effect in general. AICOA effectively crowns the DMA as the all-encompassing regulatory assault on digital gatekeepers. While members of Congress have introduced numerous antitrust bills aimed at targeting gatekeepers, the DMA is the one-stop-shop regulation that encompasses multiple antitrust bills and imposes broader prohibitions and stronger obligations on gatekeepers. In other words, the DMA outcompetes AICOA.

Commentators seldom lament the extraterritorial impact of European regulations. When it comes to regulating digital gatekeepers, U.S. officials should have pushed back against the innovation-stifling, welfare-decreasing effects of the DMA on U.S. tech companies, in particular, and on U.S. technological innovation, in general. To be fair, a few U.S. officials, such as Commerce Secretary Gina Raimondo, did voice opposition to the DMA. Indeed, well aware of the DMA’s protectionist intent and its potential to break up and break into tech platforms, Raimondo expressed concerns that antitrust should not be about protecting competitors and deterring innovation, but rather about protecting the process of competition, however disruptive it may be.

The influential neo-Brandeisians and radical antitrust reformers, however, lashed out at Raimondo and effectively shamed the Biden administration into embracing the DMA (and its sister regulation, AICOA). Brussels did not have to exert its regulatory overreach; the U.S. administration happily imports and emulates European overregulation. There is no better way for European officials to see their dreams come true: a techlash against U.S. digital platforms that enjoys the support of local officials.

In that regard, AICOA has already played a significant role in shaping the intellectual mood in Washington and in altering the course of U.S. antitrust. Members of Congress designed AICOA along the lines pioneered by the DMA. Sen. Klobuchar has argued that America should emulate European competition policy regarding tech platforms. Lina Khan, now chair of the FTC, co-authored the U.S. House Antitrust Subcommittee report, which recommended adopting the European concept of “abuse of dominant position” in U.S. antitrust. In her current position, Khan now praises the DMA. Tim Wu, competition counsel for the White House, has praised European competition policy and officials. Indeed, the neo-Brandeisians have not only praised the European Commission’s fines against U.S. tech platforms (despite early criticisms from former President Barack Obama) but have more dramatically called for the United States to imitate the European regulatory framework.

In this regulatory race to inefficiency, the standard is set in Brussels, with the blessing of U.S. officials. Not even the precedent set by the EU’s General Data Protection Regulation (GDPR) fully captures the effects the DMA will have. Privacy laws passed by U.S. states have mostly reacted to the reality of the GDPR. With AICOA, Congress is proactively anticipating, emulating, and welcoming the DMA before it has even been adopted. The intellectual and policy shift is historic, and so is the policy error.

AICOA and the Boulevard of Broken Dreams

AICOA is a failure similar to the Robinson-Patman Act and a victory for the Brussels effect and the DMA. Consumers will be the collateral damage, and the unseen effects on innovation will take years to materialize. Calls to amend or repeal AICOA are likely to fail, so its inevitable costs will be borne, indefinitely, by consumers and by innovation dynamics.

AICOA illustrates the neo-Brandeisian opposition to large innovative companies. Joseph Schumpeter warned against such hostility, and its power to disincentivize entrepreneurs from innovating, when he wrote:

Faced by the increasing hostility of the environment and by the legislative, administrative, and judicial practice born of that hostility, entrepreneurs and capitalists—in fact the whole stratum that accepts the bourgeois scheme of life—will eventually cease to function. Their standard aims are rapidly becoming unattainable, their efforts futile.

President William Howard Taft once said, “the world is not going to be saved by legislation.” AICOA will not save antitrust, nor will it save consumers. To paraphrase Schumpeter, the bill’s drafters “walked into our future as we walked into the war, blindfolded.” AICOA’s intentions to deliver greater competition, a fairer marketplace, greater consumer choice, and more consumer benefits will ultimately scatter across the boulevard of broken dreams.

The Baron de Montesquieu once wrote that legislators should only change laws with a “trembling hand”:

It is sometimes necessary to change certain laws. But the case is rare, and when it happens, they should be touched only with a trembling hand: such solemnities should be observed, and such precautions taken, that the people will naturally conclude that the laws are indeed sacred, since it takes so many formalities to abrogate them.

AICOA’s drafters had a clumsy hand, coupled with what Friedrich Hayek would call “a pretense of knowledge.” They were certain they were doing social good and incapable of imagining they might do social harm. The future will remember AICOA as the new antitrust’s least glorious hour, in which consumers and innovation were sacrificed on the altar of a revitalized populist view of antitrust.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

In Free to Choose, Milton Friedman famously noted that there are four ways to spend money[1]:

  1. Spending your own money on yourself. For example, buying groceries or lunch. There is a strong incentive to economize and to get full value.
  2. Spending your own money on someone else. For example, buying a gift for another. There is a strong incentive to economize, but perhaps less to achieve full value from the other person’s point of view. Altruism is admirable, but it differs from value maximization, since—strictly speaking—giving cash would maximize the other’s value. Perhaps the point of a gift is precisely that it is not cash, and thus not a pure maximization of the other person’s welfare from their own point of view.
  3. Spending someone else’s money on yourself. For example, an expensed business lunch. “Pass me the filet mignon and Chateau Lafite! Do you have one of those menus without any prices?” There is a strong incentive to get maximum utility, but there is little incentive to economize.
  4. Spending someone else’s money on someone else. For example, applying the proceeds of taxes or donations. There may be an indirect desire to see utility, but incentives for quality and cost management are often diminished.

This framework can be criticized. Altruism has a role. Not all motives are selfish. There is an important role for action to help those less fortunate, which might mean, for instance, that a charity gains more utility from category (4) (assisting the needy) than from category (3) (the charity’s holiday party). It always depends on the facts and the context. However, there is certainly a grain of truth in the observation that charity begins at home and that, in the final analysis, people are best at managing their own affairs.

How would this insight apply to data interoperability? The difficult cases of assisting the needy do not arise here: there is no serious sense in which data interoperability does, or does not, result in destitution. Thus, Friedman’s observations seem to ring true: when spending data, those whose data it is seem most likely to maximize its value. This is especially so where collection of data responds to incentives—that is, the amount of data collected and processed responds to how much control over the data is possible.

The obvious exception to this would be a case of market power. If there is a monopoly with persistent barriers to entry, then the incentive may not be to maximize total utility, but rather to limit data handling so that a higher price can be charged for the lesser amount of data that remains available. This has arguably been seen with some data-handling rules: the “Jedi Blue” agreement on advertising bidding, Apple’s Intelligent Tracking Prevention and App Tracking Transparency, and Google’s proposed Privacy Sandbox all restrict the ability of others to handle data. Indeed, they may fail Friedman’s framework, since they amount to the platform deciding how to spend others’ data—in this case, by not allowing them to collect and process it at all.

It should be emphasized, though, that this is a special case. It depends on market power, and existing antitrust and competition laws speak to it. The courts will decide whether cases like Daily Mail v Google and Texas et al. v Google show illegal monopolization of data flows, so as to fall within this special case of market power. Outside the United States, cases like the U.K. Competition and Markets Authority’s Google Privacy Sandbox commitments and the European Union’s proposed commitments with Amazon seek to allow others to continue to handle their data and to prevent exclusivity from arising from platform dynamics, which could happen if a large platform prevents others from deciding how to account for data they are collecting. It will be recalled that even Robert Bork thought there was a risk of market-power harms from the large Microsoft Windows platform a generation ago.[2] Where market-power risks are proven, there is a strong case that data exclusivity raises concerns because of an artificial barrier to entry. It would only be if the benefits of centralized data control were to outweigh the deadweight loss from data restrictions that this would be untrue (though query how well the legal processes verify this).

Yet the latest proposals go well beyond this. A broad interoperability right amounts to “open season” for spending others’ data. This makes perfect sense in the European Union, where there is no large domestic technology platform, meaning that the data is essentially owned by foreign entities (mostly, the shareholders of successful U.S. and Chinese companies). It must be very tempting to run an industrial policy on the basis that “we’ll never be Google” and thus to embrace “sharing is caring” as to others’ data.

But this would transgress the warning from Friedman: would people optimize data collection if it is open to mandatory sharing even without proof of market power? It is deeply concerning that the EU’s DATA Act is accompanied by an infographic that suggests that coffee-machine data might be subject to mandatory sharing, to allow competition in services related to the data (e.g., sales of pods; spare-parts automation). There being no monopoly in coffee machines, this simply forces vertical disintegration of data collection and handling. Why put a data-collection system into a coffee maker at all, if it is to be a common resource? Friedman’s category (4) would apply: the data is taken and spent by another. There is no guarantee that there would be sensible decision making surrounding the resource.

It will be interesting to see how common-law jurisdictions approach this issue. At the risk of stating the obvious, the polity in continental Europe differs from that in the English-speaking democracies when it comes to whether the collective, or the individual, should be in the driving seat. A close read of the UK CMA’s Google commitments is interesting, in that paragraph 30 requires no self-preferencing in data collection and requires future data-handling systems to be designed with impacts on competition in mind. No doubt the CMA is seeking to prevent data-handling exclusivity on the basis that this prevents companies from using their data collection to compete. This is far from the EU DATA Act’s position in that it is certainly not a right to handle Google’s data: it is simply a right to continue to process one’s own data.

U.S. proposals are at an earlier stage. It would seem important, as a matter of principle, not to make arbitrary decisions about vertical integration in data systems, and to identify specific market-power concerns instead, in line with common-law approaches to antitrust.

It might be very attractive to the EU to spend others’ data on their behalf, but that does not make it right. Those working on the U.S. proposals would do well to ensure that there is a meaningful market-power gate to avoid unintended consequences.

Disclaimer: The author was engaged for expert advice relating to the UK CMA’s Privacy Sandbox case on behalf of the complainant Marketers for an Open Web.


[1] Milton Friedman, Free to Choose (1980), pp. 115–119.

[2] Comments at the Yale Law School conference, Robert H. Bork’s influence on Antitrust Law, Sep. 27-28, 2013.

Having earlier passed through subcommittee, the American Data Privacy and Protection Act (ADPPA) has now been cleared for floor consideration by the U.S. House Energy and Commerce Committee. Before the markup, we noted that the ADPPA mimics some of the worst flaws found in the European Union’s General Data Protection Regulation (GDPR), while creating new problems that the GDPR had avoided. Alas, the amended version of the legislation approved by the committee not only failed to correct those flaws, but in some cases actually undid some of the welcome corrections that had been made to the original discussion draft.

Is Targeted Advertising ‘Strictly Necessary’?

The ADPPA’s original discussion draft classified “information identifying an individual’s online activities over time or across third party websites” within the broader category of “sensitive covered data,” for which a consumer’s expression of affirmative consent (“cookie consent”) would be required for collection or processing. Perhaps noticing the questionable utility of such a rule, the bill’s sponsors removed “individual’s online activities” from the definition of “sensitive covered data” in the version of ADPPA that was ultimately introduced.

The manager’s amendment from Energy and Commerce Committee Chairman Frank Pallone (D-N.J.) reverted that change, and “individual’s online activities” are once again deemed to be “sensitive covered data.” However, the marked-up version of the ADPPA doesn’t require express consent to collect sensitive covered data. In fact, it seems not to consider the possibility of user consent; firms will instead be asked to prove that their collection of sensitive data was a “strict necessity.”

The new rule for sensitive data—in Section 102(2)—is that collecting or processing such data is allowed “where such collection or processing is strictly necessary to provide or maintain a specific product or service requested by the individual to whom the covered data pertains, or is strictly necessary to effect a purpose enumerated” in Section 101(b) (though with exceptions—notably for first-party advertising and targeted advertising).
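
To make the structure of this rule concrete, the following is a minimal sketch of the Section 102(2) logic as we read it. It is illustrative only: the function and variable names are ours, not the statute’s, and the sample purposes are placeholders.

```python
# Illustrative sketch of the Section 102(2) structure as we read it; the
# names and sample purposes are ours, not the statute's.

# Placeholders standing in for the Section 101(b) enumerated purposes,
# minus the exceptions carved out for sensitive covered data (notably
# first-party advertising and targeted advertising).
SAMPLE_ENUMERATED_PURPOSES = {"security", "fraud_prevention", "legal_compliance"}

def may_process_sensitive_data(
    strictly_necessary_for_requested_service: bool,
    strictly_necessary_purpose: str | None = None,
) -> bool:
    """Sensitive covered data may be collected or processed only if doing
    so is strictly necessary to provide or maintain the specific product
    or service the individual requested, or strictly necessary to effect
    one of the enumerated purposes."""
    if strictly_necessary_for_requested_service:
        return True
    return strictly_necessary_purpose in SAMPLE_ENUMERATED_PURPOSES
```

Note that both branches turn on “strict necessity,” a predicate the bill leaves undefined; that undefined term is precisely where the trouble starts.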

This raises the question of whether, e.g., the use of targeted advertising based on a user’s online activities is “strictly necessary” to provide or maintain Facebook’s social network. Even if the courts eventually decide, in some cases, that it is, we can expect a good deal of litigation on this point. This litigation risk will impose significant burdens on providers of ad-supported online services. Moreover, it would effectively invite judges to make business decisions, a role for which they are profoundly ill-suited.

Given that the ADPPA includes a “right to opt-out of targeted advertising” (Section 204(c)) and a special targeted-advertising “permissible purpose” (Section 101(b)(17)), it must be possible for businesses to engage in targeted advertising. And if it is possible, then collecting and processing the information needed for targeted advertising—including information on an “individual’s online activities,” e.g., unique identifiers (Section 2(39))—must be capable of being “strictly necessary to provide or maintain a specific product or service requested by the individual.” (Alternatively, it could be strictly necessary for one of the other permissible purposes in Section 101(b), but none of them appear to apply to collecting data for the purpose of targeted advertising.)

The ADPPA itself thus provides for the possibility of targeted advertising. Therefore, there should be no reason for legal ambiguity about when collecting an “individual’s online activities” is “strictly necessary to provide or maintain a specific product or service requested by the individual.” Do we want judges or other government officials to decide which ad-supported services “strictly” require targeted advertising? Choosing business models for private enterprises is hardly an appropriate role for the government. The easiest way out of this conundrum would be simply to remove the ill-considered extension of “sensitive covered data” and revert to the definition in the ADPPA version that was initially introduced.

Developing New Products and Services

As noted previously, the original ADPPA discussion draft allowed first-party use of personal data to “provide or maintain a specific product or service requested by an individual” (Section 101(a)(1)). What about using the data to develop new products and services? Can a business even request user consent for that? Under the GDPR, that is possible. Under the ADPPA, it may not be.

The general limitation on data use (“provide or maintain a specific product or service requested by an individual”) was retained from the original ADPPA discussion draft in the version approved by the committee. As originally introduced, the bill included an exception, in Section 101(b)(2), that could have partially addressed the concern (emphasis added):

With respect to covered data previously collected in accordance with this Act, notwithstanding this exception, to process such data as necessary to perform system maintenance or diagnostics, to maintain a product or service for which such data was collected, to conduct internal research or analytics, to improve a product or service for which such data was collected …

Arguably, developing new products and services largely involves “internal research or analytics,” which would be covered under this exception. If the business later wanted to invite users of an old service to use a new service, the business could contact them based on a separate exception for first-party marketing and advertising (Section 101(b)(11) of the introduced bill).

This welcome development was reversed in the manager’s amendment. The new text of the exception (now Section 101(b)(2)(C)) is narrower in a key way (emphasis added): “to conduct internal research or analytics to improve a product or service for which such data was collected.” Hence, it still looks like businesses will find it difficult to use first-party data to develop new products or services.

‘De-Identified Data’ Remains Unclear

Our earlier analysis noted significant confusion in the ADPPA’s concept of “de-identified data.” Neither the introduced version nor the markup amendments addressed those concerns, so it seems worthwhile to repeat and update the criticism here. The drafters seemed to be aiming for a partial exemption from the default data-protection regime for datasets that no longer contain personally identifying information, but that are derived from datasets that once did. Instead of providing such an exemption, however, the rules for de-identified data essentially extend the ADPPA’s scope to nonpersonal data, while also creating a whole new set of problems.

The basic problem is that the definition of “de-identified data” in the ADPPA is not limited to data derived from identifiable data. In the marked-up version, the definition covers: “information that does not identify and is not linked or reasonably linkable to a distinct individual or a device, regardless of whether the information is aggregated.” In other words, it is the converse of “covered data” (personal data): whatever is not “covered data” is “de-identified data.” Even if some data are not personally identifiable and are not a result of a transformation of data that was personally identifiable, they still count as “de-identified data.” If this reading is correct, it creates an absurd result that sweeps all information into the scope of the ADPPA.
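To make the definitional problem concrete, here is a minimal sketch of the logic in Python. The predicates and field names are my own illustrative stand-ins, not the bill’s text:

```python
# Illustrative only: these toy predicates paraphrase the statutory tests;
# the field names are hypothetical.

def is_covered(record: dict) -> bool:
    # Toy stand-in for "covered data": any record carrying an
    # identifying field (or reasonably linkable to an individual).
    identifying_fields = {"name", "email", "device_id"}
    return bool(identifying_fields & record.keys())

def is_deidentified(record: dict) -> bool:
    # The marked-up definition is simply the converse of "covered data":
    # whatever is not covered data counts as de-identified data, with no
    # requirement that it was ever derived from identifiable data.
    return not is_covered(record)

weather = {"city": "Helsinki", "temp_c": -4.0}  # never personal data
print(is_deidentified(weather))  # True: even a weather reading falls
                                 # under the ADPPA's de-identified-data rules
```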

For the sake of argument, let’s assume that this confusion can be fixed and that the definition of “de-identified data” is limited to data that:

  1. are derived from identifiable data;
  2. hold some possibility of re-identification (weaker than “reasonably linkable”); and
  3. are processed by the entity that previously processed the original identifiable data.

Remember that we are talking about data that are not “reasonably linkable to an individual.” Hence, the intent appears to be that the rules on de-identified data would apply to nonpersonal data that would otherwise not be covered by the ADPPA.

The rationale for this may be that it is difficult, legally and practically, to differentiate between personally identifiable data and data that are not personally identifiable. A good deal of seemingly “anonymous” data may be linked to an individual—e.g., by connecting the dataset at hand with some other dataset.

The case for regulation in an example where a firm clearly dealt with personal data, and then derived some apparently de-identified data from them, may actually be stronger than in the case of a dataset that was never directly derived from personal data. But is that case sufficient to justify the ADPPA’s proposed rules?

The ADPPA imposes several duties on entities dealing with “de-identified data” in Section 2(12) of the marked-up version:

  1. To take “reasonable technical measures to ensure that the information cannot, at any point, be used to re-identify any individual or device that identifies or is linked or reasonably linkable to an individual”;
  2. To publicly commit “in a clear and conspicuous manner—
    1. to process and transfer the information solely in a de-identified form without any reasonable means for re-identification; and
    2. to not attempt to re-identify the information with any individual or device that identifies or is linked or reasonably linkable to an individual;”
  3. To “contractually obligate[] any person or entity that receives the information from the covered entity or service provider” to comply with all of the same rules and to include such an obligation “in all subsequent instances for which the data may be received.”

The first duty is superfluous and adds interpretative confusion, given that de-identified data, by definition, are not “reasonably linkable” with individuals.

The second duty (the public commitment) unreasonably restricts what can be done with nonpersonal data. Firms may have many legitimate reasons to de-identify data and then re-identify them later. This provision would effectively prohibit firms from attempting data minimization (resulting in de-identification) whenever they might, at any point in the future, need to link the data back to individuals. It seems that the drafters had some very specific (and likely rare) mischief in mind, but they ended up prohibiting a vast sphere of innocuous activity.

Note that, for data to become “de-identified data,” they must first be collected and processed as “covered data” in conformity with the ADPPA and then transformed (de-identified) in such a way as to no longer meet the definition of “covered data.” If someone then re-identifies the data, this will again constitute “collection” of “covered data” under the ADPPA. At every point of the process, personally identifiable data is covered by the ADPPA rules on “covered data.”

Finally, the third duty, the “share-alike” requirement (to “contractually obligate[] any person or entity that receives the information from the covered entity to comply”), faces much the same problem as the second. Under this provision, the only way to preserve a third party’s option to identify the individuals linked to the data will be for the third party to receive the data in personally identifiable form. In other words, the provision makes it impossible to share data in a de-identified form while preserving the possibility of re-identification.

One would expect the law to allow sharing data in de-identified form, in line with the principle of data minimization. What the ADPPA does instead is effectively impose a duty to share de-identified personal data together with identifying information: a truly bizarre result, directly contrary to that principle.

Fundamental Issues with Enforcement

One of the most important problems with the ADPPA is its enforcement provisions. Most notably, the private right of action creates pernicious incentives for excessive litigation by providing for both compensatory damages and open-ended injunctive relief. Small businesses have a right to cure before damages can be sought, but many larger firms are not given a similar entitlement. Given open-ended questions like whether using web-browsing behavior is “strictly necessary” to improve a product or service, the litigation incentives are obvious. At the very least, there should be a general opportunity to cure, particularly given the broad restrictions placed on essentially all data use.

The bill also creates multiple overlapping power centers for enforcement (as we have previously noted):

The bill carves out numerous categories of state law that would be excluded from pre-emption… as well as several specific state laws that would be explicitly excluded, including Illinois’ Genetic Information Privacy Act and elements of the California Consumer Privacy Act. These broad carve-outs practically ensure that ADPPA will not create a uniform and workable system, and could potentially render the entire pre-emption section a dead letter. As written, it offers the worst of both worlds: a very strict federal baseline that also permits states to experiment with additional data-privacy laws.

Unfortunately, the marked-up version appears to double down on these problems. For example, the bill pre-empts the Federal Communications Commission (FCC) from enforcing sections 222, 338(i), and 631 of the Communications Act, which pertain to privacy and data security. An amendment was offered that would have pre-empted the FCC from enforcing any provisions of the Communications Act (e.g., sections 201 and 202) for data-security and privacy purposes, but it was withdrawn. Keeping two federal regulators on the beat for a single subject area creates an inefficient regime. The FCC should be completely pre-empted from regulating privacy issues for covered entities.

The amended bill also includes an ambiguous provision that appears to serve as a partial carveout for enforcement by the California Privacy Protection Agency (CPPA). Some members of the California delegation—notably, committee members Anna Eshoo and Doris Matsui (both D-Calif.)—have expressed concern that the bill would pre-empt the state’s California Privacy Rights Act. A proposed amendment by Eshoo to clarify that the bill was merely a federal “floor” and that state laws may go beyond ADPPA’s requirements failed in a 48-8 roll-call vote. However, the marked-up version of the legislation does explicitly specify that the CPPA “may enforce this Act, in the same manner, it would otherwise enforce the California Consumer Privacy Act.” How courts might interpret this language, should the CPPA seek to enforce provisions of the CCPA that otherwise conflict with the ADPPA, is unclear, thus magnifying the problem of compliance with multiple regulators.

Conclusion

As originally conceived, the basic conceptual structure of the ADPPA was, to a very significant extent, both confused and confusing. Not much, if anything, has since improved—especially in the marked-up version, which regressed the ADPPA to some of the notably bad features of the original discussion draft. The rules on de-identified data are also deeply puzzling: their effect contradicts the basic principle of data minimization that the ADPPA purports to uphold. These examples strongly suggest that the ADPPA is still far from being a properly considered candidate for comprehensive federal privacy legislation.

Winter in Helsinki

Dan Crane —  25 July 2022

[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

Jouko Hiltunen gazed out the window into the midday twilight. Eight stories down, across the plaza and promenade, the Helsinki harbor was already blanketed under a dusting of snow. By Christmas, the ice would be thick enough for walking out to the castle at Suomenlinna.

Jouko turned back to his computer screen. His fingers found the keys. At once, lines of code began spilling from the keyboard.

The desk phone rang. Sanna, who occupied the adjacent cubicle, arched her eyebrows. “Legal again?”

Jouko nodded. Without answering the phone, he got up and walked down three flights of stairs. The usual group was assembled in Partanen’s office: the woman in the dour gray suit who looked like an osprey, the fat man from Brussels who made them speak in English, and Partanen, the general counsel.

By habit, Jouko entered and stood behind a chair. Partanen nodded curtly. “We have an issue, Hiltunen. Again.”

“What now?”

“We’ve been watching how you’re coding the new walking tour search vertical. It seems that you are designing it to give preference to restaurants, cafés, and hotels that have been highly rated by the Tourism Board.”

“Yes, that’s right. Restaurants, cafés, and hotels that have been rated by the Tourism Board are cleaner, safer, and more convenient. That’s why they have been rated.”

“But you are forgetting that the Tourism Board is one of our investors. This will be considered self-preferencing.”

“But . . .”

“Listen, Hiltunen. We aren’t here to argue about this. Maybe it will, maybe it won’t be considered self-preferencing, but our company won’t take that risk. Do you understand?”

“No.”

“Then let me explain it . . .”

But Jouko had already left. When he returned to his desk, Sanna was watching him. “Everything OK?” she asked.

Jouko shrugged. He started typing again, but more slowly than before. An hour later, the phone rang again. This time, Sanna only raised an eyebrow. Jouko gave half a nod and ambled downstairs.

“You are making it worse,” said Partanen. The osprey woman scowled and raked her fingernails across the desk.

“How am I making it worse? I did what you said and eliminated search results defaulting to rated establishments.”

“Yes, but you added a toggle for users to be shown only rated establishments.”

“Only if they decide to be shown only rated establishments. I’m giving them a choice.”

“Choice? What does choice have to do with it? Everyone who uses our search engine is choosing”—Partanen made rabbit ears in the air—“but we have a responsibility not to impede competition. If you give them a suggestive choice”—again, rabbit ears—“that will be considered self-preferencing.”

“Really?”

“Well, maybe it will and maybe it won’t, but the company won’t take the risk.”

When Jouko returned to his desk, Sanna averted her eyes. As he sat motionless behind his keyboard, hands folded in his lap, she occasionally shot him concerned glances.

The darkness outside was nearly complete when the phone rang again. Jouko let it go to voicemail and waited a long time before rising and walking wearily downstairs.

“What now? I haven’t done anything.”

“We’ve been talking and have a new idea. It would be better if you blocked from the search results any restaurants or hotels that have been rated by the Tourism Board. That way, there is no chance that we will be accused of self-preferencing.”

“Or that people will end up in a safe, clean, or convenient restaurant.”

“That’s not your problem, is it?”

Jouko returned to his cubicle. He did not sit down at his desk, but started putting on his coat.

“Where are you going?” asked Sanna.

“I’m going to walk out towards Suomenlinna.”

Sanna’s voice rose in alarm: “But the ice has barely formed. It won’t hold you.”

Jouko shrugged. “Maybe it will, maybe it won’t. I’ll take the risk.”

[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

Earlier this month, Professors Fiona Scott Morton, Steve Salop, and David Dinielli penned a letter expressing their “strong support” for the proposed American Innovation and Choice Online Act (AICOA). In the letter, the professors address criticisms of AICOA and urge its approval, despite possible imperfections.

“Perhaps this bill could be made better if we lived in a perfect world,” the professors write, “[b]ut we believe the perfect should not be the enemy of the good, especially when change is so urgently needed.”

The problem is that the professors and other supporters of AICOA have shown neither that “change is so urgently needed” nor that the proposed law is, in fact, “good.”

Is Change ‘Urgently Needed’?

With respect to the purported urgency that warrants passage of a concededly imperfect bill, the letter authors assert two points. First, they claim that AICOA’s targets—Google, Apple, Facebook, Amazon, and Microsoft (collectively, GAFAM)—“serve as the essential gatekeepers of economic, social, and political activity on the internet.” It is thus appropriate, they say, to amend the antitrust laws to do something they have never before done: saddle a handful of identified firms with special regulatory duties.

But is this oft-repeated claim about “gatekeeper” status true? The label conjures up the old Terminal Railroad case, where a group of firms controlled the only bridges over the Mississippi River at St. Louis. Freighters had no choice but to utilize their services. Do the GAFAM firms really play a similar role with respect to “economic, social, and political activity on the internet”? Hardly.

With respect to economic activity, Amazon may be a huge player, but it still accounts for only 39.5% of U.S. ecommerce sales—and far less of retail sales overall. Consumers have gobs of other ecommerce options, and so do third-party merchants, which may sell their wares using Shopify, eBay, Walmart, Etsy, numerous other ecommerce platforms, or their own websites.

For social activity on the internet, consumers need not rely on Facebook and Instagram. They can connect with others via Snapchat, Reddit, Pinterest, TikTok, Twitter, and scores of other sites. To be sure, all these services have different niches, but the letter authors’ claim that the GAFAM firms are “essential gatekeepers” of “social… activity on the internet” is spurious.

Nor are the firms singled out by AICOA essential gatekeepers of “political activity on the internet.” The proposed law touches neither Twitter, the primary hub of political activity on the internet, nor TikTok, which is increasingly used for political messaging.

The second argument the letter authors assert in support of their claim of urgency is that “[t]he decline of antitrust enforcement in the U.S. is well known, pervasive, and has left our jurisprudence unable to protect and maintain competitive markets.” In other words, contemporary antitrust standards are anemic and have led to a lack of market competition in the United States.

The evidence for this claim, which is increasingly parroted in the press and among the punditry, is weak. Proponents primarily point to studies showing:

  1. increasing industrial concentration;
  2. higher markups on goods and services since 1980;
  3. a declining share of surplus going to labor, which could indicate monopsony power in labor markets; and
  4. a reduction in startup activity, suggesting diminished innovation. 

Examined closely, however, those studies fail to establish a domestic market power crisis.

Industrial concentration has little to do with market power in actual markets. Indeed, research suggests that, while industries may be consolidating at the national level, competition at the market (local) level is increasing, as more efficient national firms open more competitive outlets in local markets. As Geoff Manne sums up this research:

Most recently, several working papers looking at the data on concentration in detail and attempting to identify the likely cause for the observed data, show precisely the opposite relationship. The reason for increased concentration appears to be technological, not anticompetitive. And, as might be expected from that cause, its effects are beneficial. Indeed, the story is both intuitive and positive.

What’s more, while national concentration does appear to be increasing in some sectors of the economy, it’s not actually so clear that the same is true for local concentration — which is often the relevant antitrust market.

With respect to the evidence on markups, the claim of a significant increase in the price-cost margin depends crucially on the measure of cost. The studies suggesting an increase in margins since 1980 use the “cost of goods sold” (COGS) metric, which excludes a firm’s management and marketing costs—both of which have become an increasingly significant portion of firms’ costs. Measuring costs using the “operating expenses” (OPEX) metric, which includes management and marketing costs, reveals that public-company markups increased only modestly since the 1980s and that the increase was within historical variation. (It is also likely that increased markups since 1980 reflect firms’ more extensive use of technology and their greater regulatory burdens, both of which raise fixed costs and require higher markups over marginal cost.)
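A stylized calculation illustrates how much the choice of cost metric matters. The figures below are hypothetical, chosen only to show the mechanics, and are not drawn from the studies discussed above:

```python
# Hypothetical figures: production costs (COGS) fall over time while
# management and marketing costs (excluded from COGS) grow.

def markup(price: float, cost: float) -> float:
    """Price-cost margin, expressed as price divided by cost."""
    return price / cost

price_1980, cogs_1980 = 100.0, 80.0
opex_1980 = cogs_1980 + 10.0   # COGS plus management/marketing costs

price_now, cogs_now = 100.0, 60.0
opex_now = cogs_now + 30.0

print(markup(price_1980, cogs_1980), markup(price_now, cogs_now))
# 1.25 -> ~1.67: a large apparent markup increase under COGS
print(markup(price_1980, opex_1980), markup(price_now, opex_now))
# ~1.11 -> ~1.11: essentially flat once all operating costs are counted
```

In this example, prices and total costs are identical in both periods; only the composition of costs shifts toward marketing and management, yet the COGS-based markup rises by a third while the OPEX-based markup does not move at all.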

As for the declining labor share, that dynamic is occurring globally. Indeed, the decline in the labor share in the United States has been less severe than in Japan, Canada, Italy, France, Germany, China, Mexico, and Poland, suggesting that anemic U.S. antitrust enforcement is not to blame. (A reduction in the relative productivity of labor is a more likely culprit.)

Finally, the claim of reduced startup activity is unfounded. In its report on competition in digital markets, the U.S. House Judiciary Committee asserted that, since the advent of the major digital platforms:

  1. “[t]he number of new technology firms in the digital economy has declined”;
  2. “the entrepreneurship rate—the share of startups and young firms in the [high technology] industry as a whole—has also fallen significantly”; and
  3. “[u]nsurprisingly, there has also been a sharp reduction in early-stage funding for technology startups.” (pp. 46-47).

Those claims, however, are based on cherry-picked evidence.

In support of the first two, the Judiciary Committee report cited a study based on data ending in 2011. As Benedict Evans has observed, “standard industry data shows that startup investment rounds have actually risen at least 4x since then.”

In support of the third claim, the report cited statistics from an article noting that the number and aggregate size of the very smallest venture capital deals—those under $1 million—fell between 2014 and 2018 (after growing substantially from 2008 to 2014). The Judiciary Committee report failed to note, however, the cited article’s observation that small venture deals ($1 million to $5 million) had not dropped and that larger venture deals (greater than $5 million) had grown substantially during the same time period. Nor did the report acknowledge that venture-capital funding has continued to increase since 2018.

Finally, there is also reason to think that AICOA’s passage would harm, not help, the startup environment:

AICOA doesn’t directly restrict startup acquisitions, but the activities it would restrict most certainly do dramatically affect the incentives that drive many startup acquisitions. If a platform is prohibited from engaging in cross-platform integration of acquired technologies, or if it can’t monetize its purchase by prioritizing its own technology, it may lose the motivation to make a purchase in the first place.

Despite the letter authors’ claims, neither a paucity of avenues for “economic, social, and political activity on the internet” nor the general state of market competition in the United States establishes an “urgent need” to re-write the antitrust laws to saddle a small group of firms with unprecedented legal obligations.

Is the Vagueness of AICOA’s Primary Legal Standard a Feature?

AICOA bars covered platforms from engaging in three broad classes of conduct (self-preferencing, discrimination among business users, and limiting business users’ ability to compete) where the behavior at issue would “materially harm competition.” It then forbids several specific business practices, but allows a defendant to avoid liability by proving that its use of the practice would not cause a “material harm to competition.”

Critics have argued that “material harm to competition”—a standard that is not used elsewhere in the antitrust laws—is too indeterminate to provide business planners and adjudicators with adequate guidance. The authors of the pro-AICOA letter, however, maintain that this “different language is a feature, not a bug.”

That is so, the letter authors say, because the language effectively signals to courts and policymakers that antitrust should prohibit more conduct. They explain:

To clarify to courts and policymakers that Congress wants something different (and stronger), new terminology is required. The bill’s language would open up a new space and move beyond the standards imposed by the Sherman Act, which has not effectively policed digital platforms.

Putting aside the weakness of the letter authors’ premise (i.e., that Sherman Act standards have proven ineffective), the legislative strategy they advocate—obliquely signal that you want “change” without saying what it should consist of—is irresponsible and risky.

The letter authors assert two reasons Congress should not worry about enacting a liability standard that has no settled meaning. One is that:

[t]he same judges who are called upon to render decisions under the existing, insufficient, antitrust regime, will also be called upon to render decisions under the new law. They will be the same people with the same worldview.

It is thus unlikely that “outcomes under the new law would veer drastically away from past understandings of core concepts….”

But this claim undermines the argument that a new standard is needed to get the courts to do “something different” and “move beyond the standards imposed by the Sherman Act.” If we don’t need to worry about an adverse outcome from a novel, ill-defined standard because courts are just going to continue applying the standard they’re familiar with, then what’s the point of changing the standard?

A second reason not to worry about the lack of clarity on AICOA’s key liability standard, the letter authors say, is that federal enforcers will define it:

The new law would mandate that the [Federal Trade Commission and the Antitrust Division of the U.S. Department of Justice], the two expert agencies in the area of competition, together create guidelines to help courts interpret the law. Any uncertainty about the meaning of words like ‘competition’ will be resolved in those guidelines and over time with the development of caselaw.

This is no doubt music to the ears of members of Congress, who love to get credit for “doing something” legislatively, while leaving the details to an agency so that they can avoid accountability if things turn out poorly. Indeed, the letter authors explicitly play upon legislators’ unwholesome desire for credit-sans-accountability. They emphasize that “[t]he agencies must [create and] update the guidelines periodically. Congress doesn’t have to do much of anything very specific other than approve budgets; it certainly has no obligation to enact any new laws, let alone amend them.”

AICOA does not, however, confer rulemaking authority on the agencies; it merely directs them to create and periodically update “agency enforcement guidelines” and “agency interpretations” of certain affirmative defenses. Those guidelines and interpretations would not bind courts, which would be free to interpret AICOA’s new standard differently. The letter authors presume that courts would defer to the agencies’ interpretation of the vague standard, and they probably would. But that raises other problems.

For one thing, it reduces certainty, which is likely to chill innovation. Giving the enforcement agencies de facto power to determine and redetermine what behaviors “would materially harm competition” means that the rules are never settled. Administrations differ markedly in their views about what the antitrust laws should forbid, so business planners could never be certain that a product feature or revenue model that is legal today will not be deemed to “materially harm competition” by a future administration with greater solicitude for small rivals and upstarts. Such uncertainty will hinder investment in novel products, services, and business models.

Consider, for example, Google’s investment in the Android mobile operating system. Google makes money from Android—which it licenses to device manufacturers for free—by ensuring that Google’s revenue-generating services (e.g., its search engine and browser) are strongly preferenced on Android products. One administration might believe that this is a procompetitive arrangement, as it creates a different revenue model for mobile operating systems (as opposed to Apple’s generation of revenue from hardware sales), resulting in both increased choice and lower prices for consumers. A subsequent administration might conclude that the arrangement materially harms competition by making it harder for rival search engines and web browsers to gain market share. It would make scant sense for a covered platform to make an investment like Google did with Android if its underlying business model could be upended by a new administration with de facto power to rewrite the law.

A second problem with having the enforcement agencies determine and redetermine what covered platforms may do is that it effectively transforms the agencies from law enforcers into sectoral regulators. Indeed, the letter authors agree that “the ability of expert agencies to incorporate additional protections in the guidelines” means that “the bill is not a pure antitrust law but also safeguards other benefits to consumers.” They tout that “the complementarity between consumer protection and competition can be addressed in the guidelines.”

Of course, to the extent that the enforcement guidelines address concerns besides competition, they will be less useful for interpreting AICOA’s “material harm to competition” standard; they might deem a practice suspect on non-competition grounds. Moreover, it is questionable whether creating a sectoral regulator for five widely diverse firms is a good idea. The history of sectoral regulation is littered with examples of agency capture, rent-seeking, and other public-choice concerns. At a minimum, Congress should carefully examine the potential downsides of sectoral regulation, install protections to mitigate those downsides, and explicitly establish the sectoral regulator.

Will AICOA Break Popular Products and Services?

Many popular offerings by the platforms covered by AICOA involve self-preferencing, discrimination among business users, or one of the other behaviors the bill presumptively bans. Pre-installation of iPhone apps and services like Siri, for example, involves self-preferencing or discrimination among business users of Apple’s iOS platform. But iPhone consumers value having a mobile device that offers extensive services right out of the box. Consumers love that Google’s search result for an establishment offers directions to the place, which involves the preferencing of Google Maps. And consumers positively adore Amazon Prime, which can provide free expedited delivery because Amazon conditions Prime designation on a third-party seller’s use of Amazon’s efficient, reliable “Fulfillment by Amazon” service—something Amazon could not do under AICOA.

The authors of the pro-AICOA letter insist that the law will not ban attractive product features like these. AICOA, they say:

provides a powerful defense that forecloses any thoughtful concern of this sort: conduct otherwise banned under the bill is permitted if it would ‘maintain or substantially enhance the core functionality of the covered platform.’

But the authors’ confidence that this affirmative defense will adequately protect popular offerings is misplaced. The defense is narrow and difficult to mount.

First, it immunizes only those behaviors that maintain or substantially enhance the “core” functionality of the covered platform. Courts would rightly interpret AICOA to give effect to that otherwise unnecessary word, which dictionaries define as “the central or most important part of something.” Accordingly, any self-preferencing, discrimination, or other presumptively illicit behavior that enhances a covered platform’s service but not its “central or most important” functions is not even a candidate for the defense.

Even if a covered platform could establish that a challenged practice would maintain or substantially enhance the platform’s core functionality, it would also have to prove that the conduct was “narrowly tailored” and “reasonably necessary” to achieve the desired end, and, for many behaviors, the “le[ast] discriminatory means” of doing so. That is a remarkably heavy burden, and it beggars belief to suppose that business planners considering novel offerings involving self-preferencing, discrimination, or some other presumptively illicit conduct would feel confident that they could make the required showing. It is likely, then, that AICOA would break existing products and services and discourage future innovation.

Of course, Congress could mitigate this concern by specifying that AICOA does not preclude certain things, such as pre-installed apps or consumer-friendly search results. But the legislation would then lose the support of the many interest groups who want the law to preclude various popular offerings that its text would now forbid. Unlike consumers, who are widely dispersed and difficult to organize, the groups and competitors that would benefit from things like stripped-down smartphones, map-free search results, and Prime-less Amazon are effective lobbyists.

Should the US Follow Europe?

Having responded to criticisms of AICOA, the authors of the pro-AICOA letter go on offense. They assert that enactment of the bill is needed to ensure that the United States doesn’t lose ground to Europe, both in regulatory leadership and in innovation. Observing that the European Union’s Digital Markets Act (DMA) has just become law, the authors write that:

[w]ithout [AICOA], the role of protecting competition and innovation in the digital sector outside China will be left primarily to the European Union, abrogating U.S. leadership in this sector.

Moreover, if Europe implements its DMA and the United States does not adopt AICOA, the authors claim:

the center of gravity for innovation and entrepreneurship [could] shift from the U.S. to Europe, where the DMA would offer greater protections to start ups and app developers, and even makers and artisans, against exclusionary conduct by the gatekeeper platforms.

Implicit in the argument that AICOA is needed to maintain America’s regulatory leadership is the assumption that to lead in regulatory policy is to have the most restrictive rules. The most restrictive regulator will necessarily be the “leader” in the sense that it will be the one with the most control over regulated firms. But leading in the sense of optimizing outcomes and thereby serving as a model for other jurisdictions entails crafting the best policies—those that minimize the aggregate social losses from wrongly permitting bad behavior, wrongly condemning good behavior, and determining whether conduct is allowed or forbidden (i.e., those that “minimize the sum of error and decision costs”). Rarely is the most restrictive regulatory regime the one that optimizes outcomes, and as I have elsewhere explained, the rules set forth in the DMA hardly seem calibrated to do so.

As for “innovation and entrepreneurship” in the technological arena, it would be a seismic shift indeed if the center of gravity were to migrate to Europe, which is currently home to zero of the top 20 global tech companies. (The United States hosts 12; China, eight.)

It seems implausible, though, that imposing a bunch of restrictions on large tech companies that have significant resources for innovation and are scrambling to enter each other’s markets will enhance, rather than retard, innovation. The self-preferencing bans in AICOA and DMA, for example, would prevent Apple from developing its own search engine to compete with Google, as it has apparently contemplated. Why would Apple develop its own search engine if it couldn’t preference it on iPhones and iPads? And why would Google have started its shopping service to compete with Amazon if it couldn’t preference Google Shopping in search results? And why would any platform continually improve to gain more users as it neared the thresholds for enhanced duties under DMA or AICOA? It seems more likely that the DMA/AICOA approach will hinder, rather than spur, innovation.

At the very least, wouldn’t it be prudent to wait and see whether DMA leads to a flourishing of innovation and entrepreneurship in Europe before jumping on the European bandwagon? After all, technological innovations that occur in Europe won’t be available only to Europeans. Just as Europeans benefit from innovation by U.S. firms, American consumers will be able to reap the benefits of any DMA-inspired innovation occurring in Europe. Moreover, if DMA indeed furthers innovation by making it easier for entrants to gain footing, even American technology firms could benefit from the law by launching their products in Europe. There’s no reason for the tech sector to move to Europe to take advantage of a small-business-protective European law.

In fact, the optimal outcome might be to have one jurisdiction in which major tech platforms are free to innovate, enter each other’s markets via self-preferencing, etc. (the United States, under current law) and another that is more protective of upstart businesses that use the platforms (Europe under DMA). The former jurisdiction would create favorable conditions for platform innovation and inter-platform competition; the latter might enhance innovation among businesses that rely on the platforms. Consumers in each jurisdiction, however, would benefit from innovation facilitated by the other.

It makes little sense, then, for the United States to rush to adopt European-style regulation. DMA is a radical experiment. Regulatory history suggests that the sort of restrictiveness it imposes retards, rather than furthers, innovation. But in the unlikely event that things turn out differently this time, little harm would result from waiting to see DMA’s benefits before implementing its restrictive approach. 

Does AICOA Threaten Platforms’ Ability to Moderate Content and Police Disinformation?

The authors of the pro-AICOA letter conclude by addressing the concern that AICOA “will inadvertently make content moderation difficult because some of the prohibitions could be read… to cover and therefore prohibit some varieties of content moderation” by covered platforms.

The letter authors say that a reading of AICOA to prohibit content moderation is “strained.” They maintain that the act’s requirement of “competitive harm” would prevent imposition of liability based on content moderation and that the act is “plainly not intended to cover” instances of “purported censorship.” They further contend that the risk of judicial misconstrual exists with all proposed laws and therefore should not be a sufficient reason to oppose AICOA.

Each of these points is weak. Section 3(a)(3) of AICOA makes it unlawful for a covered platform to “discriminate in the application or enforcement of the terms of service of the covered platform among similarly situated business users in a manner that would materially harm competition.” It is hardly “strained” to reason that this provision is violated when, say, Google’s YouTube selectively demonetizes a business user for content that Google deems harmful or misleading. Or when Apple removes Parler, but not every other violator of service terms, from its App Store. Such conduct could “materially harm competition” by impeding the de-platformed business’ ability to compete with its rivals.

And it is hard to say that AICOA is “plainly not intended” to forbid these acts when a key supporting senator touted the bill as a means of policing content moderation and observed during markup that it would “make some positive improvement on the problem of censorship” (i.e., content moderation) because “it would provide protections to content providers, to businesses that are discriminated against because of the content of what they produce.”

At a minimum, we should expect some state attorneys general to try to use the law to police content moderation they disfavor, and the mere prospect of such legal action could chill anti-disinformation efforts and other forms of content moderation.

Of course, there’s a simple way for Congress to eliminate the risk of what the letter authors deem judicial misconstrual: It could clarify that AICOA’s prohibitions do not cover good-faith efforts to moderate content or police disinformation. Such clarification, however, would kill the bill, as several Republican legislators are supporting the act because it restricts content moderation.

The risk of judicial misconstrual with AICOA, then, is not the sort that exists with “any law, new or old,” as the letter authors contend. “Normal” misconstrual risk exists when legislators try to be clear about their intentions but, because language has its limits, some vagueness or ambiguity persists. AICOA’s architects have deliberately obscured their intentions in order to cobble together enough supporters to get the bill across the finish line.

The one thing that all AICOA supporters can agree on is that they deserve credit for “doing something” about Big Tech. If the law is construed in a way they disfavor, they can always act shocked and blame rogue courts. That’s shoddy, cynical lawmaking.

Conclusion

So, I respectfully disagree with Professors Scott Morton, Salop, and Dinielli on AICOA. There is no urgent need to pass the bill right now, especially as we are on the cusp of seeing an AICOA-like regime put to the test. The bill’s central liability standard is overly vague, and its plain terms would break popular products and services and thwart future innovation. The United States should equate regulatory leadership with the best, not the most restrictive, policies. And Congress should thoroughly debate and clarify its intentions on content moderation before enacting legislation that could upend the status quo on that important matter.

For all these reasons, Congress should reject AICOA. And for the same reasons, a future in which AICOA is adopted is extremely unlikely to resemble the Utopian world that Professors Scott Morton, Salop, and Dinielli imagine.

European Union lawmakers appear close to finalizing a number of legislative proposals that aim to reform the EU’s financial-regulation framework in response to the rise of cryptocurrencies. Prominent within the package are new anti-money laundering and “countering the financing of terrorism” rules (AML/CFT), including an extension of the so-called “travel rule.” The travel rule, which currently applies to wire transfers managed by global banks, would be extended to require crypto-asset service providers to similarly collect and make available details about the originators and beneficiaries of crypto-asset transfers.

The legislative process has proceeded with unusual haste in recent months, which partially explains why legal objections to the proposals have not been adequately addressed. The resulting legislation is fundamentally flawed, to such an extent that some of its key features are clearly invalid under EU primary (treaty) law and liable to be struck down by the Court of Justice of the European Union (CJEU).

In this post, I will offer a brief overview of some of the concerns, which I also discuss in this recent Twitter thread. I focus primarily on the travel rule, which—in the light of EU primary law—constitutes a broad and indiscriminate surveillance regime for personal data. This characterization also applies to most of AML/CFT.

The CJEU, the EU’s highest court, established a number of conditions that such legally mandated invasions of privacy must satisfy in order to be valid under EU primary law (the EU Charter of Fundamental Rights). The legal consequences of invalidity are illustrated well by the Digital Rights Ireland judgment, in which the CJEU struck down an entire piece of EU legislation (the Data Retention Directive). Alternatively, the CJEU could decide to interpret EU law as if it complied with primary law, even if that is contrary to the text.

The Travel Rule in the Transfer of Funds Regulation

The EU travel rule is currently contained in the 2015 Wire Transfer Regulation (WTR). But at the end of June, EU legislators reached a likely final deal on its replacement, the Transfer of Funds Regulation (TFR; see the original proposal from July 2021). I focus here on the TFR, but much of the argument also applies to the older WTR now in force. 

The TFR imposes obligations on payment-system providers and providers of crypto-asset transfers (referred to here, collectively, as “service providers”) to collect, retain, transfer to other service providers, and—in some cases—report to state authorities:

…information on payers and payees, accompanying transfers of funds, in any currency, and the information on originators and beneficiaries, accompanying transfers of crypto-assets, for the purposes of preventing, detecting and investigating money laundering and terrorist financing, where at least one of the payment or crypto-asset service providers involved in the transfer of funds or crypto-assets is established in the Union. (Article 1 TFR)

The TFR’s scope extends to money transfers between bank accounts or other payment accounts, as well as transfers of crypto assets other than peer-to-peer transfers without the involvement of a service provider (Article 2 TFR). Hence, the scope of the TFR includes, but is not limited to, all those who send or receive bank transfers. This constitutes the vast majority of adult EU residents.

The information that service providers are obligated to collect and retain (under Articles 4, 10, 14, and 21 TFR) includes data that allow for the identification of both sides of a transfer of funds (the parties’ names, as well as the address, country, official personal document number, customer identification number, or the sender’s date and place of birth) and for linking their identity with the (payment or crypto-asset) account number or crypto-asset wallet address. The TFR also obligates service providers to collect and retain additional data to verify the accuracy of the identifying information “on the basis of documents, data or information obtained from a reliable and independent source” (Articles 4(4), 7(3), 14(5), 16(2) TFR).

The scope of the obligation to collect and retain verification data is vague and is likely to lead service providers to require their customers to provide copies of passports, national ID documents, bank or payment-account statements, and utility bills, as is the case under the WTR and the 5th AML Directive. Such data are overwhelmingly likely to go beyond information on a customer’s civil identity and will often, if not almost always, allow even sensitive personal data about the customer to be inferred.
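As a rough sketch of the record a service provider would need to assemble and retain for each transfer, consider the following. The field names are my own paraphrase of the TFR articles cited above, not the regulation’s terminology:

```python
# Hypothetical record layout inferred from Articles 4, 10, 14, and 21 TFR;
# field names are illustrative, not taken from the regulation.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PartyIdentity:
    name: str
    account_or_wallet: str                        # payment-account number or wallet address
    address: Optional[str] = None
    official_document_number: Optional[str] = None
    customer_id: Optional[str] = None
    date_and_place_of_birth: Optional[str] = None

@dataclass
class TransferRecord:
    originator: PartyIdentity
    beneficiary: PartyIdentity
    # Verification material "from a reliable and independent source":
    # in practice, likely ID copies, account statements, or utility bills.
    verification_documents: List[str] = field(default_factory=list)
    retention_years: int = 5                      # retained for every customer, indiscriminately
```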

The data-collection and retention obligations in the TFR are general and indiscriminate. The TFR’s data-collection and retention provisions draw no distinction based on the likelihood of a connection with criminal activity, except for verification data in the case of transfers of funds (an exception not applicable to crypto assets). And even that exception (“has reasonable grounds for suspecting money laundering or terrorist financing”) arguably lacks the precision required under CJEU case law.

Analogies with the CJEU’s Passenger Name Records Decision

In late June, following its established approach in similar cases, the CJEU gave its judgment in the Ligue des droits humains case, which challenged the EU and Belgian regimes on passenger name records (PNR). The CJEU decided there that the applicable EU law, the PNR Directive, is valid under EU primary law. But it reached that result by interpreting some of the directive’s provisions in ways contrary to their express language and by deciding that some national legal rules implementing the directive are invalid. Some features of the PNR regime challenged before the court are strikingly similar to features of the TFR regime.

First, just like the TFR, the PNR rules imposed a five-year data-retention period for the data of all passengers, even where there is no “objective evidence capable of establishing a risk that relates to terrorist offences or serious crime having an objective link, even if only an indirect one, with those passengers’ air travel.” The court decided that this was a disproportionate restriction of the rights to privacy and to the protection of personal data under Articles 7-8 of the EU Charter of Fundamental Rights. Instead of invalidating the relevant article of the PNR Directive, the CJEU reinterpreted it as if it only allowed for five-year retention in cases where there is evidence of a relevant connection to criminality.

Applying analogous reasoning to the TFR, which imposes an indiscriminate five-year data-retention period in its Article 21, the conclusion must be that this TFR provision is invalid under Articles 7-8 of the charter. Article 21 TFR may, at minimum, need to be recast to apply only to transaction data for which there is “objective evidence capable of establishing a risk” of a connection to serious crime.

Second, the court considered the issue of government access to data that has already been collected. Under the CJEU’s established interpretation of the EU Charter, “it is essential that access to retained data by the competent authorities be subject to a prior review carried out either by a court or by an independent administrative body.” In the PNR regime, at least some countries (such as Belgium) assigned this role to their “passenger information units” (PIUs). The court noted that a PIU is “an authority competent for the prevention, detection, investigation and prosecution of terrorist offences and of serious crime, and that its staff members may be agents seconded from the competent authorities” (e.g., from police or intelligence authorities). But according to the court:

That requirement of independence means that that authority must be a third party in relation to the authority which requests access to the data, in order that the former is able to carry out the review, free from any external influence. In particular, in the criminal field, the requirement of independence entails that the said authority, first, should not be involved in the conduct of the criminal investigation in question and, secondly, must have a neutral stance vis-à-vis the parties to the criminal proceedings …

The CJEU decided that PIUs do not satisfy this requirement of independence and, as such, cannot decide on government access to the retained data.

The TFR (especially its Article 19 on provision of information) does not provide for prior independent review of access to retained data. To the extent that such review is conducted by Financial Intelligence Units (FIUs) under the AML Directive, concerns arise that are very similar to those the court raised about PIUs under the PNR regime. While Article 32 of the AML Directive requires FIUs to be independent, that does not necessarily mean they are independent in the ways the EU Charter requires, under Articles 7-8, of an authority deciding on access to retained data. For example, the AML Directive does not preclude the possibility of seconding public prosecutors, police, or intelligence officers to FIUs.

It is worth noting that none of the conclusions reached by the CJEU in the PNR case are novel; they are well-grounded in established precedent. 

A General Proportionality Argument

Setting aside specific analogies with previous cases, the TFR clearly has not been accompanied by a more general and fundamental reflection on the proportionality of its basic scheme in the light of the EU Charter. A pressing question is whether the TFR’s far-reaching restrictions of the rights established in Articles 7-8 of the EU Charter (and perhaps other rights, like freedom of expression in Article 11) are strictly necessary and proportionate. 

Arguably, the AML/CFT regime—including the travel rule—is significantly more costly and more rights-restricting than potential alternatives. The basic problem is that there is no reliable data on the relative effectiveness of measures like the travel rule. Defenders of the current AML/CFT regime point to evidence that it contributes to preventing or prosecuting some crime. But that is not the relevant question when it comes to proportionality. The relevant question is whether those measures are as effective as, or more effective than, less costly and more privacy-preserving alternatives. One conservative estimate holds that AML compliance costs in Europe were “120 times the amount successfully recovered from criminals” and exceeded the estimated total of criminal funds (including funds not seized or identified).

The fact that the current AML/CFT regime is a de facto global standard cannot serve as a sufficient justification either, given that EU fundamental-rights law is perfectly comfortable rejecting non-European law-enforcement practices (see the CJEU’s decision in Schrems). The travel rule was unquestioningly imported into EU law from U.S. law (via FATF), where the standards of constitutional privacy protection are very different from those under the EU Charter. The Court of Justice would likely take note of this in any putative challenge to the TFR or other elements of the AML/CFT regime.

Here, I only flag the possibility of a general proportionality challenge. Much more work needs to be done to flesh it out.

Conclusion

Due to the political and resource constraints of the EU legislative process, it is possible that the legislative proposals in the financial-regulation package did not receive sufficient legal scrutiny from the perspective of their compatibility with the EU Charter of Fundamental Rights. This hypothesis would explain the presence of seemingly clear violations, such as the indiscriminate five-year data-retention period. Given that none of the proposals has, as yet, been voted into law, making the legislators aware of the problem may help to address at least some of the issues.

Legal arguments about the AML/CFT regime’s incompatibility with the EU Charter should be accompanied by concrete alternative proposals to achieve the goals of preventing and combating serious crime that, according to the best evidence, the current AML/CFT regime achieves ineffectively. We need more regulatory imagination. For example, one part of the solution may be to properly staff and equip the government agencies tasked with prosecuting financial crime.

But it’s also possible that the proposals, including the TFR, will be adopted broadly without amendment. In that case, the main recourse available to EU citizens (or to any EU government) will be to challenge the legality of the measures before the Court of Justice.

Just three weeks after a draft version of the legislation was unveiled by congressional negotiators, the American Data Privacy and Protection Act (ADPPA) is heading to its first legislative markup, set for tomorrow morning before the U.S. House Energy and Commerce Committee’s Consumer Protection and Commerce Subcommittee.

Though the bill’s legislative future remains uncertain, particularly in the U.S. Senate, it is worth examining how the measure compares with, and could potentially interact with, the comprehensive data-privacy regime promulgated by the European Union’s General Data Protection Regulation (GDPR). A preliminary comparison of the two shows that the ADPPA risks adopting some of the GDPR’s flaws, while adding some entirely new problems.

A common misconception about the GDPR is that it imposed a requirement for “cookie consent” pop-ups that mar the experience of European users of the Internet. In fact, this requirement comes from a different and much older piece of EU law, the 2002 ePrivacy Directive. In most circumstances, the GDPR itself does not require express consent for cookies or other common and beneficial mechanisms to keep track of user interactions with a website. Website publishers could likely rely on one of two lawful bases for data processing outlined in Article 6 of the GDPR:

  • data processing is necessary in connection with a contractual relationship with the user, or
  • “processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party” (unless overridden by interests of the data subject).

For its part, the ADPPA generally adopts the “contractual necessity” basis for data processing, but excludes from it the collection or processing of “information identifying an individual’s online activities over time or across third party websites.” The ADPPA instead classifies such information as “sensitive covered data,” which by default requires the user’s affirmative express consent. It’s difficult to see what benefit users would derive from having to click that they “consent” to features that are clearly necessary for the most basic functionality, such as remaining logged in to a site or adding items to an online shopping cart. The expected result will be many, many more pop-up consent queries, like those that already bedevil European users.
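To make concrete what is at stake, the kind of “clearly necessary” first-party mechanism in question can be as mundane as a session cookie that ties a shopping cart to a browser. Here is a minimal sketch (purely illustrative; the storage and names are my own, not anything prescribed by either law):

```python
# Minimal sketch of a strictly functional first-party session cookie:
# it keeps a shopping cart associated with a browser between requests,
# without tracking the user across third-party sites.
from http.cookies import SimpleCookie
import secrets

carts: dict[str, list[str]] = {}  # session_id -> items (in-memory, illustrative)

def handle_add_to_cart(cookie_header: str, item: str) -> tuple[str, list[str]]:
    cookie = SimpleCookie(cookie_header)
    session_id = cookie["session"].value if "session" in cookie else None
    if session_id is None or session_id not in carts:
        session_id = secrets.token_urlsafe(16)  # fresh, anonymous session
        carts[session_id] = []
    carts[session_id].append(item)
    # Returned as a Set-Cookie header by the web framework in real use.
    return f"session={session_id}; HttpOnly; SameSite=Lax", carts[session_id]

set_cookie, cart = handle_add_to_cart("", "book-123")
```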

Using personal data to create new products

Section 101(a)(1) of the ADPPA expressly allows the use of “covered data” (personal data) to “provide or maintain a specific product or service requested by an individual.” But the legislation is murkier when it comes to using covered data to develop new products. Such use would clearly be allowed only where each data subject could be asked whether they “request” the specific future product (an impractical standard for products that do not yet exist). By contrast, under the GDPR, it is clear that a firm can ask for user consent to use their data to develop future products.

Moving beyond Section 101, we can look to the “general exceptions” in Section 209 of the ADPPA, specifically the exception in Section 209(a)(2):

With respect to covered data previously collected in accordance with this Act, notwithstanding this exception, to perform system maintenance, diagnostics, maintain a product or service for which such covered data was collected, conduct internal research or analytics to improve products and services, perform inventory management or network management, or debug or repair errors that impair the functionality of a service or product for which such covered data was collected by the covered entity, except such data shall not be transferred.

While this provision mentions conducting “internal research or analytics to improve products and services,” it also refers to “a product or service for which such covered data was collected.” The concern here is that this could be interpreted as only allowing “research or analytics” in relation to existing products known to the data subject.

The road ends here for personal data that the firm collects itself. Somewhat paradoxically, the firm could more easily make the case for using data obtained from a third party. Under Section 302(b) of the ADPPA, a firm only has to ensure that it is not processing “third party data for a processing purpose inconsistent with the expectations of a reasonable individual.” Such a relatively broad “reasonable expectations” basis is not available for data collected directly by first-party covered entities.

Under the GDPR, aside from the data subject’s consent, a firm could also rely on its own “legitimate interest” as a lawful basis to process user data to develop new products. It is true, however, that because the controller’s interests must be appropriately weighed against the data subject’s, the “legitimate interest” basis is probably less popular in the EU than alternatives like consent or contractual necessity.

Developing this path in the ADPPA would arguably provide a more sensible basis for reusing data to develop new products. It could be superior even to express consent, which faces problems like “consent fatigue.” Such problems are unlikely to be solved by promulgating detailed rules on “affirmative consent,” as proposed in Section 2 of the ADPPA.

Problems with ‘de-identified data’

Another example of significant confusion in the ADPPA’s basic conceptual scheme is the bill’s notion of “de-identified data.” The drafters seem to have been aiming for a partial exemption from the default data-protection regime for datasets that no longer contain personally identifying information, but that are derived from datasets that once did. Instead of providing such an exemption, however, the rules for de-identified data essentially extend the ADPPA’s scope to nonpersonal data, while also creating a whole new set of problems.

The basic problem is that the definition of “de-identified data” in the ADPPA is not limited to data derived from identifiable data. The definition covers: “information that does not identify and is not linked or reasonably linkable to an individual or a device, regardless of whether the information is aggregated.” In other words, it is simply the complement of “covered data” (personal data): whatever is not “covered data” is “de-identified data.” Even data that are not personally identifiable and were never derived from personally identifiable data still count as “de-identified data.” If this reading is correct, it creates an absurd result that sweeps all information into the scope of the ADPPA.
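To see the definitional problem concretely, consider a minimal sketch of the classification scheme the ADPPA’s text appears to set up (the function names are mine; the logic tracks the quoted definitions):

```python
# Sketch of the ADPPA's apparent two-way classification of data.
# "Covered data" identifies or is reasonably linkable to an individual
# or device; "de-identified data" is defined simply as everything else.

def is_covered(record: dict) -> bool:
    # Stand-in for the statutory test of identifiability/linkability.
    return record.get("reasonably_linkable", False)

def is_de_identified(record: dict) -> bool:
    # As drafted: merely "not covered," with no requirement that the
    # data were ever derived from identifiable data.
    return not is_covered(record)

weather = {"city_avg_temp_c": 21.4, "reasonably_linkable": False}
print(is_de_identified(weather))  # True: aggregate weather statistics,
# never derived from personal data, would still carry the bill's
# "de-identified data" duties under this reading.
```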

For the sake of argument, let’s assume that this confusion can be fixed and that the definition of “de-identified data” is limited to data that are:

  1. derived from identifiable data;
  2. still carrying some possibility of re-identification (weaker than “reasonably linkable”); and
  3. processed by the entity that previously processed the original identifiable data.

Remember that we are talking about data that are not “reasonably linkable to an individual.” Hence, the intent appears to be that the rules on de-identified data would apply to non-personal data that would otherwise not be covered by the ADPPA.

The rationale for this may be that it is difficult, legally and practically, to differentiate between personally identifiable data and data that are not personally identifiable. A good deal of seemingly “anonymous” data may be linked to an individual—e.g., by connecting the dataset at hand with some other dataset.

The case for regulation in an example where a firm clearly dealt with personal data, and then derived some apparently de-identified data from them, may actually be stronger than in the case of a dataset that was never directly derived from personal data. But is that case sufficient to justify the ADPPA’s proposed rules?

The ADPPA imposes several duties on entities dealing with “de-identified data” (that is, all data that are not considered “covered” data):

  1. to take “reasonable measures to ensure that the information cannot, at any point, be used to re-identify any individual or device”;
  2. to publicly commit “in a clear and conspicuous manner—
    1. to process and transfer the information solely in a de-identified form without any reasonable means for re-identification; and
    2. to not attempt to re-identify the information with any individual or device;”
  3. to “contractually obligate[] any person or entity that receives the information from the covered entity to comply with all of the” same rules.

The first duty is superfluous and adds interpretative confusion, given that de-identified data, by definition, are not “reasonably linkable” with individuals.

The second duty—public commitment—unreasonably restricts what can be done with nonpersonal data. Firms may have many legitimate reasons to de-identify data and then re-identify them later. This provision would effectively prohibit firms from pursuing data minimization (resulting in de-identification) if they may at any point in the future need to link the data back to individuals. It seems that the drafters had some very specific (and likely rare) mischief in mind, but ended up prohibiting a vast sphere of innocuous activity.

Note that, for data to become “de-identified data,” they must first be collected and processed as “covered data” in conformity with the ADPPA and then transformed (de-identified) in such a way as to no longer meet the definition of “covered data.” If someone then re-identifies the data, this will again constitute “collection” of “covered data” under the ADPPA. At every point of the process, personally identifiable data is covered by the ADPPA rules on “covered data.”

Finally, the third duty—“share alike” (to “contractually obligate[] any person or entity that receives the information from the covered entity to comply”)—faces a problem very similar to the second duty’s. Under this provision, the only way to preserve a third party’s option to identify the individuals linked to the data is for the third party to receive the data in a personally identifiable form. In other words, the provision makes it impossible to share data in a de-identified form while preserving the possibility of re-identification. The logical approach, and the one that aligns with the principle of data minimization, would be to permit sharing in de-identified form. What the ADPPA does instead is effectively force firms that need to preserve re-identifiability to share the data together with identifying information. This is a truly bizarre result, directly contrary to the principle of data minimization.
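By way of contrast, here is a minimal sketch (my own illustration, not anything contemplated in the bill) of the standard technique the “share alike” duty appears to foreclose: keyed pseudonymization, under which a recipient sees only de-identified records while the original holder retains the sole ability to re-identify them on legitimate request:

```python
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # held only by the original data holder

def pseudonymize(user_id: str) -> str:
    # Deterministic keyed hash: the same user always maps to the same
    # token, but reversal requires the key holder's cooperation.
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

token_to_user: dict[str, str] = {}  # private mapping kept by the holder

def record_for_sharing(user_id: str, payload: dict) -> dict:
    token = pseudonymize(user_id)
    token_to_user[token] = user_id  # holder can re-identify later if needed
    return {"subject": token, **payload}  # recipient never sees the identity

shared = record_for_sharing("alice@example.com", {"purchases": 3})
```

Under the ADPPA’s scheme, the recipient of such records must be contractually bound never to re-identify them, and the sharing firm must have publicly committed to the same; the apparent upshot is that a firm wishing to preserve re-identifiability has no option but to share fully identifiable data instead.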

Conclusion

The basic conceptual structure of the legislation that subcommittee members will take up this week is, to a very significant extent, both confused and confusing. Perhaps in tomorrow’s markup, a more open and detailed discussion of what the drafters were trying to achieve could help to improve the scheme, as it seems that some key provisions of the current draft would lead to absurd results (e.g., those directly contrary to the principle of data minimization).

Given that the GDPR is already a well-known point of reference, including for U.S.-based companies and privacy professionals, the ADPPA’s drafters would do better to reuse the best features of the GDPR’s conceptual structure while cutting its excesses. Reinventing the wheel by proposing new concepts did not work well in this ADPPA draft.

The European Union’s Digital Markets Act (DMA) has been finalized in principle, although some legislative details are still being negotiated. Alas, our earlier worries about user privacy still have not been addressed adequately.

The key rules to examine are the DMA’s interoperability mandates. The most recent DMA text introduced a potentially very risky new kind of compulsory interoperability “of number-independent interpersonal communications services” (e.g., for services like WhatsApp). However, this obligation comes with a commendable safeguard in the form of an equivalence standard: interoperability cannot lower the current level of user security. Unfortunately, the DMA’s other interoperability provisions lack similar security safeguards.

The lack of serious consideration of security issues is perhaps best illustrated by how the DMA might actually preclude makers of web browsers from protecting their users from some of the most common criminal attacks, like phishing.

Key privacy concern: interoperability mandates

The original DMA proposal included several interoperability and data-portability obligations regarding the “core platform services” of platforms designated as “gatekeepers”—i.e., the largest online platforms. Those provisions were changed considerably during the legislative process. Among its other provisions, the most recent (May 11, 2022) version of the DMA includes:

  1. a prohibition on restricting users—“technically or otherwise”—from switching among and subscribing to software and services “accessed using the core platform services of the gatekeeper” (Art 6(6));
  2. an obligation for gatekeepers to allow interoperability with their operating system or virtual assistant (Art 6(7)); and
  3. an obligation “on interoperability of number-independent interpersonal communications services” (Art 7).

To varying degrees, these provisions attempt to safeguard privacy and security interests, but the first two do so in a clearly inadequate way.

First, the Article 6(6) prohibition on restricting users from using third-party software or services “accessed using the core platform services of the gatekeeper” notably applies to web services (web content) that a user can access through the gatekeeper’s web browser (e.g., Safari for iOS). (Web browsers are defined as core platform services in Art 2(2) DMA.)

Given that web content typically is not installed in the operating system but accessed through a browser (i.e., likely “accessed using a core platform service of the gatekeeper”), the earlier “side-loading” provisions (Article 6(4), discussed further below) would not apply here. This leads to what appears to be a significant oversight: gatekeepers would be almost completely barred from protecting their users when those users browse the web, one of the most significant channels of privacy and security risk.

The Federal Bureau of Investigation (FBI) has identified “phishing” as one of the three top cybercrime types, based on the number of victim complaints. A successful phishing attack normally involves a user accessing a website that is impersonating a service the user trusts (e.g., an email account or corporate login). Browser developers can prevent some such attacks, e.g., by keeping “block lists” of websites known to be malicious and warning about, or even preventing, access to such sites. Prohibiting platforms from restricting their users’ access to third-party services would also prohibit this vital cybersecurity practice.
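Mechanically, the practice at issue can be as simple as the following sketch (the block list and URLs here are hypothetical; real browsers consult continuously updated services such as Google Safe Browsing):

```python
from urllib.parse import urlparse

# Hypothetical block list of hosts known to impersonate trusted services.
KNOWN_PHISHING_HOSTS = {"examp1e-bank-login.com", "secure-mail-verify.net"}

def check_navigation(url: str) -> str:
    host = urlparse(url).hostname or ""
    if host in KNOWN_PHISHING_HOSTS:
        # The browser interrupts navigation with a warning page, i.e.,
        # it "restricts" the user's access to a third-party web service.
        return "BLOCK: suspected phishing site"
    return "ALLOW"

print(check_navigation("https://examp1e-bank-login.com/login"))  # BLOCK...
```

It is exactly this kind of interruption that a literal reading of Art 6(6) would seem to prohibit.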

Under Art 6(4), in the case of installed third-party software, the gatekeepers can take:

…measures to ensure that third party software applications or software application stores do not endanger the integrity of the hardware or operating system provided by the gatekeeper, provided that such measures go no further than is strictly necessary and proportionate and are duly justified by the gatekeeper.

The gatekeepers can also apply:

measures and settings other than default settings, enabling end users to effectively protect security in relation to third party software applications or software application stores, provided that such measures and settings go no further than is strictly necessary and proportionate and are duly justified by the gatekeeper.

None of those safeguards—insufficient as they are (see the discussion of Art 6(7) below)—is present in Art 6(6). Worse still, the anti-circumvention rule in Art 13(6) applies here, prohibiting gatekeepers from offering “choices to the end-user in a non-neutral manner.” That is precisely what a web-browser developer does when warning users of security risks or blocking access to websites known to be malicious—e.g., to protect users from phishing attacks.

This concern is not addressed by the general provision in Art 8(1) requiring the gatekeepers to ensure “that the implementation” of the measures under the DMA complies with the General Data Protection Regulation (GDPR), as well as “legislation on cyber security, consumer protection, product safety.”

One concern is that Art 8(1) would not allow gatekeepers to offer a higher standard of user protection than that required by the arguably weak or overly vague existing legislation. Another is that, because the DMA’s rules (including future delegated legislation) are likely to be more specific—in the sense of constituting lex specialis—than the EU’s rules on privacy and security, establishing a coherent legal interpretation that allows gatekeepers to protect their users is likely to be unnecessarily difficult.

Second, the obligation from Art 6(7) for gatekeepers to allow interoperability with their operating system or virtual assistant only includes the first kind of a safeguard from Art 6(4), concerning the risk of compromising “the integrity of the operating system, virtual assistant or software features provided by the gatekeeper.” However, the risks from which service providers aim to protect users are by no means limited to system “integrity.” A user may be a victim of, e.g., a phishing attack that does not explicitly compromise the integrity of the software they used.

Moreover, as in Art 6(4), there is a problem with the “strictly necessary and proportionate” qualification. This standard may be too high, and may push gatekeepers to offer laxer security to avoid liability for adopting measures that the European Commission and the courts might judge as going beyond what is strictly necessary or indispensable.

The relevant recitals from the DMA preamble, instead of aiding in interpretation, add more confusion. The most notorious example is in recital 50, which states that gatekeepers “should be prevented from implementing” measures that are “strictly necessary and proportionate” to effectively protect user security “as a default setting or as pre-installation.” What possible justification can there be for prohibiting providers from setting a “strictly necessary” security measure as a default? We can hope that this manifestly bizarre provision will be corrected in the final text, together with the other issues identified above.

Finally, there is the obligation “on interoperability of number-independent interpersonal communications services” from Art 7. Here, the DMA takes a different and much better approach to safeguarding user privacy and security. Art 7(3) states that:

The level of security, including the end-to-end encryption, where applicable, that the gatekeeper provides to its own end users shall be preserved across the interoperable services.

There may be some concern that the Commission or the courts will not treat this rule with sufficient seriousness. Ensuring that user security is not compromised by interoperability may take a long time and may require excluding many third-party services that had hoped to benefit from this DMA rule. Nonetheless, EU policymakers should resist watering down the standard of equivalence in security levels, even if it renders Art 7 a dead letter for the foreseeable future.

It is also worth noting that there will be no presumption of user opt-in to any interoperability scheme (Art 7(7)-(8)), which means that third-party service providers will not be able to simply “onboard” all users from a gatekeeper’s service without their explicit consent. This is to be commended.

Conclusion

Despite some improvements (the equivalence standard in Art 7(3) DMA), the current DMA language still betrays, as I noted previously, “a policy preference for privileging uncertain and speculative competition gains at the cost of introducing new and clear dangers to information privacy and security.” Jane Bambauer of the University of Arizona Law School came to similar conclusions in her analysis of the DMA, in which she warned:

EU lawmakers should be aware that the DMA is dramatically increasing the risk that data will be mishandled. Nevertheless, even though a new scandal from the DMA’s data interoperability requirement is entirely predictable, I suspect EU regulators will evade public criticism and claim that the gatekeeping platforms are morally and financially responsible.

The DMA’s text is not yet entirely finalized. It may still be possible to extend the approach adopted in Article 7(3) to other privacy-threatening rules, especially in Article 6. Such a requirement that any third-party service providers offer at least the same level of security as the gatekeepers is eminently reasonable and is likely what the users themselves would expect. Of course, there is always a risk that a safeguard of this kind will be effectively nullified in administrative or judicial practice, but this may not be very likely, given the importance that EU courts typically attach to privacy.

Banco Central do Brasil (BCB), Brazil’s central bank, launched a new real-time payment (RTP) system in November 2020 called Pix. Evangelists at the central bank hoped that Pix would offer a low-cost alternative to existing payments systems and would entice some of the country’s tens of millions of unbanked and underbanked adults into the banking system.

A recent review of Pix, published by the Bank for International Settlements, claims that the payment system has achieved these goals and that it is a model for other jurisdictions. However, the BIS review seems to have been written through rose-tinted spectacles. This is perhaps not surprising, given that the lead author runs the division of the central bank that developed Pix. In a critique published this week, I suggest that, when seen in full color, Pix looks a lot less pretty.

Among other things, the BIS review misconstrues the economics of payment networks. By ignoring the two-sided nature of such networks, the authors erroneously claim that payment cards impose a net economic cost. In fact, evidence shows that payment cards generate net benefits: one study put the value they add to the Brazilian economy at 0.17% of GDP.

The report also obscures the costs of the Pix system and fails to explain that, whereas private payment systems must recover their full operational costs, Pix appears to benefit from both direct and indirect subsidies. The direct subsidies come from the BCB, which incurred substantial costs in developing and promoting Pix and, unlike other central banks such as the U.S. Federal Reserve, is not required to recover all of its operational costs. The indirect subsidies come from banks and other payment-service providers (PSPs), many of which have been forced by the BCB to offer Pix to their clients, even though doing so cannibalizes their other payment services, including the interchange fees they earn from payment cards.

Moreover, the BIS review mischaracterizes the role of interchange fees, which are often used to encourage participation in the payment-card network. In the case of debit cards, this often includes covering some or all of the operational costs of bank accounts. The availability of “free” bank accounts with relatively low deposit requirements offers customers incentives to open and maintain accounts. 

While the report notes that Pix has “signed up” 67% of adult Brazilians, it fails to mention that most of these were automatically enrolled by their banks, the majority of which were required by the BCB to adopt Pix. It also fails to mention that 33% of adult Brazilians have not “signed up” to Pix, nor that a recent survey found that more than 20% of adult Brazilians remain unbanked or underbanked, nor that the main reason given for not having a bank account was the cost of such accounts. Moreover, by diverting payments away from debit cards, Pix has reduced interchange fees and thereby reduced the ability of banks and other PSPs to subsidize bank accounts, which might otherwise have increased financial inclusion.  

The BIS review falsely asserts that “Big Tech” payment networks are able to establish and maintain market power. In reality, tech firms operate in highly competitive markets and have little to no market power in payment networks. Nonetheless, the report uses this claim regarding Big Tech’s alleged market power to justify imposing restrictions on the WhatsApp payment system. The irony, of course, is that by moving to prohibit the WhatsApp payment service shortly before the rollout of Pix, the BCB unfairly inhibited competition, effectively giving Pix a monopoly on RTP with the full support of the government. 

In acting as both a supplier of a payment service and the regulator of payment-service providers, the BCB has a massive conflict of interest. Indeed, the BIS itself has recommended that, where such conflicts might exist, it is good practice to clearly separate the regulator from the supplier. Pix, by contrast, was developed and promoted by the same part of the central bank that regulates payments.

Finally, the BIS report also fails to address significant security issues associated with Pix, including a dramatic rise in the number of “lightning kidnappings” in which hostages were forced to send funds to Pix addresses.