Archives For European Union

At the Jan. 26 Policy in Transition forum—the Mercatus Center at George Mason University’s second annual antitrust forum—various former and current antitrust practitioners, scholars, judges, and agency officials held forth on the near-term prospects for the neo-Brandeisian experiment undertaken in recent years by both the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ). In conjunction with the forum, Mercatus also released a policy brief on 2022’s significant antitrust developments.

Below, I summarize some of the forum’s noteworthy takeaways, followed by concluding comments on the current state of the antitrust enterprise, as reflected in forum panelists’ remarks.

Takeaways

    1. The consumer welfare standard is neither a recent nor an arbitrary antitrust-enforcement construct, and it should not be abandoned in order to promote a more “enlightened” interventionist antitrust.

George Mason University’s Donald Boudreaux emphasized in his introductory remarks that the standard goes back to Adam Smith, who noted in “The Wealth of Nations” nearly 250 years ago that the appropriate end of production is the consumer’s benefit. Moreover, American Antitrust Institute President Diana Moss, a leading proponent of more aggressive antitrust enforcement, argued in standalone remarks against abandoning the consumer welfare standard, as it is sufficiently flexible to justify a more interventionist agenda.

    2. The purported economic justifications for a far more aggressive antitrust-enforcement policy on mergers remain unconvincing.

Moss’ presentation expressed skepticism about vertical-merger efficiencies and called for more aggressive challenges to such consolidations. But Boudreaux skewered those arguments in a recent four-point rebuttal at Café Hayek. As he explains, Moss’ call for more vertical-merger enforcement ignores the fact that “no one has stronger incentives than do the owners and managers of firms to detect and achieve possible improvements in operating efficiencies – and to avoid inefficiencies.”

Moss’ complaint about chronic underenforcement mistakes by overly cautious agencies also ignores the fact that there will always be mistakes, and there is no reason to believe “that antitrust bureaucrats and courts are in a position to better predict the future [regarding which efficiencies claims will be realized] than are firm owners and managers.” Moreover, Moss provided “no substantive demonstration or evidence that vertical mergers often lead to monopolization of markets – that is, to industry structures and practices that harm consumers. And so even if vertical mergers never generate efficiencies, there is no good argument to use antitrust to police such mergers.”

And finally, Boudreaux considers Moss’ complaint that a court refused to condemn the AT&T-Time Warner merger, arguing that this does not demonstrate that antitrust enforcement is deficient:

[A]s soon as the . . . merger proved to be inefficient, the parties themselves undid it. This merger was undone by competitive market forces and not by antitrust! (Emphasis in the original.)

    3. The agencies, however, remain adamant that merger law has been badly underenforced. As such, the new leadership plans to charge ahead, challenging more mergers based on mere market structure while paying little heed to efficiency arguments or actual showings of likely future competitive harm.

In her afternoon remarks at the forum, Principal Deputy Assistant U.S. Attorney General for Antitrust Doha Mekki highlighted five major planks of Biden administration merger enforcement going forward.

  • Clayton Act Section 7 is an incipiency statute. Thus, “[w]hen a [mere] change in market structure suggests that a firm will have an incentive to reduce competition, that should be enough [to justify a challenge].”
  • “Once we see that a merger may lead to, or increase, a firm’s market power, only in very rare circumstances should we think that a firm will not exercise that power.”
  • A structural presumption “also helps businesses conform their conduct to the law with more confidence about how the agencies will view a proposed merger or conduct.”
  • Efficiencies defenses will be given short shrift, and perhaps ignored altogether. This is because “[t]he Clayton Act does not ask whether a merger creates a more or less efficient firm—it asks about the effect of the merger on competition. The Supreme Court has never recognized efficiencies as a defense to an otherwise illegal merger.”
  • Merger settlements have often failed to preserve competition, and they will be highly disfavored. Therefore, expect a lot more court challenges to mergers than in recent decades. In short, “[w]e must be willing to litigate. . . . [W]e need to acknowledge the possibility that sometimes a court might not agree with us—and yet go to court anyway.”

Mekki’s comments suggest to me that the soon-to-be-released new draft merger guidelines may emphasize structural market-share tests, generally reject efficiencies justifications, and eschew the economic subtleties found in the current guidelines.

    4. The agencies—and the FTC, in particular—have serious institutional problems that undermine their effectiveness and risk a loss of credibility before the courts in the near future.

In his address to the forum, former FTC Chairman Bill Kovacic lamented the inefficient limitations on reasoned FTC deliberations imposed by the Sunshine Act, which chills informal communications among commissioners. He also pointed to the United States' peculiar status as the only jurisdiction with two enforcers holding duplicative antitrust authority, and lamented the resulting lack of policy coherence, which reflects imperfect coordination between the agencies.

Perhaps most importantly, Kovacic raised the specter of the FTC losing credibility in a possible world where Humphrey’s Executor is overturned (see here) and the commission is granted little judicial deference. He suggested taking lessons on policy planning and formulation from foreign enforcers—the United Kingdom’s Competition and Markets Authority, in particular. He also decried agency officials’ decisions to belittle prior administrations’ enforcement efforts, seeing it as detracting from the international credibility of U.S. enforcement.

    5. The FTC is embarking on a novel interventionist path at odds with decades of enforcement policy.

In luncheon remarks, Commissioner Christine S. Wilson lamented the lack of collegiality and consultation within the FTC. She warned that far-reaching rulemakings and other new interventionist initiatives may yield a backlash that undermines the institution.

Following her presentation, a panel of FTC experts discussed several aspects of the commission’s “new interventionism.” According to one panelist, the FTC’s new Section 5 Policy Statement on Unfair Methods of Competition (which ties “unfairness” to arbitrary and subjective terms) “will not survive in” (presumably, will be given no judicial deference by) the courts. Another panelist bemoaned rule-of-law problems arising from FTC actions, called for consistency in FTC and DOJ enforcement policies, and warned that the new merger guidelines will represent a “paradigm shift” that generates more business uncertainty.

The panel expressed doubts about the legal prospects for a proposed FTC rule on noncompete agreements, and noted that constitutional challenges to the agency’s authority may engender additional difficulties for the commission.

    6. The DOJ is greatly expanding its willingness to litigate and is taking actions that may undermine its credibility in court.

Assistant U.S. Attorney General for Antitrust Jonathan Kanter has signaled a disinclination to settle, as well as an eagerness to litigate large numbers of cases (toward that end, he has hired a huge number of litigators). One panelist noted that, given this posture from the DOJ, there is a risk that judges may come to believe that the department’s litigation decisions are not well-grounded in the law and the facts. The business community may also have a reduced willingness to “buy in” to DOJ guidance.

Panelists also expressed doubts about the wisdom of the DOJ bringing more “criminal Sherman Act Section 2” cases. The Sherman Act is a criminal statute, but such cases raise concerns under criminal law’s “beyond a reasonable doubt” standard, as well as due-process concerns. Panelists also warned that, if the new merger guidelines are “unsound,” they may detract from the DOJ’s credibility in federal court.

    7. International antitrust developments have introduced costly new ex ante competition-regulation and enforcement-coordination problems.

As one panelist explained, the European Union’s implementation of the new Digital Markets Act (DMA) will harmfully undermine market forces. The DMA is a form of ex ante regulation—primarily applicable to large U.S. digital platforms—that will harmfully interject bureaucrats into network planning and design. The DMA will lead to inefficiencies, market fragmentation, and harm to consumers, and will inevitably have spillover effects outside Europe.

Even worse, the DMA will not displace the application of EU antitrust law, but will merely add to its burdens. Regrettably, the DMA’s ex ante approach is being imitated by many other enforcement regimes, and the U.S. government tacitly supports it. The DMA has not been included in the U.S.-EU joint competition dialogue, an omission that risks the dialogue’s failure. Canada and the U.K. should also be added to the dialogue.

Other International Concerns

The international panelists also noted that there is an unfortunate lack of convergence on antitrust procedures. Furthermore, different jurisdictions manifest substantial inconsistencies in their approaches to multinational merger analysis, where better coordination is needed. There is a special problem in the areas of merger review and of criminal leniency for price fixers: when multiple jurisdictions need to “sign off” on an enforcement matter, the “most restrictive” jurisdiction has an effective veto.

Finally, former Assistant U.S. Attorney General for Antitrust James Rill—perhaps the most influential promoter of the adoption of sound antitrust laws worldwide—closed the international panel with a call for enhanced transnational cooperation. He highlighted the importance of global convergence on sound antitrust procedures, emphasizing due process. He also advocated bolstering International Competition Network (ICN) and OECD Competition Committee convergence initiatives, and explained that greater transparency in agency-enforcement actions is warranted. In that regard, Rill said, ICN nongovernmental advisers should be given a greater role.

Conclusion

Taken as a whole, the forum’s various presentations painted a rather gloomy picture of the short-term prospects for sound, empirically based, economics-centric antitrust enforcement.

In the United States, the enforcement agencies are committed to far more aggressive antitrust enforcement, particularly with respect to mergers. The agencies’ new approach downplays efficiencies and is quick to presume that broad categories of business conduct are anticompetitive, relying far less on case-specific economic analysis.

The outlook is also bad overseas, as European Union enforcers are poised to implement new ex ante regulation of competition by large platforms as an addition to—not a substitute for—established burdensome antitrust enforcement. Most foreign jurisdictions appear to be following the European lead, and the U.S. agencies are doing nothing to discourage them. Indeed, they appear to fully support the European approach.

The consumer welfare standard, which until recently was the stated touchstone of American antitrust enforcement—and was given at least lip service in Europe—has more or less been set aside. The one saving grace in the United States is that the federal courts may put a halt to the agencies’ overweening ambitions, but that will take years. In the meantime, consumer welfare will suffer and welfare-enhancing business conduct will be disincentivized. The EU courts also may place a minor brake on European antitrust expansionism, but that is less certain.

Recall, however, that when evils flew out of Pandora’s box, hope remained. Let us hope, then, that the proverbial worm will turn, and that new leadership—inspired by hopeful and enlightened policy advocates—will restore principled antitrust grounded in the promotion of consumer welfare.

Just before Christmas, the European Commission published a draft implementing regulation (DIR) of the Digital Markets Act (DMA), establishing procedural rules that, in the Commission’s own words, seek to bolster “legal certainty,” “due process,” and “effectiveness” under the DMA. The rights of defense laid down in the draft are, alas, anemic. In the long run, this will leave the Commission’s DMA-enforcement decisions open to challenge on procedural grounds before the Court of Justice of the European Union (CJEU).

This is a loss for due process, for third parties seeking to rely on the Commission’s decisions, and for the effectiveness of the DMA itself.

Detailed below are some of the significant problems with the DIR, as well as suggestions for how to address them. Many of these same issues were highlighted in the comments submitted by likely gatekeepers, law firms, and academics during the open-consultation period. You can also read the brief explainer on the DIR that Dirk Auer and I wrote here.

Access to File

The DIR establishes that parties have the right to access the files that the Commission used to issue its preliminary findings. But if parties wish to access other documents in the Commission’s file, they will need to submit a “substantiated request.” Among the problems with this approach is that the documents cited in the Commission’s preliminary findings will be of limited use to defendants, as they are likely to be those used to establish an infringement, and thus unlikely to be exculpatory.

Moreover, as the CJEU has stated, it should not be up to the Commission alone to decide whether to disclose documents in the file. The Commission can preclude documents unrelated to the statement of objections from the administrative procedure, but that isn’t the same as excluding documents that aren’t mentioned in the statement of objections. After all, evidence might be irrelevant for the prosecution but relevant for the defense.

Parties’ right to be heard is unnecessarily circumscribed by the requirement that they “duly substantiate why access to a specific document or part thereof is necessary to exercise its right to be heard.” A party might be hard-pressed to argue convincingly that it needs access to a document based solely on a terse and vague description in the Commission’s file. More generally, why would a document be in the Commission’s file if it is not relevant to the case? The right to be heard cannot be respected where access to information is prohibited.

Solution: The DIR should allow gatekeepers full access to the Commission’s file. This is the norm in antitrust and merger proceedings in the EU where:

undertakings or associations of undertakings that receive a Statement of Objections have the right to see all the evidence, whether it is incriminating or exonerating, in the Commission’s investigation file. [bold in original]

There is little sense in deviating from this standard in DMA proceedings.

No Role for the Hearing Officer

The DIR does not spell out a role for the hearing officer, a particularly jarring omission given the Commission’s history of acting as “judge, jury and executioner” in competition-law proceedings (see here, here and here). Hearing officers are a staple in antitrust (here and here), as well as in trade proceedings more generally, where their role is to enhance impartiality and objectivity by, e.g., resolving disputes over access to certain documents. As Alfonso Lamadrid has noted, the obvious inference is that DMA proceedings before the Commission are to be less impartial and objective.

Solution: Grant the hearing officer a role in, at the very least, resolving access-to-file and confidentiality disputes.

Cap on the Length of Responses

The DIR establishes a 50-page limit on parties’ responses to the Commission’s preliminary findings. Of course, no such cap is imposed on the Commission in issuing its preliminary findings, designation decisions, and other decisions under the DMA. This imbalance between the Commission’s and respondents’ duties plainly violates the principle of equality of arms—a fundamental element of the right to a fair trial under Article 47 of the EU Charter of Fundamental Rights.

An arbitrary page limit also means that the Commission may not take all relevant facts and evidence into account in its decisions, which will be based largely on the preliminary findings and the related response. This lays the groundwork for subsequent challenges before the courts.

Solution: Either remove the cap on responses to preliminary findings or impose a similar limit on the Commission in issuing those findings.

A ‘Succinct’ Right to Speak

The DIR does not contemplate granting parties oral hearings to explain their defense more fully. Oral hearings are particularly important in cases involving complex and technical arguments and evidence.

While the right to a fair trial does not require oral hearings to be held in every case, “refusing to hold an oral hearing may be justified only in rare cases.” Given that, under the DMA, companies can be fined as much as 20% of their worldwide turnover, these proceedings involve severe financial penalties of a criminal or quasi-criminal nature (here and here), and are thus unlikely to qualify as such rare cases (here).

Solution: Grant parties the ability to request an oral hearing following the preliminary findings.

Legal Uncertainty

As one commenter put it, “the document is striking for what it leaves out.”  As Dirk Auer and I point out, the DIR leaves unanswered such questions as the precise role of third parties in DMA processes; the role of the advisory committee in decision making; whether the college of commissioners or just one commissioner is the ultimate decision maker; whether national authorities will be able to access data gathered by the Commission; and whether there is a role for the European Competition Network in coordinating and allocating cases between the EU and the member states.

Granted, not all of these questions needed to be answered in the DIR (although some—like the role of third parties—arguably should have been). Still, the sooner they are resolved, the better for everyone. 

Solution: Clarify the above questions—either with the final version of the implementing regulation or soon thereafter—in a manual of procedures or best-practice guidelines, as appropriate.

Conclusion

Unless substantive changes are made, the DIR in its current form risks running afoul of a well-established line of jurisprudence highlighting the importance of fundamental rights in antitrust law, which is guaranteed to apply in DMA proceedings as well. Among these is the general principle that judicial and administrative promptness cannot be attained at the expense of parties’ right of defense (here). Ignoring this would not only be a loss for the rights of defense in the EU, but would also undermine the effectiveness of the DMA itself, staining the Commission’s credibility in the process.

The €390 million fine that the Irish Data Protection Commission (DPC) levied last week against Meta marks both the latest skirmish in the ongoing regulatory war on private firms’ use of data and a major blow to the ad-driven business model that underlies most online services.

More specifically, the DPC was forced by the European Data Protection Board (EDPB) to find that Meta violated the General Data Protection Regulation (GDPR) when it relied on its contractual relationship with Facebook and Instagram users as the basis to employ user data in personalized advertising. 

Meta can still argue that it has other bases on which to rely in order to make use of user data, but a larger issue is at play: the decision finds both that using user data for personalized advertising is not “necessary” to the relationship between a service and its users, and that privacy regulators are in a position to make such an assessment.

More broadly, the case also underscores that there is no consensus within the European Union on the broad interpretation of the GDPR preferred by some national regulators and the EDPB.

The DPC Decision

The core disagreement between the DPC and Meta, on the one hand, and some other EU privacy regulators, on the other, is whether it is lawful for Meta to treat the use of user data for personalized advertising as “necessary for the performance of” the contract between Meta and its users. The Irish DPC accepted Meta’s arguments that the nature of Facebook and Instagram is such that it is necessary to process personal data this way. The EDPB took the opposite approach and used its powers under the GDPR to direct the DPC to issue a decision contrary to the DPC’s own determination. Notably, the DPC announced that it is considering challenging the EDPB’s involvement before the EU Court of Justice as an unlawful overreach of the board’s powers.

In the EDPB’s view, it is possible for Meta to offer Facebook and Instagram without personalized advertising. And to the extent that this is possible, Meta cannot rely on the “necessity for the performance of a contract” basis for data processing under Article 6 of the GDPR. Instead, Meta in most cases should rely on the “consent” basis, involving an explicit “yes/no” choice. In other words, Facebook and Instagram users should be explicitly asked if they consent to their data being used for personalized advertising. If they decline, then under this rationale, they would be free to continue using the service without personalized advertising (but with, e.g., contextual advertising). 

Notably, the decision does not mandate a particular contractual basis for processing, but only invalidates “contractual necessity” for personalized advertising. Indeed, Meta believes it has other avenues for continuing to process user data for personalized advertising without depending on a “consent” basis. Of course, only time will tell if this reasoning is accepted. Nonetheless, the EDPB’s underlying animus toward the “necessity” of personalized advertising remains concerning.

What Is ‘Necessary’ for a Service?

The EDPB’s position is of a piece with a growing campaign against firms’ use of data more generally. But as in similar complaints against data use, the demonstrated harms here are overstated, while the possibility that benefits might flow from the use of data is assumed to be zero. 

How does the EDPB know that it is not necessary for Meta to rely on personalized advertising? And what does “necessity” mean in this context? According to the EDPB’s own guidelines, a business “should be able to demonstrate how the main subject-matter of the specific contract with the data subject cannot, as a matter of fact, be performed if the specific processing of the personal data in question does not occur.” Therefore, if it is possible to distinguish various “elements of a service that can in fact reasonably be performed independently of one another,” then even if some processing of personal data is necessary for some elements, this cannot be used to bundle those with other elements and create a “take it or leave it” situation for users. The EDPB stressed that:

This assessment may reveal that certain processing activities are not necessary for the individual services requested by the data subject, but rather necessary for the controller’s wider business model.

This stilted view of what counts as a “service” completely fails to acknowledge that “necessary” must mean more than merely technologically possible. Any service offering faces both technical limitations as well as economic limitations. What is technically possible to offer can also be so uneconomic in some forms as to be practically impossible. Surely, there are alternatives to personalized advertising as a means to monetize social media, but determining what those are requires a great deal of careful analysis and experimentation. Moreover, the EDPB’s suggested “contextual advertising” alternative is not obviously superior to the status quo, nor has it been demonstrated to be economically viable at scale.  

Thus, even though it does not strictly follow from the guidelines, the decision in the Meta case suggests that, in practice, the EDPB pays little attention to the economic reality of the contractual relationship between service providers and their users, instead adopting an artificial, formalistic approach. It is doubtful whether the EDPB engaged in the kind of robust economic analysis of Facebook and Instagram that would allow it to reach a conclusion as to whether those services are economically viable without the use of personalized advertising.

There is, however, a key institutional point to be made here: privacy regulators are ill-equipped to conduct this kind of analysis, which arguably should lead to significant deference to the observed choices of businesses and their customers.

Conclusion

A service’s use of its users’ personal data—whether for personalized advertising or other purposes—can be a problem, but it can also generate benefits. There is no shortcut to determine, in any given situation, whether the costs of a particular business model outweigh its benefits. Critically, the balance of costs and benefits from a business model’s technological and economic components is what truly determines whether any specific component is “necessary.” In the Meta decision, the EDPB got it wrong by refusing to incorporate the full economic and technological components of the company’s business model. 

European Union officials insist that the executive order President Joe Biden signed Oct. 7 to implement a new U.S.-EU data-privacy framework must address European concerns about U.S. agencies’ surveillance practices. Awaited since March, when U.S. and EU officials reached an agreement in principle on a new framework, the order is intended to replace an earlier data-privacy framework that was invalidated in 2020 by the Court of Justice of the European Union (CJEU) in its Schrems II judgment.

This post is the first in what will be a series of entries examining whether the new framework satisfies the requirements of EU law or, as some critics argue, whether it does not. The critics include Max Schrems’ organization NOYB (for “none of your business”), which has announced that it “will likely bring another challenge before the CJEU” if the European Commission officially decides that the new U.S. framework is “adequate.” In this introduction, I will highlight the areas of contention based on NOYB’s “first reaction.”

The overarching legal question that the European Commission (and likely also the CJEU) will need to answer, as spelled out in the Schrems II judgment, is whether the United States “ensures an adequate level of protection for personal data essentially equivalent to that guaranteed in the European Union by the GDPR, read in the light of Articles 7 and 8 of the [EU Charter of Fundamental Rights].” Importantly, as Theodore Christakis, Kenneth Propp, and Peter Swire point out, “adequate level” and “essential equivalence” of protection do not necessarily mean identical protection, either substantively or procedurally. The precise degree of flexibility remains an open question, however, and one that the EU Court may need to clarify to a much greater extent.

Proportionality and Bulk Data Collection

Under Article 52(1) of the EU Charter of Fundamental Rights, restrictions of the right to privacy must meet several conditions. They must be “provided for by law” and “respect the essence” of the right. Moreover, “subject to the principle of proportionality, limitations may be made only if they are necessary” and meet one of the objectives recognized by EU law or “the need to protect the rights and freedoms of others.”

As NOYB has acknowledged, the new executive order supplemented the phrasing “as tailored as possible” present in 2014’s Presidential Policy Directive on Signals Intelligence Activities (PPD-28) with language explicitly drawn from EU law: mentions of the “necessity” and “proportionality” of signals-intelligence activities related to “validated intelligence priorities.” But NOYB counters:

However, despite changing these words, there is no indication that US mass surveillance will change in practice. So-called “bulk surveillance” will continue under the new Executive Order (see Section 2 (c)(ii)) and any data sent to US providers will still end up in programs like PRISM or Upstream, despite of the CJEU declaring US surveillance laws and practices as not “proportionate” (under the European understanding of the word) twice.

It is true that the Schrems II Court held that U.S. law and practices do not “[correlate] to the minimum safeguards resulting, under EU law, from the principle of proportionality.” But it is crucial to note the specific reasons the Court gave for that conclusion. Contrary to what NOYB suggests, the Court did not simply state that bulk collection of data is inherently disproportionate. Instead, the reasons it gave were that “PPD-28 does not grant data subjects actionable rights before the courts against the US authorities” and that, under Executive Order 12333, “access to data in transit to the United States [is possible] without that access being subject to any judicial review.”

CJEU case law does not support the idea that bulk collection of data is inherently disproportionate under EU law; bulk collection may be proportionate, taking into account the procedural safeguards and the magnitude of the interests protected in a given case. (For another discussion of safeguards, see the CJEU’s decision in La Quadrature du Net.) Further complicating the legal analysis here is that, as mentioned, it is far from obvious that EU law requires that foreign countries offer the same procedural or substantive safeguards that are applicable within the EU.

Effective Redress

The Court’s Schrems II conclusion therefore primarily concerns the effective redress available to EU citizens against potential restrictions of their right to privacy from U.S. intelligence activities. The new two-step system proposed by the Biden executive order includes creation of a Data Protection Review Court (DPRC), which would be an independent review body with power to make binding decisions on U.S. intelligence agencies. In a comment pre-dating the executive order, Max Schrems argued that:

It is hard to see how this new body would fulfill the formal requirements of a court or tribunal under Article 47 CFR, especially when compared to ongoing cases and standards applied within the EU (for example in Poland and Hungary).

This comment raises two distinct issues. First, Schrems seems to suggest that an adequacy decision can only be granted if the available redress mechanism satisfies the requirements of Article 47 of the Charter. But this is a hasty conclusion. The CJEU’s phrasing in Schrems II is more cautious:

…Article 47 of the Charter, which also contributes to the required level of protection in the European Union, compliance with which must be determined by the Commission before it adopts an adequacy decision pursuant to Article 45(1) of the GDPR

In arguing that Article 47 “also contributes to the required level of protection,” the Court is not saying that it determines the required level of protection. This is potentially significant, given that the standard of adequacy is “essential equivalence,” not procedural and substantive identity. Moreover, the Court did not say that the Commission must determine compliance with Article 47 itself, but with the “required level of protection” (which, again, must be “essentially equivalent”).

Second, there is the related but distinct question of whether the redress mechanism is effective under the applicable standard of “required level of protection.” Christakis, Propp, and Swire offered a helpful analysis suggesting that it is, considering the proposed DPRC’s independence, effective investigative powers, and authority to issue binding determinations. I will offer a more detailed analysis of this point in future posts.

Finally, NOYB raised a concern that “judgment by ‘Court’ [is] already spelled out in Executive Order.” This concern seems to be based on the view that a decision of the DPRC (“the judgment”) and what the DPRC communicates to the complainant are the same thing; in other words, that the legal effects of a DPRC decision are exhausted by providing the individual with the neither-confirm-nor-deny statement set out in Section 3 of the executive order. This is clearly incorrect: the DPRC has the power to issue binding directions to intelligence agencies. The executive order predetermines only the information to be provided to the complainant, not the DPRC’s actual binding determinations.

What may call for closer consideration are issues of access to information and data. For example, in La Quadrature du Net, the CJEU looked at the difficult problem of notification of persons whose data has been subject to state surveillance, requiring individual notification “only to the extent that and as soon as it is no longer liable to jeopardise” the law-enforcement tasks in question. Given the “essential equivalence” standard applicable to third-country adequacy assessments, however, it does not automatically follow that individual notification is required in that context.

Moreover, it also does not necessarily follow that adequacy requires that EU citizens have a right to access the data processed by foreign government agencies. The fact that there are significant restrictions on rights to information and to access in some EU member states, though not definitive (after all, those countries may be violating EU law), may be instructive for the purposes of assessing the adequacy of data protection in a third country, where EU law requires only “essential equivalence.”

Conclusion

There are difficult questions of EU law that the European Commission will need to address in the process of deciding whether to issue a new adequacy decision for the United States. It is also clear that an affirmative decision from the Commission will be challenged before the CJEU, although the arguments for such a challenge are not yet well-developed. In future posts I will provide more detailed analysis of the pivotal legal questions. My focus will be to engage with the forthcoming legal analyses from Schrems and NOYB and from other careful observers.

[This post is a contribution to Truth on the Market’s continuing digital symposium “FTC Rulemaking on Unfair Methods of Competition.” You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

Federal Trade Commission (FTC) Chair Lina Khan has just sent her holiday wishlist to Santa Claus. It comes in the form of a policy statement on unfair methods of competition (UMC) that the FTC approved last week by a 3-1 vote. If there’s anything to be gleaned from the document, it’s that Khan and the agency’s majority bloc wish they could wield the same powers as Margrethe Vestager does in the European Union. Luckily for consumers, U.S. courts are unlikely to oblige.

Signed by the commission’s three Democratic commissioners, the UMC policy statement contains language that would be completely at home in a decision of the European Commission. It purports to reorient UMC enforcement (under Section 5 of the FTC Act) around typically European concepts, such as “competition on the merits.” This is an unambiguous repudiation of the rule of reason and, with it, the consumer welfare standard.

Unfortunately for its authors, these European-inspired aspirations are likely to fall flat. For a start, the FTC almost certainly does not have the power to enact such sweeping changes. More fundamentally, these concepts have been tried in the EU, where they have proven to be largely unworkable. On the one hand, critics (including the European judiciary) have excoriated the European Commission for its often economically unsound policymaking—enabled by the use of vague standards like “competition on the merits.” On the other hand, the Commission paradoxically believes that its competition powers are insufficient, creating the need for even stronger powers. The recently passed Digital Markets Act (DMA) is designed to fill this need.

As explained below, there is thus every reason to believe the FTC’s UMC statement will ultimately go down as a mistake, brought about by the current leadership’s hubris.

A Statement Is Just That

The first big obstacle to the FTC’s lofty ambitions is that its leadership does not have the power to rewrite either the FTC Act or courts’ interpretation of it. The agency’s leadership understands this much. And with that in mind, they ostensibly couch their statement in the case law of the U.S. Supreme Court:

Consistent with the Supreme Court’s interpretation of the FTC Act in at least twelve decisions, this statement makes clear that Section 5 reaches beyond the Sherman and Clayton Acts to encompass various types of unfair conduct that tend to negatively affect competitive conditions.

It is telling, however, that the cases cited by the agency—in a naked attempt to do away with economic analysis and the consumer welfare standard—are all at least 40 years old. Antitrust and consumer-protection laws have obviously come a long way since then, but none of that is mentioned in the statement. Inconvenient case law is simply shrugged off. To make matters worse, even the cases the FTC cites provide, at best, exceedingly weak support for its proposed policy.

For instance, as Commissioner Christine Wilson aptly notes in her dissenting statement, “the policy statement ignores precedent regarding the need to demonstrate anticompetitive effects.” Chief among these is the Boise Cascade Corp. v. FTC case, where the 9th U.S. Circuit Court of Appeals rebuked the FTC for failing to show actual anticompetitive effects:

In truth, the Commission has provided us with little more than a theory of the likely effect of the challenged pricing practices. While this general observation perhaps summarizes all that follows, we offer the following specific points in support of our conclusion.

There is a complete absence of meaningful evidence in the record that price levels in the southern plywood industry reflect an anticompetitive effect.

In short, the FTC’s statement is just that—a statement. Gus Hurwitz summarized this best in his post:

Today’s news that the FTC has adopted a new UMC Policy Statement is just that: mere news. It doesn’t change the law. It is non-precedential and lacks the force of law. It receives the benefit of no deference. It is, to use a term from the consumer-protection lexicon, mere puffery.

Lina’s European Dream

But let us imagine, for a moment, that the FTC has its way and courts go along with its policy statement. Would this be good for the American consumer? In order to answer this question, it is worth looking at competition enforcement in the European Union.

There are, indeed, striking similarities between the FTC’s policy statement and European competition law. Consider the resemblance between the following quotes, drawn from the FTC’s policy statement (“A” in each example) and from the European competition sphere (“B” in each example).

Example 1 – Competition on the merits and the protection of competitors:

A. The method of competition must be unfair, meaning that the conduct goes beyond competition on the merits.… This may include, for example, conduct that tends to foreclose or impair the opportunities of market participants, reduce competition between rivals, limit choice, or otherwise harm consumers. (here)

B. The emphasis of the Commission’s enforcement activity… is on safeguarding the competitive process… and ensuring that undertakings which hold a dominant position do not exclude their competitors by other means than competing on the merits… (here)

Example 2 – Proof of anticompetitive harm:

A. “Unfair methods of competition” need not require a showing of current anticompetitive harm or anticompetitive intent in every case. … [T]his inquiry does not turn to whether the conduct directly caused actual harm in the specific instance at issue. (here)

B. The Commission cannot be required… systematically to establish a counterfactual scenario…. That would, moreover, oblige it to demonstrate that the conduct at issue had actual effects, which…  is not required in the case of an abuse of a dominant position, where it is sufficient to establish that there are potential effects. (here)

Example 3 – Multiple goals:

A. Given the distinctive goals of Section 5, the inquiry will not focus on the “rule of reason” inquiries more common in cases under the Sherman Act, but will instead focus on stopping unfair methods of competition in their incipiency based on their tendency to harm competitive conditions. (here)

B. In its assessment the Commission should pursue the objectives of preserving and fostering innovation and the quality of digital products and services, the degree to which prices are fair and competitive, and the degree to which quality or choice for business users and for end users is or remains high. (here)

Beyond their cosmetic resemblances, these examples reflect a deeper similarity. The FTC is attempting to introduce three core principles that also undergird European competition enforcement. The first is that enforcers should protect “the competitive process” by ensuring firms compete “on the merits,” rather than pursue a more consequentialist goal like the consumer welfare standard (which essentially asks how a given practice affects economic output). The second is that enforcers should not be required to establish that conduct actually harms consumers. Instead, they need only show that such an outcome is (or will be) possible. The third principle is that competition policies pursue multiple, sometimes conflicting, goals.

In short, the FTC is trying to roll back U.S. enforcement to a bygone era predating the emergence of the consumer welfare standard (which is somewhat ironic for the agency’s progressive leaders). And this vision of enforcement is infused with elements that appear to be drawn directly from European competition law.

Europe Is Not the Land of Milk and Honey

All of this might not be so problematic if the European model of competition enforcement that the FTC now seeks to emulate were an unmitigated success, but that could not be further from the truth. As Geoffrey Manne, Sam Bowman, and I argued in a recently published paper, the European model has several shortcomings that militate against emulating it (the following quotes are drawn from that paper). These problems would almost certainly arise if the FTC’s statement were blessed by courts in the United States.

For a start, the more open-ended nature of European competition law makes it highly vulnerable to political interference. This is notably due to its multiple, vague, and often conflicting goals, such as the protection of the “competitive process”:

Because EU regulators can call upon a large list of justifications for their enforcement decisions, they are free to pursue cases that best fit within a political agenda, rather than focusing on the limited practices that are most injurious to consumers. In other words, there is largely no definable set of metrics to distinguish strong cases from weak ones under the EU model; what stands in its place is political discretion.

Politicized antitrust enforcement might seem like a great idea when your party is in power but, as Milton Friedman wisely observed, the mark of a strong system of government is that it operates well with the wrong person in charge. With this in mind, the FTC’s current leadership would do well to consider what their political opponents might do with these broad powers—such as using Section 5 to prevent online platforms from moderating speech.

A second important problem with the European model is that, because of its competitive-process goal, it does not adequately distinguish between exclusion resulting from superior efficiency and anticompetitive foreclosure:

By pursuing a competitive process goal, European competition authorities regularly conflate desirable and undesirable forms of exclusion precisely on the basis of their effect on competitors. As a result, the Commission routinely sanctions exclusion that stems from an incumbent’s superior efficiency rather than welfare-reducing strategic behavior, and routinely protects inefficient competitors that would otherwise rightly be excluded from a market.

This vastly enlarges the scope of potential antitrust liability, leading to risks of false positives that chill innovative behavior and create nearly unwinnable battles for targeted firms, while increasing compliance costs because of reduced legal certainty. Ultimately, this may hamper technological evolution and protect inefficient firms whose eviction from the market is merely a reflection of consumer preferences.

Finally, the European model results in enforcers having more discretion and enjoying greater deference from the courts:

[T]he EU process is driven by a number of laterally equivalent, and sometimes mutually exclusive, goals.… [A] large problem exists in the discretion that this fluid arrangement of goals yields.

The Microsoft case illustrates this problem well. In Microsoft, the Commission could have chosen to base its decision on a number of potential objectives. It notably chose to base its findings on the fact that Microsoft’s behavior reduced “consumer choice.” The Commission, in fact, discounted arguments that economic efficiency may lead to consumer welfare gains because “consumer choice” among a variety of media players was more important.

In short, the European model sorely lacks limiting principles. This likely explains why the European Court of Justice has started to pare back the Commission’s powers in a series of recent cases, including Intel, Post Danmark, Cartes Bancaires, and Servizio Elettrico Nazionale. These rulings appear to be an explicit recognition that overly broad competition enforcement not only fails to benefit consumers but, more fundamentally, is incompatible with the rule of law.

It is unfortunate that the FTC is trying to emulate a model of competition enforcement that—even in the progressively minded European public sphere—is increasingly questioned and cast aside as a result of its multiple shortcomings.

The concept of European “digital sovereignty” has been promoted in recent years both by high officials of the European Union and by EU national governments. Indeed, France made strengthening sovereignty one of the goals of its recent presidency of the EU Council.

The approach taken thus far both by the EU and by national authorities has been not to exclude foreign businesses, but instead to focus on research and development funding for European projects. Unfortunately, there are worrying signs that this more measured approach is beginning to be replaced by ill-conceived moves toward economic protectionism, ostensibly justified by national-security and personal-privacy concerns.

In this context, it is worth reconsidering why Europeans’ best interests are served not by economic isolationism, but by an understanding of sovereignty that capitalizes on alliances with other free democracies.

Protectionism Under the Guise of Cybersecurity

Among the primary worrying signs regarding the EU’s approach to digital sovereignty is the union’s planned official cybersecurity-certification scheme. The European Commission is reportedly pushing for “digital sovereignty” conditions in the scheme, which would include data and corporate-entity localization and ownership requirements. This can be categorized as “hard” data localization in the taxonomy laid out by Peter Swire and DeBrae Kennedy-Mayo of Georgia Institute of Technology, in that it would prohibit both transfers of data to other countries and the involvement of foreign capital in processing even data that is not transferred.

The European Cybersecurity Certification Scheme for Cloud Services (EUCS) is being prepared by ENISA, the EU cybersecurity agency. The scheme is supposed to be voluntary at first, but it is expected to become mandatory in the future, at least in some situations (e.g., public procurement). It was not initially billed as an industrial-policy measure and was instead meant to focus on technical security issues. Moreover, ENISA reportedly did not see the need to include such “digital sovereignty” requirements in the certification scheme, perhaps because it saw them as insufficiently grounded in genuine cybersecurity needs.

Despite ENISA’s position, the European Commission asked the agency to include the digital-sovereignty requirements. This move has been supported by a coalition of European businesses that hope to benefit from the protectionist nature of the scheme. Somewhat ironically, their official statement called on the European Commission to “not give in to the pressure of the ones who tend to promote their own economic interests.”

The governments of Denmark, Estonia, Greece, Ireland, the Netherlands, Poland, and Sweden expressed “strong concerns” about the Commission’s move. In contrast, Germany called for a political discussion of the certification scheme that would take into account “the economic policy perspective.” In other words, German officials want the EU to consider using the cybersecurity-certification scheme to achieve protectionist goals.

Do Security and Privacy Require Protectionism?

As others have discussed at length (in addition to Swire and Kennedy-Mayo, see also Theodore Christakis), the evidence for the cybersecurity and national-security arguments for hard data localization is, at best, inconclusive. Press reports suggest that ENISA reached a similar conclusion. There may be security reasons to insist upon certain ways of distributing data storage (e.g., across different data centers), but those reasons are not directly related to national borders.

In fact, as illustrated by the well-known architectural goal behind the design of the U.S. military computer network that was the precursor to the Internet, security is enhanced by redundant distribution of data and network connections in a geographically dispersed way. The perils of putting “all one’s data eggs” in one basket (one locale, one data center) were amply illustrated when a fire in a data center of a French cloud provider, OVH, famously brought down millions of websites that were hosted only there. (Notably, OVH is among the most vocal European proponents of hard data localization.)

Moreover, security concerns are clearly not nearly as serious when data is processed by our allies as when it is processed by entities associated with less friendly powers. Whatever concerns there may be about U.S. intelligence collection, it would be detached from reality to suggest that the United States poses a national-security risk to EU countries. This has become even clearer since the beginning of the Russian invasion of Ukraine. Indeed, the strength of the U.S.-EU security relationship has been repeatedly acknowledged by EU and national officials.

Another commonly used justification for data localization is that it is required to protect Europeans’ privacy. The radical version of this position, seemingly increasingly popular among EU data-protection authorities, amounts to a call to block data flows between the EU and the United States. (Most bizarrely, Russia seems to receive more favorable treatment from some European bureaucrats.) The legal argument behind this view is that the United States doesn’t have sufficient legal safeguards when its officials process the data of foreigners.

The soundness of that view is debated, but what is perhaps more interesting is that similar privacy concerns have also been identified by EU courts with respect to several EU countries. The reaction of those European countries was either to ignore the courts or to be “ruthless in exploiting loopholes” in court rulings. It is thus difficult to treat seriously the claims that Europeans’ data is much better safeguarded in their home countries than if it flows in the networks of the EU’s democratic allies, like the United States.

Digital Sovereignty as Industrial Policy

Given the above, privacy and security are unlikely to be the real decisive factors behind the EU’s push for a more protectionist approach to digital sovereignty, as exemplified by the cybersecurity-certification scheme. In her 2020 State of the Union speech, EU Commission President Ursula von der Leyen stated that Europe “must now lead the way on digital—or it will have to follow the way of others, who are setting these standards for us.”

She continued: “On personalized data—business to consumer—Europe has been too slow and is now dependent on others. This cannot happen with industrial data.” This framing suggests an industrial-policy aim behind the digital-sovereignty agenda. But even in considering Europe’s best interests through the lens of industrial policy, there are reasons to question the manner in which “leading the way on digital” is being implemented.

Limitations on foreign investment in European tech businesses come with significant costs to the European tech ecosystem. Those costs are particularly high in the case of blocking or disincentivizing American investment.

Effect on startups

Early-stage investors such as venture capitalists bring more than just financial capital. They offer expertise and other vital tools to help the businesses in which they invest. It is thus not surprising that, among the best investors, those with significant experience in a given area are well-represented. Due to the successes of the U.S. tech industry, American investors are especially well-positioned to play this role.

In contrast, European investors may lack the needed knowledge and skills. For example, in its report on building “deep tech” companies in Europe, Boston Consulting Group noted that a “substantial majority of executives at deep-tech companies and more than three-quarters of the investors we surveyed believe that European investors do not have a good understanding of what deep tech is.”

More to the point, even where EU players do hold advantages, a cooperative economic and technological system will allow the comparative advantage of both U.S. and EU markets to redound to each other’s benefit. That is to say, of course not all U.S. investment expertise will apply in the EU, but certainly some will. Similarly, there will be EU firms that are positioned to share their expertise in the United States. But there is no ex ante way to know when and where these complementarities will exist, which essentially dooms efforts at centrally planning technological cooperation.

Given the close economic, cultural, and historical ties of the two regions, it makes sense to work together, particularly given the rising international-relations tensions outside of the western sphere. It also makes sense, insofar as the relatively open private-capital-investment environment in the United States is nearly impossible to match, let alone surpass, through government spending.

For example, national government and EU funding in Europe has thus far ranged from expensive failures (the “Google-killer”) to the all-too-predictable bureaucracy-heavy grantmaking, which its beneficiaries describe as lacking flexibility, “slow,” “heavily process-oriented,” and expensive for businesses to navigate. As reported by the Financial Times’ Sifted website, the EU’s own startup-investment scheme (the European Innovation Council) backed only one business over more than a year, and it had “delays in payment” that “left many startups short of cash—and some on the brink of going out of business.”

Starting new business ventures is risky, especially for the founders. They risk devoting their time, resources, and reputation to an enterprise that may very well fail. Given this risk of failure, the potential upside needs to be sufficiently high to incentivize founders and early employees to take the gamble. This upside is normally provided by the possibility of selling one’s shares in a business. In BCG’s previously cited report on deep tech in Europe, respondents noted that the European ecosystem lacks “clear exit opportunities”:

Some investors fear being constrained by European sovereignty concerns through vetoes at the state or Europe level or by rules potentially requiring European ownership for deep-tech companies pursuing strategically important technologies. M&A in Europe does not serve as the active off-ramp it provides in the US. From a macroeconomic standpoint, in the current environment, investment and exit valuations may be impaired by inflation or geopolitical tensions.

More broadly, those exit opportunities also factor importantly into funders’ appetite to price the risk of failure in their ventures. Where the upside is sufficiently large, an investor might be willing to experiment in riskier ventures and be suitably motivated to structure investments to deal with such risks. But where the exit opportunities are diminished, it makes much more sense to spend time on safer bets that may provide lower returns, but are less likely to fail. Coupled with the fact that government funding must run through bureaucratic channels, which are inherently risk averse, the overall effect is a less dynamic funding system.

The Central and Eastern Europe (CEE) region is an especially good example of the positive influence of American investment in Europe’s tech ecosystem. According to the state-owned Polish Development Fund and Dealroom.co, in 2019, $0.9 billion of venture-capital investment in CEE came from the United States, $0.5 billion from Europe, and $0.1 billion from the rest of the world.

Direct investment

Technological investment is rarely, if ever, a zero-sum game. U.S. firms that invest in the EU (and vice versa) do not do so as foreign conquerors, but as partners whose own fortunes are intertwined with those of their host country. Consider, for example, Google’s recent PLN 2.7 billion investment in Poland. Far from extractive, that investment will build infrastructure in Poland, and will employ an additional 2,500 Poles in the company’s cloud-computing division. This sort of partnership plants the seeds that grow into a native tech ecosystem. The Poles who today work in Google’s cloud-computing division are the founders of tomorrow’s innovative startups rooted in Poland.

The funding that accompanies native operations of foreign firms also has a direct impact on local economies and tech ecosystems. More local investment in technology creates demand for education and support roles around that investment. This creates a virtuous circle that ultimately facilitates growth in the local ecosystem. And while this direct investment is important for large countries, in smaller countries, it can be a critical component in stimulating their own participation in the innovation economy.

According to Crunchbase, out of 2,617 EU-headquartered startups founded since 2010 with total equity funding of at least $10 million, 927 (35%) had at least one founder who previously worked for an American company. For example, two of the three founders of Madrid-based Seedtag (total funding of more than $300 million) worked at Google immediately before starting Seedtag.

It is more difficult to quantify how many early employees of European startups built their experience in American-owned companies, but the number is likely to be significant and to become even more so, especially in regions—like Central and Eastern Europe—with significant direct U.S. investment in local talent.

Conclusion

Explicit industrial policy for protectionist ends is—at least, for the time being—regarded as unwise public policy. But this is not to say that countries do not have valid national interests that can be met through more productive channels. While strong data-localization requirements are ultimately counterproductive, particularly among closely allied nations, countries have a legitimate interest in promoting the growth of the technology sector within their borders.

National investment in R&D can yield fruit, particularly when that investment works in tandem with the private sector (see, e.g., the Bayh-Dole Act in the United States). The bottom line, however, is that any intervention should take care to actually promote the ends it seeks. Strong data-localization policies in the EU will not lead to the success of the local tech industry, but they will serve to wall the region off from the kind of investment that can make it thrive.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

Things are heating up in the antitrust world. There is considerable pressure to pass the American Innovation and Choice Online Act (AICOA) before the congressional recess in August—a short legislative window before members of Congress shift their focus almost entirely to campaigning for the mid-term elections. While it would not be impossible to advance the bill after the August recess, it would be a steep uphill climb.

But whether it passes or not, some of the damage from AICOA may already be done. The bill has moved the antitrust dialogue in a direction that will harm innovation and consumers. In this post, I will first explain AICOA’s fundamental flaws. Next, I discuss the negative impact that the legislation is likely to have if passed, even if courts and agencies do not aggressively enforce its provisions. Finally, I show how AICOA has already provided an intellectual victory for the approach articulated in the European Union’s Digital Markets Act (DMA). It has built momentum for a dystopian regulatory framework to break up and break into U.S. superstar firms designated as “gatekeepers,” at the expense of innovation and consumers.

The Unseen of AICOA

AICOA’s drafters argue that, once passed, it will deliver numerous economic benefits. Sen. Amy Klobuchar (D-Minn.)—the bill’s main sponsor—has stated that it will “ensure small businesses and entrepreneurs still have the opportunity to succeed in the digital marketplace. This bill will do just that while also providing consumers with the benefit of greater choice online.”

Section 3 of the bill would provide “business users” of the designated “covered platforms” with a wide range of entitlements. These include preventing the covered platform from offering any services or products that a business user could provide (the so-called “self-preferencing” prohibition); granting business users access to the covered platform’s proprietary data; and entitling business users to “preferred placement” on a covered platform without having to use any of that platform’s services.

These entitlements would provide non-platform businesses what are effectively claims on the platform’s proprietary assets, notwithstanding the covered platform’s own investments to collect data, create services, and invent products—in short, the platform’s innovative efforts. As such, AICOA is redistributive legislation that creates the conditions for unfair competition in the name of “fair” and “open” competition. It treats the behavior of “covered platforms” differently than identical behavior by their competitors, without considering the deterrent effect such a framework will have on consumers and innovation. Thus, AICOA offers rent-seeking rivals a formidable avenue to reap considerable benefits at the expense of innovators, thanks to the weaponization of antitrust to subvert, not improve, competition.

In mandating that covered platforms make their data and proprietary assets freely available to “business users” and rivals, AICOA undermines the underpinnings of free markets in pursuit of the misguided goal of “open markets.” The inevitable result will be the tragedy of the commons. Absent the ability of covered platforms to benefit from their entrepreneurial endeavors, the law no longer encourages innovation. As Joseph Schumpeter seminally predicted: “perfect competition implies free entry into every industry … But perfectly free entry into a new field may make it impossible to enter it at all.”

To illustrate, if business users can freely access, say, a special status on the covered platforms’ ancillary services without having to use any of the covered platform’s services (as required under Section 3(a)(5)), then platforms are disincentivized from inventing zero-priced services, since they cannot cross-monetize these services with existing services. Similarly, if, under Section 3(a)(1) of the bill, business users can stop covered platforms from pre-installing or preferencing an app whenever they happen to offer a similar app, then covered platforms will be discouraged from investing in or creating new apps. Thus, the bill would generate a considerable deterrent effect for covered platforms to invest, invent, and innovate.

    AICOA’s most detrimental consequences may not be immediately apparent; they could instead manifest in larger and broader downstream impacts that will be difficult to undo. As the 19th-century French economist Frédéric Bastiat wrote: “a law gives birth not only to an effect but to a series of effects. Of these effects, the first only is immediate; it manifests itself simultaneously with its cause—it is seen. The others unfold in succession—they are not seen: it is well for us, if they are foreseen … it follows that the bad economist pursues a small present good, which will be followed by a great evil to come, while the true economist pursues a great good to come,—at the risk of a small present evil.”

    To paraphrase Bastiat, AICOA offers ill-intentioned rivals a “small present good”—i.e., unconditional access to the platforms’ proprietary assets—while society suffers the loss of a greater good—i.e., the incentives to innovate and the welfare gains to consumers. The logic is akin to that of advocates for abolishing intellectual-property rights: the immediate (and seen) gain is obvious—wider dissemination of innovations at lower prices—while the subsequent (and unseen) evil remains opaque, as destroying the institutional premises of innovation will generate considerable long-term innovation costs.

    Fundamentally, AICOA weakens the benefits of scale by pursuing vertical disintegration of the covered platforms to the benefit of short-term static competition. In the long term, however, the bill would dampen dynamic competition, ultimately harming consumer welfare and the capacity for innovation. The measure’s opportunity costs will prevent covered platforms’ innovations from benefiting other business users or consumers. They personify the “unseen,” as Bastiat put it: “[they are] always in the shadow, and who, personifying what is not seen, [are] an essential element of the problem. [They make] us understand how absurd it is to see a profit in destruction.”

    The costs could well amount to hundreds of billions of dollars for the U.S. economy, even before accounting for the costs of deterred innovation. The unseen is costly, the seen is cheap.

    A New Robinson-Patman Act?

    Most antitrust laws are terse, vague, and old: the Sherman Act of 1890 and the Federal Trade Commission and Clayton Acts of 1914 deal largely in generalities, leaving courts considerable room to elaborate, in the common-law tradition, on what “restraints of trade,” “monopolization,” or “unfair methods of competition” mean.

    In 1936, Congress passed the Robinson-Patman Act, designed to protect competitors from the then-disruptive competition of large firms that—thanks to scale and practices such as price differentiation—upended traditional incumbents to the benefit of consumers. Passed after “Congress made no factual investigation of its own, and ignored evidence that conflicted with accepted rhetoric,” the law prohibits price differentials that would benefit buyers, and ultimately consumers, in the name of shielding less efficient firms from their more productive rivals. Indeed, under the Robinson-Patman Act, manufacturers cannot give a bigger discount to a distributor who would pass these savings on to consumers, even if the distributor performs extra services relative to others.

    Former President Gerald Ford declared in 1975 that the Robinson-Patman Act “is a leading example of [a law] which restrain[s] competition and den[ies] buyers’ substantial savings…It discourages both large and small firms from cutting prices, making it harder for them to expand into new markets and pass on to customers the cost-savings on large orders.” Despite this, calls to amend or repeal the Robinson-Patman Act—supported by, among others, competition scholars like Herbert Hovenkamp and Robert Bork—have failed.

    In the 1983 Abbott decision, Justice Lewis Powell wrote: “The Robinson-Patman Act has been widely criticized, both for its effects and for the policies that it seeks to promote. Although Congress is aware of these criticisms, the Act has remained in effect for almost half a century.”

    Nonetheless, the act’s enforcement dwindled, thanks to wise restraint from antitrust agencies and the courts. While it is seldom enforced today, the act continues to create considerable legal uncertainty, as it raises regulatory risks for companies whose behavior may conflict with its provisions. Indeed, many of the same so-called “neo-Brandeisians” who support passage of AICOA also advocate reinvigorating Robinson-Patman. More specifically, the new FTC majority has expressed eagerness to revitalize Robinson-Patman, even though the law protects less efficient competitors. In other words, the Robinson-Patman Act is a zombie law: dead, but still moving.

    Even if antitrust agencies and courts ultimately exercise the same regulatory and judicial restraint toward AICOA that they have toward Robinson-Patman, the legal uncertainty the law’s existence will engender will act as a powerful deterrent to the disruptive competition that dynamically benefits consumers and innovation. In short, as with the Robinson-Patman Act, agencies and courts will either enforce AICOA—thus generating the law’s adverse effects on consumers and innovation—or refrain from enforcing it—in which case the legal uncertainty will lead to unseen, harmful effects on innovation and consumers.

    For instance, the bill’s prohibition on “self-preferencing” in Section 3(a)(1) will prevent covered platforms from offering consumers new products and services that happen to compete with incumbents’ products and services. Self-preferencing often is a pro-competitive, pro-efficiency practice that companies widely adopt—a reality that AICOA seems to ignore.

    Would AICOA prevent, e.g., Apple from offering a bundled subscription to Apple One, which includes Apple Music, so that the company can effectively compete with incumbents like Spotify? As with Robinson-Patman, antitrust agencies and courts will have to choose whether to enforce a productivity-decreasing law, or to ignore congressional intent but, in the process, generate significant legal uncertainties.

    Judge Bork once wrote that Robinson-Patman was “antitrust’s least glorious hour” because, rather than improving competition and innovation, it reduced competition from firms that happened to be more productive, innovative, and efficient than their rivals. The law infamously protected inefficient competitors rather than competition. From a legislative-history perspective, AICOA may be antitrust’s new “least glorious hour.” If adopted, it will adversely affect innovation and consumers, as opportunistic rivals will be able to prevent cost-saving practices by the covered platforms.

    As with Robinson-Patman, calls to amend or repeal AICOA may follow its passage. But the Robinson-Patman Act illustrates the path dependency of bad antitrust laws. However costly and damaging, AICOA would likely stay in place, with regular calls for either stronger or weaker enforcement, depending on whether momentum shifts toward populist antitrust or toward antitrust more consistent with dynamic competition.

    Victory of the Brussels Effect

    The future of AICOA does not bode well for markets, whether viewed from a historical perspective or from a comparative-law perspective. The EU’s DMA similarly targets a few large tech platforms, but it is broader, harsher, and swifter. In the competition between these two examples of self-inflicted techlash, AICOA will pale in comparison with the DMA. Covered platforms will be forced to align with the DMA’s obligations and prohibitions.

    Consequently, AICOA is a victory of the DMA and of the Brussels effect in general. AICOA effectively crowns the DMA as the all-encompassing regulatory assault on digital gatekeepers. While members of Congress have introduced numerous antitrust bills aimed at targeting gatekeepers, the DMA is the one-stop-shop regulation that encompasses multiple antitrust bills and imposes broader prohibitions and stronger obligations on gatekeepers. In other words, the DMA outcompetes AICOA.

    Commentators seldom lament the extraterritorial impact of European regulations. When it comes to regulating digital gatekeepers, U.S. officials should have pushed back against the innovation-stifling, welfare-decreasing effects of the DMA on U.S. tech companies, in particular, and on U.S. technological innovation, in general. To be fair, a few U.S. officials, such as Commerce Secretary Gina Raimondo, did voice opposition to the DMA. Indeed, well aware of the DMA’s protectionist intent and its potential to break up and break into tech platforms, Raimondo expressed concerns that antitrust should not be about protecting competitors and deterring innovation, but rather about protecting the process of competition, however disruptive it may be.

    The influential neo-Brandeisians and radical antitrust reformers, however, lashed out at Raimondo and effectively shamed the Biden administration into embracing the DMA (and its sister regulation, AICOA). Brussels did not even have to exert its regulatory overreach; the U.S. administration happily imports and emulates European overregulation. There is no better way for European officials to see their dreams come true: a techlash against U.S. digital platforms that enjoys the support of local officials.

    In that regard, AICOA has already played a significant role in shaping the intellectual mood in Washington and in altering the course of U.S. antitrust. Members of Congress designed AICOA along the lines pioneered by the DMA. Sen. Klobuchar has argued that America should emulate European competition policy regarding tech platforms. Lina Khan, now chair of the FTC, co-authored the U.S. House Antitrust Subcommittee report, which recommended adopting the European concept of “abuse of dominant position” in U.S. antitrust; in her current position, Khan praises the DMA. Tim Wu, competition counsel for the White House, has praised European competition policy and officials. Indeed, the neo-Brandeisians have not only praised the European Commission’s fines against U.S. tech platforms (despite early criticisms from former President Barack Obama) but, more dramatically, have called for the United States to imitate the European regulatory framework.

    In this regulatory race to inefficiency, the standard is set in Brussels with the blessing of U.S. officials. Not even the precedent set by the EU’s General Data Protection Regulation (GDPR) fully captures the effects the DMA will have. Privacy laws passed by U.S. states have mostly reacted to the reality of the GDPR; with AICOA, Congress is proactively anticipating, emulating, and welcoming the DMA before it has even been adopted. The intellectual and policy shift is historic, and so is the policy error.

    AICOA and the Boulevard of Broken Dreams

    AICOA is a failure similar to the Robinson-Patman Act and a victory for the Brussels effect and the DMA. Consumers will be the collateral damage, and the unseen effects on innovation will take years to materialize. Calls to amend or repeal AICOA are likely to fail, leaving the inevitable costs to bear forever upon consumers and innovation dynamics.

    AICOA illustrates the neo-Brandeisian opposition to large innovative companies. Joseph Schumpeter warned against such hostility, and against its effect of discouraging entrepreneurs from innovating, when he wrote:

    Faced by the increasing hostility of the environment and by the legislative, administrative, and judicial practice born of that hostility, entrepreneurs and capitalists—in fact the whole stratum that accepts the bourgeois scheme of life—will eventually cease to function. Their standard aims are rapidly becoming unattainable, their efforts futile.

    President William Howard Taft once said, “the world is not going to be saved by legislation.” AICOA will not save antitrust, nor will it save consumers. To paraphrase Schumpeter, the bill’s drafters “walked into our future as we walked into the war, blindfolded.” AICOA’s promises of greater competition, a fairer marketplace, greater consumer choice, and more consumer benefits will ultimately scatter across the boulevard of broken dreams.

    The Baron de Montesquieu once wrote that legislators should only change laws with a “trembling hand”:

    It is sometimes necessary to change certain laws. But the case is rare, and when it happens, they should be touched only with a trembling hand: such solemnities should be observed, and such precautions are taken that the people will naturally conclude that the laws are indeed sacred since it takes so many formalities to abrogate them.

    AICOA’s drafters had a clumsy hand, coupled with what Friedrich Hayek would call “a pretense of knowledge.” They were certain they were doing social good and incapable of conceiving that they might do social harm. The future will remember AICOA as the new antitrust’s least glorious hour, in which consumers and innovation were sacrificed on the altar of a revitalized populist view of antitrust.

    [TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

    In Free to Choose, Milton Friedman famously noted that there are four ways to spend money[1]:

    1. Spending your own money on yourself. For example, buying groceries or lunch. There is a strong incentive to economize and to get full value.
    2. Spending your own money on someone else. For example, buying a gift for another. There is a strong incentive to economize, but perhaps less to achieve full value from the other person’s point of view. Altruism is admirable, but it differs from value maximization: strictly speaking, giving cash would maximize the recipient’s value. Perhaps the point of a gift is precisely that it is not cash, and so does not simply maximize the other person’s welfare from their own point of view.
    3. Spending someone else’s money on yourself. For example, an expensed business lunch. “Pass me the filet mignon and Chateau Lafite! Do you have one of those menus without any prices?” There is a strong incentive to get maximum utility, but there is little incentive to economize.
    4. Spending someone else’s money on someone else. For example, applying the proceeds of taxes or donations. There may be an indirect desire to see utility, but incentives for quality and cost management are often diminished.

    This framework can be criticized. Altruism has a role. Not all motives are selfish. There is an important role for action to help those less fortunate, which might mean, for instance, that a charity gains more utility from category (4) (assisting the needy) than from category (3) (the charity’s holiday party). It always depends on the facts and the context. However, there is certainly a grain of truth in the observation that charity begins at home and that, in the final analysis, people are best at managing their own affairs.

    How would this insight apply to data interoperability? The difficult cases of assisting the needy do not arise here: there is no serious sense in which data interoperability does, or does not, result in destitution. Thus, Friedman’s observations seem to ring true: when spending data, those whose data it is seem most likely to maximize its value. This is especially so where collection of data responds to incentives—that is, the amount of data collected and processed responds to how much control over the data is possible.

    The obvious exception to this would be a case of market power. If there is a monopoly with persistent barriers to entry, the incentive may be not to maximize total utility but rather to limit data handling, so that a higher price can be charged for the lesser amount of data that remains available. This has arguably been seen with some data-handling rules: the “Jedi Blue” agreement on advertising bidding, Apple’s Intelligent Tracking Prevention and App Tracking Transparency, and Google’s proposed Privacy Sandbox all restrict the ability of others to handle data. Indeed, they may fail Friedman’s framework, since they amount to the platform deciding how to spend others’ data—in this case, by not allowing them to collect and process it at all.

    It should be emphasized, though, that this is a special case. It depends on market power, and existing antitrust and competition laws speak to it. The courts will decide whether cases like Daily Mail v Google and Texas et al. v Google show illegal monopolization of data flows, so as to fall within this special case of market power. Outside the United States, cases like the U.K. Competition and Markets Authority’s Google Privacy Sandbox commitments and the European Union’s proposed commitments with Amazon seek to allow others to continue to handle their data and to prevent exclusivity from arising from platform dynamics, which could happen if a large platform prevents others from deciding how to account for data they are collecting. It will be recalled that even Robert Bork thought that there was risk of market power harms from the large Microsoft Windows platform a generation ago.[2] Where market power risks are proven, there is a strong case that data exclusivity raises concerns because of an artificial barrier to entry. It would only be if the benefits of centralized data control were to outweigh the deadweight loss from data restrictions that this would be untrue (though query how well the legal processes verify this).

    Yet the latest proposals go well beyond this. A broad interoperability right amounts to “open season” for spending others’ data. This makes perfect sense in the European Union, where there is no large domestic technology platform, meaning that the data is essentially owned by foreign entities (mostly, the shareholders of successful U.S. and Chinese companies). It must be very tempting to run an industrial policy on the basis that “we’ll never be Google,” and thus to embrace “sharing is caring” with respect to others’ data.

    But this would transgress the warning from Friedman: would people optimize data collection if it is open to mandatory sharing even without proof of market power? It is deeply concerning that the EU’s DATA Act is accompanied by an infographic that suggests that coffee-machine data might be subject to mandatory sharing, to allow competition in services related to the data (e.g., sales of pods; spare-parts automation). There being no monopoly in coffee machines, this simply forces vertical disintegration of data collection and handling. Why put a data-collection system into a coffee maker at all, if it is to be a common resource? Friedman’s category (4) would apply: the data is taken and spent by another. There is no guarantee that there would be sensible decision making surrounding the resource.

    It will be interesting to see how common-law jurisdictions approach this issue. At the risk of stating the obvious, the polity in continental Europe differs from that in the English-speaking democracies when it comes to whether the collective, or the individual, should be in the driving seat. A close read of the UK CMA’s Google commitments is interesting, in that paragraph 30 requires no self-preferencing in data collection and requires future data-handling systems to be designed with impacts on competition in mind. No doubt the CMA is seeking to prevent data-handling exclusivity on the basis that this prevents companies from using their data collection to compete. This is far from the EU DATA Act’s position in that it is certainly not a right to handle Google’s data: it is simply a right to continue to process one’s own data.

    U.S. proposals are at an earlier stage. It would seem important, as a matter of principle, not to make arbitrary decisions about vertical integration in data systems, and to identify specific market-power concerns instead, in line with common-law approaches to antitrust.

    It might be very attractive to the EU to spend others’ data on their behalf, but that does not make it right. Those working on the U.S. proposals would do well to ensure that there is a meaningful market-power gate to avoid unintended consequences.

    Disclaimer: The author was engaged for expert advice relating to the UK CMA’s Privacy Sandbox case on behalf of the complainant Marketers for an Open Web.


    [1] Milton Friedman, Free to Choose, 1980, pp. 115-119.

    [2] Comments at the Yale Law School conference, Robert H. Bork’s influence on Antitrust Law, Sep. 27-28, 2013.

    Just three weeks after a draft version of the legislation was unveiled by congressional negotiators, the American Data Privacy and Protection Act (ADPPA) is heading to its first legislative markup, set for tomorrow morning before the U.S. House Energy and Commerce Committee’s Consumer Protection and Commerce Subcommittee.

    Though the bill’s legislative future remains uncertain, particularly in the U.S. Senate, it would be appropriate to check how the measure compares with, and could potentially interact with, the comprehensive data-privacy regime promulgated by the European Union’s General Data Protection Regulation (GDPR). A preliminary comparison of the two shows that the ADPPA risks adopting some of the GDPR’s flaws, while adding some entirely new problems.

    A common misconception about the GDPR is that it imposed a requirement for “cookie consent” pop-ups that mar the experience of European users of the Internet. In fact, this requirement comes from a different and much older piece of EU law, the 2002 ePrivacy Directive. In most circumstances, the GDPR itself does not require express consent for cookies or other common and beneficial mechanisms to keep track of user interactions with a website. Website publishers could likely rely on one of two lawful bases for data processing outlined in Article 6 of the GDPR:

    • data processing is necessary in connection with a contractual relationship with the user, or
    • “processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party” (unless overridden by interests of the data subject).

    For its part, the ADPPA generally adopts the “contractual necessity” basis for data processing but excludes the option to collect or process “information identifying an individual’s online activities over time or across third party websites.” The ADPPA instead classifies such information as “sensitive covered data.” It’s difficult to see what benefit users would derive from having to click that they “consent” to features that are clearly necessary for the most basic functionality, such as remaining logged in to a site or adding items to an online shopping cart. But the expected result will be many, many more popup consent queries, like those that already bedevil European users.

    Using personal data to create new products

    Section 101(a)(1) of the ADPPA expressly allows the use of “covered data” (personal data) to “provide or maintain a specific product or service requested by an individual.” But the legislation is murkier when it comes to the permissible uses of covered data to develop new products. This would only clearly be allowed where each data subject concerned could be asked if they “request” the specific future product. By contrast, under the GDPR, it is clear that a firm can ask for user consent to use their data to develop future products.

    Moving beyond Section 101, we can look to the “general exceptions” in Section 209 of the ADPPA, specifically the exception in Section 209(a)(2):

    With respect to covered data previously collected in accordance with this Act, notwithstanding this exception, to perform system maintenance, diagnostics, maintain a product or service for which such covered data was collected, conduct internal research or analytics to improve products and services, perform inventory management or network management, or debug or repair errors that impair the functionality of a service or product for which such covered data was collected by the covered entity, except such data shall not be transferred.

    While this provision mentions conducting “internal research or analytics to improve products and services,” it also refers to “a product or service for which such covered data was collected.” The concern here is that this could be interpreted as only allowing “research or analytics” in relation to existing products known to the data subject.

    The road ends here for personal data that the firm collects itself. Somewhat paradoxically, the firm could more easily make the case for using data obtained from a third party. Under Section 302(b) of the ADPPA, a firm only has to ensure that it is not processing “third party data for a processing purpose inconsistent with the expectations of a reasonable individual.” Such a relatively broad “reasonable expectations” basis is not available for data collected directly by first-party covered entities.

    Under the GDPR, aside from the data subject’s consent, the firm also could rely on its own “legitimate interest” as a lawful basis to process user data to develop new products. It is true, however, that due to requirements that the interests of the data controller and the data subject must be appropriately weighed, the “legitimate interest” basis is probably less popular in the EU than alternatives like consent or contractual necessity.

    Developing this path in the ADPPA would arguably provide a more sensible basis for data uses like the reuse of data for new product development. This could be superior even to express consent, which faces problems like “consent fatigue.” These are unlikely to be solved by promulgating detailed rules on “affirmative consent,” as proposed in Section 2 of the ADPPA.

    Problems with ‘de-identified data’

    Another example of significant confusion in the ADPPA’s basic conceptual scheme is the bill’s notion of “de-identified data.” The drafters seem to have been aiming for a partial exemption from the default data-protection regime for datasets that no longer contain personally identifying information, but that are derived from datasets that once did. Instead of providing such an exemption, however, the rules for de-identified data effectively extend the ADPPA’s scope to nonpersonal data, while also creating a whole new set of problems.

    The basic problem is that the definition of “de-identified data” in the ADPPA is not limited to data derived from identifiable data. The definition covers: “information that does not identify and is not linked or reasonably linkable to an individual or a device, regardless of whether the information is aggregated.” In other words, it is the converse of “covered data” (personal data): whatever is not “covered data” is “de-identified data.” Even if some data are not personally identifiable and are not a result of a transformation of data that was personally identifiable, they still count as “de-identified data.” If this reading is correct, it creates an absurd result that sweeps all information into the scope of the ADPPA.

    For the sake of argument, let’s assume that this confusion can be fixed and that the definition of “de-identified data” is limited to data that is:

    1. derived from identifiable data, but
    2. that hold a possibility of re-identification (weaker than “reasonably linkable”) and
    3. are processed by the entity that previously processed the original identifiable data.

    Remember that we are talking about data that are not “reasonably linkable to an individual.” Hence, the intent appears to be that the rules on de-identified data would apply to non-personal data that would otherwise not be covered by the ADPPA.

    The rationale for this may be that it is difficult, legally and practically, to differentiate between personally identifiable data and data that are not personally identifiable. A good deal of seemingly “anonymous” data may be linked to an individual—e.g., by connecting the dataset at hand with some other dataset.

    The case for regulation in an example where a firm clearly dealt with personal data, and then derived some apparently de-identified data from them, may actually be stronger than in the case of a dataset that was never directly derived from personal data. But is that case sufficient to justify the ADPPA’s proposed rules?

    The ADPPA imposes several duties on entities dealing with “de-identified data” (that is, all data that are not considered “covered” data):

    1. to take “reasonable measures to ensure that the information cannot, at any point, be used to re-identify any individual or device”;
    2. to publicly commit “in a clear and conspicuous manner—
      1. to process and transfer the information solely in a de-identified form without any reasonable means for re-identification; and
      2. to not attempt to re-identify the information with any individual or device;”
    3. to “contractually obligate[] any person or entity that receives the information from the covered entity to comply with all of the” same rules.

    The first duty is superfluous and adds interpretative confusion, given that de-identified data, by definition, are not “reasonably linkable” with individuals.

    The second duty—public commitment—unreasonably restricts what can be done with nonpersonal data. Firms may have many legitimate reasons to de-identify data and then to re-identify them later. This provision would effectively prohibit firms from attempting data minimization (resulting in de-identification) if they may, at any point in the future, need to link the data with individuals. It seems that the drafters had some very specific (and likely rare) mischief in mind here, but ended up prohibiting a vast sphere of innocuous activity.

    Note that, for data to become “de-identified data,” they must first be collected and processed as “covered data” in conformity with the ADPPA and then transformed (de-identified) in such a way as to no longer meet the definition of “covered data.” If someone then re-identifies the data, this will again constitute “collection” of “covered data” under the ADPPA. At every point of the process, personally identifiable data is covered by the ADPPA rules on “covered data.”

    Finally, the third duty—“share alike” (to “contractually obligate[] any person or entity that receives the information from the covered entity to comply”)—faces much the same problem as the second duty. Under this provision, the only way to preserve a third party’s option to identify the individuals linked to the data is for the third party to receive the data in personally identifiable form. In other words, the provision makes it impossible to share data in a de-identified form while preserving the possibility of re-identification. One would have expected the opposite: allowing data to be shared in de-identified form would align with the principle of data minimization. What the ADPPA does instead is effectively impose a duty to share de-identified personal data together with identifying information—a truly bizarre result, directly contrary to the principle of data minimization.

    Conclusion

    The basic conceptual structure of the legislation that subcommittee members will take up this week is, to a very significant extent, both confused and confusing. Perhaps in tomorrow’s markup, a more open and detailed discussion of what the drafters were trying to achieve could help to improve the scheme, as it seems that some key provisions of the current draft would lead to absurd results (e.g., those directly contrary to the principle of data minimization).

    Given that the GDPR is already a well-known point of reference, including for U.S.-based companies and privacy professionals, the ADPPA’s drafters would do better to re-use the best features of the GDPR’s conceptual structure while cutting its excesses. Re-inventing the wheel with new concepts has not worked well in this draft.

    Though details remain scant (and thus, any final judgment would be premature), initial word on the new Trans-Atlantic Data Privacy Framework agreed to, in principle, by the White House and the European Commission suggests that it could be a workable successor to the Privacy Shield agreement that was invalidated by the Court of Justice of the European Union (CJEU) in 2020.

    This new framework agreement marks the third attempt to create a lasting and stable legal regime to permit the transfer of EU citizens’ data to the United States. In the wake of the 2013 revelations by former National Security Agency contractor Edward Snowden about the extent of the United States’ surveillance of foreign nationals, the CJEU struck down (in its 2015 Schrems decision) the then-extant “safe harbor” agreement that had permitted transatlantic data flows. 

    In the 2020 Schrems II decision (both cases were brought by Austrian privacy activist Max Schrems), the CJEU similarly invalidated the Privacy Shield, which had served as the safe harbor’s successor agreement. In Schrems II, the court found that U.S. foreign surveillance laws were not strictly proportional to the intelligence community’s needs and that those laws also did not give EU citizens adequate judicial redress.  

    This new “Privacy Shield 2.0” agreement, announced during President Joe Biden’s recent trip to Brussels, is intended to address the issues raised in the Schrems II decision. In relevant part, the joint statement from the White House and European Commission asserts that the new framework will: “[s]trengthen the privacy and civil liberties safeguards governing U.S. signals intelligence activities; Establish a new redress mechanism with independent and binding authority; and Enhance its existing rigorous and layered oversight of signals intelligence activities.”

    In short, the parties believe that the new framework will ensure that U.S. intelligence gathering is proportional and that there is an effective forum for EU citizens caught up in U.S. intelligence-gathering to vindicate their rights.

    As I and my co-authors (my International Center for Law & Economics colleague Mikołaj Barczentewicz and Michael Mandel of the Progressive Policy Institute) detailed in an issue brief last fall, the stakes are huge. While the issue is often framed in terms of social-media use, transatlantic data transfers are implicated in an incredibly large swath of cross-border trade:

    According to one estimate, transatlantic trade generates upward of $5.6 trillion in annual commercial sales, of which at least $333 billion is related to digitally enabled services. Some estimates suggest that moderate increases in data-localization requirements would result in a €116 billion reduction in exports from the EU.

    The agreement will be implemented on this side of the Atlantic by a forthcoming executive order from the White House, at which point it will be up to EU courts to determine whether the agreement adequately restricts U.S. intelligence activities and protects EU citizens’ rights. For now, however, it appears at a minimum that the White House took the CJEU’s concerns seriously and made the right kind of concessions to reach agreement.

    And now, once the framework is finalized, we just have to sit tight and wait for Mr. Schrems’ next case.

    After years of debate and negotiations, European lawmakers have agreed upon what will most likely be the final iteration of the Digital Markets Act (“DMA”), following the March 24 final round of “trilogue” talks.

    For the uninitiated, the DMA is one in a string of legislative proposals around the globe intended to “rein in” tech companies like Google, Amazon, Facebook, and Apple through mandated interoperability requirements and other regulatory tools, such as bans on self-preferencing. Other important bills from across the pond include the American Innovation and Choice Online Act, the ACCESS Act, and the Open App Markets Act.

    In many ways, the final version of the DMA represents the worst possible outcome, given the items that were still up for debate. The Commission caved to some of the Parliament’s more excessive demands—such as sweeping interoperability provisions that would extend not only to “ancillary” services, such as payments, but also to messaging services’ basic functionalities. Other important developments include the addition of voice assistants and web browsers to the list of Core Platform Services (“CPS”), and symbolically higher “designation” thresholds that further ensure the act will apply overwhelmingly to just U.S. companies. On a brighter note, lawmakers agreed that companies could rebut their designation as “gatekeepers,” though it remains to be seen how feasible that will be in practice. 

    We offer here an overview of the key provisions included in the final version of the DMA and a reminder of the shaky foundations it rests on.

    Interoperability

    Among the most important of the DMA’s new rules are those mandating interoperability among online platforms. In a nutshell, digital platforms that are designated as “gatekeepers” will be forced to make their services “interoperable” (i.e., compatible) with those of rivals. It is argued that this will make online markets more contestable and thus boost consumer choice. But as ICLE scholars have been explaining for some time, this is unlikely to be the case (here, here, and here). Interoperability is not the panacea EU legislators claim it to be. As former ICLE Director of Competition Policy Sam Bowman has written, there are many things that could be interoperable, but aren’t. The reason is that interoperability comes with costs as well as benefits. For instance, it may be worth letting different earbuds have different designs because, while it means we sacrifice easy interoperability, we gain the ability for better designs to be brought to the market and for consumers to be able to choose among them. Economists Michael L. Katz and Carl Shapiro concur:

    Although compatibility has obvious benefits, obtaining and maintaining compatibility often involves a sacrifice in terms of product variety or restraints on innovation.

    There are other potential downsides to interoperability.  For instance, a given set of interoperable standards might be too costly to implement and/or maintain; it might preclude certain pricing models that increase output; or it might compromise some element of a product or service that offers benefits specifically because it is not interoperable (such as, e.g., security features). Consumers may also genuinely prefer closed (i.e., non-interoperable) platforms. Indeed: “open” and “closed” are not synonyms for “good” and “bad.” Instead, as Boston University’s Andrei Hagiu has shown, there are fundamental welfare tradeoffs at play that belie simplistic characterizations of one being inherently superior to the other. 

    Further, as Sam Bowman observed, narrowing choice through a more curated experience can also be valuable for users, as it frees them from having to research every possible option every time they buy or use some product (if you’re unconvinced, try turning off your spam filter for a couple of days). Instead, the relevant choice consumers exercise might be in choosing among brands. In sum, where interoperability is a desirable feature, consumer preferences will tend to push for more of it. However, it is fundamentally misguided to treat mandatory interoperability as a cure-all elixir or a “super tool” of “digital platform governance.” In a free-market economy, it is not—or should not be—up to courts and legislators to substitute their own judgments, based on diffuse notions of “fairness,” for businesses’ product-design decisions and consumers’ revealed preferences. After all, if we could entrust such decisions to regulators, we wouldn’t need markets or competition in the first place.

    Of course, it was always clear that the DMA would contemplate some degree of mandatory interoperability—indeed, this was arguably the new law’s biggest selling point. What was up in the air until now was the scope of such obligations. The Commission had initially pushed for a comparatively restrained approach, requiring interoperability “only” in ancillary services, such as payment systems (“vertical interoperability”). By contrast, the European Parliament called for more expansive requirements that would also encompass social-media platforms and other messaging services (“horizontal interoperability”).

    The problem with such far-reaching interoperability requirements is that they are fundamentally out of step with current privacy and security capabilities. As ICLE Senior Scholar Mikolaj Barczentewicz has repeatedly argued, the Parliament’s insistence on going significantly beyond the original DMA proposal and mandating interoperability of messaging services is overly broad and irresponsible. Indeed, as Mikolaj notes, the “likely result is less security and privacy, more expenses, and less innovation.”

    The DMA’s defenders would retort that the law allows gatekeepers to do what is “strictly necessary” (Council) or “indispensable” (Parliament) to protect safety and privacy (it is not yet clear which wording the final version has adopted). Either way, however, the standard may be too high, and companies may very well offer lower security to avoid liability for adopting measures that the Commission and the courts would judge to go beyond what is “strictly necessary” or “indispensable.” These safeguards will inevitably be all the more indeterminate (and thus ineffectual) if weighed against other vague concepts at the heart of the DMA, such as “fairness.”

    Gatekeeper Thresholds and the Designation Process

    Another important issue in the DMA’s construction concerns the designation of what the law deems “gatekeepers.” Indeed, the DMA will only apply to such market gatekeepers—so-designated because they meet certain requirements and thresholds. Unfortunately, the factors that the European Commission will consider in conducting this designation process—revenues, market capitalization, and user base—are poor proxies for firms’ actual competitive position. This is not surprising, however, as the procedure is mainly designed to ensure certain high-profile (and overwhelmingly American) platforms are caught by the DMA.

    From this perspective, the last-minute increase in revenue and market-capitalization thresholds—from 6.5 billion euros to 7.5 billion euros, and from 65 billion euros to 75 billion euros, respectively—won’t change the scope of the companies covered by the DMA very much. But it will serve to confirm what we already suspected: that the DMA’s thresholds are mostly tailored to catch certain U.S. companies, deliberately leaving out EU and possibly Chinese competitors (see here and here). Indeed, what would have made a difference here would have been lowering the thresholds, but this was never really on the table. Ultimately, tilting the European Union’s playing field against its top trading partner, in terms of exports and trade balance, is economically, politically, and strategically unwise.

    As a consolation of sorts, it seems that the Commission managed to squeeze in a rebuttal mechanism for designated gatekeepers. Imposing far-reaching obligations on companies with no (or very limited) recourse to escape the onerous requirements of the DMA would be contrary to the basic principles of procedural fairness. Still, it remains to be seen how this mechanism will be articulated and whether it will actually be viable in practice.

    Double (and Triple?) Jeopardy

    Two recent judgments from the European Court of Justice (ECJ)—Nordzucker and bpost—are likely to underscore the unintended effects of cumulative application of both the DMA and EU and/or national competition laws. The bpost decision is particularly relevant, because it lays down the conditions under which cases that evaluate the same persons and the same facts in two separate fields of law (sectoral regulation and competition law) do not violate the principle of ne bis in idem, also known as “double jeopardy.” As paragraph 51 of the judgment establishes:

    1. There must be precise rules to determine which acts or omissions are liable to be subject to duplicate proceedings;
    2. The two sets of proceedings must have been conducted in a sufficiently coordinated manner and within a similar timeframe; and
    3. The overall penalties must match the seriousness of the offense. 

    It is doubtful whether the DMA fulfills these conditions. This is especially unfortunate considering the overlapping rules, features, and goals among the DMA and national-level competition laws, which are bound to lead to parallel procedures. In a word: expect double and triple jeopardy to be hotly litigated in the aftermath of the DMA.

    Of course, other relevant questions have been settled which, for reasons of scope, we will have to leave for another time. These include the level of fines (up to 10% worldwide revenue, or 20% in the case of repeat offenses); the definition and consequences of systemic noncompliance (it seems that the Parliament’s draconian push for a general ban on acquisitions in case of systemic noncompliance has been dropped); and the addition of more core platform services (web browsers and voice assistants).

    The DMA’s Dubious Underlying Assumptions

    The fuss and exhilaration surrounding the impending adoption of the EU’s most ambitious competition-related proposal in decades should not obscure the dubious assumptions that underpin it. Consider:

    1. It is still unclear that intervention in digital markets is necessary, let alone urgent.
    2. Even if it were clear, there is scant evidence to suggest that tried and tested ex post instruments, such as those envisioned in EU competition law, are not up to the task.
    3. Even if the prior two points had been established beyond any reasonable doubt (which they haven’t), it is still far from clear that DMA-style ex ante regulation is the right tool to address potential harms to competition and to consumers that arise in digital markets.

    It is unclear that intervention is necessary

    Despite a mounting moral panic around, and zealous political crusading against, Big Tech (an epithet meant to conjure antipathy and distrust), it is still unclear that intervention in digital markets is necessary. Much of the behavior the DMA assumes to be anti-competitive has plausible pro-competitive justifications. Self-preferencing, for instance, is a normal part of how platforms operate, both to improve the value of their core products and to earn returns to reinvest in their development. As ICLE’s Dirk Auer points out, since platforms’ incentives are to maximize the value of their entire product ecosystem, those that preference their own products frequently end up increasing the total market’s value by growing the share of users of a particular product (the example of Facebook’s integration of Instagram is a case in point). Thus, while self-preferencing may, in some cases, be harmful, a blanket presumption of harm is thoroughly unwarranted.

    Similarly, the argument that switching costs and data-related increasing returns to scale (in fact, data generally entails diminishing returns) have led to consumer lock-in and thereby raised entry barriers has also been exaggerated to epic proportions (pun intended). As we have discussed previously, there are plenty of counterexamples where firms have easily overcome seemingly “insurmountable” barriers to entry, switching costs, and network effects to disrupt incumbents. 

    To pick a recent case: how many of us had heard of Zoom before the pandemic? Where was TikTok three years ago? (see here for a multitude of other classic examples, including Yahoo and Myspace).

    Can you really say, with a straight face, that switching costs between messaging apps are prohibitive? I’m not even that active and I use at least seven such apps on a daily basis: Facebook Messenger, WhatsApp, Instagram, Twitter, Viber, Telegram, and Slack (it took me all of three minutes to download and start using Slack—my newest addition). In fact, chances are that, like me, you have always multihomed nonchalantly and had never even considered that switching costs were impossibly high (or that they were a thing) until the idea that you were “locked in” by Big Tech was drilled into your head by politicians and other busybodies looking for trophies to adorn their walls.

    What about the “unprecedented,” quasi-fascistic levels of economic concentration? First, measures of market concentration are sometimes anchored in flawed methodology and market definitions (see, e.g., Epic’s insistence that Apple is a monopolist in the market for operating systems, conveniently ignoring that competition occurs at the smartphone level, where Apple has a worldwide market share of 15%—see pages 45-46 here). But even if such measurements were accurate, high levels of concentration don’t necessarily mean that firms do not face strong competition. In fact, as Nicolas Petit has shown, tech companies compete vigorously against each other across markets.

    But perhaps the DMA’s raison d’être rests less on market failure than on a legal or enforcement failure? This, too, is misguided.

    EU competition law is already up to the task

    As Giuseppe Colangelo has argued persuasively (here and here), it is not at all clear that ex post competition regulation is insufficient to tackle anti-competitive behavior in the digital sector:

    Ongoing antitrust investigations demonstrate that standard competition law still provides a flexible framework to scrutinize several practices described as new and peculiar to app stores. 

    The recent Google Shopping decision, in which the Commission found that Google had abused its dominant position by preferencing its own online-shopping service in Google Search results, is a case in point (the decision was confirmed by the General Court and is now pending review before the European Court of Justice). The “self-preferencing” category has since been applied by other EU competition authorities. The Italian competition authority, for instance, fined Amazon 1 billion euros for preferencing its own distribution service, Fulfillment by Amazon, on the Amazon marketplace (i.e., Amazon.it). Thus, Article 102, which includes prohibitions on “applying dissimilar conditions to similar transactions,” appears sufficiently flexible to cover self-preferencing, as well as other potentially anti-competitive offenses relevant to digital markets (e.g., essential facilities).

    For better or for worse, EU competition law has historically been sufficiently pliable to serve a range of goals and values. It has also allowed for experimentation and incorporated novel theories of harm and economic insights. Here, the advantage of competition law is that it allows for a more refined, individualized approach that can avoid some of the pitfalls of applying a one-size-fits-all model across all digital platforms. Those pitfalls include: harming consumers, jeopardizing the business models of some of the most successful and pro-consumer companies in existence, and ignoring the differences among platforms, such as between Google and Apple’s app stores. I turn to these issues next.

    Ex ante regulation probably isn’t the right tool

    Even if it were clear that intervention is necessary and that existing competition law was insufficient, it is not clear that the DMA is the right regulatory tool to address any potential harms to competition and consumers that may arise in digital markets. Here, legislators need to be wary of unintended consequences, trade-offs, and regulatory fallibility. For one, it is possible that the DMA will essentially consolidate the power of tech platforms, turning them into de facto public utilities. This will not foster competition, but rather will make smaller competitors systematically dependent on so-called gatekeepers. Indeed, why become the next Google if you can just free ride off of the current Google? Why download an emerging messaging app if you can already interact with its users through your current one? In a way, then, the DMA may become a self-fulfilling prophecy.

    Moreover, turning closed or semi-closed platforms such as iOS into open platforms more akin to Android blurs the distinctions among products and dampens interbrand competition. It is a supreme paradox that interoperability and sideloading requirements purportedly give users more choice by taking away the option of choosing a “walled garden” model. As discussed above, overriding the revealed preferences of millions of users is neither pro-competitive nor pro-consumer (but it probably favors some competitors at the expense of those two things).

    Nor are many of the other obligations contemplated in the DMA necessarily beneficial to consumers. Do users really not want to have default apps come preloaded on their devices, and instead have to download and install them manually? Ditto for operating systems. What is the point of an operating system if it doesn’t come with certain functionalities, such as a web browser? What else should we unbundle—the keyboard on iOS? The flashlight? Do consumers really want to choose from dozens of app stores when turning on their new phone for the first time? Do they really want to have their devices cluttered with pointless split-screens? Do users really want to find all their contacts (and be found by all their contacts) across all messaging services? (I switched to Viber because I emphatically didn’t.) Do they really want to have their privacy and security compromised because of interoperability requirements?

    Then there is the question of regulatory fallibility. As Alden Abbott has written on the DMA and other ex ante regulatory proposals aimed at “reining in” tech companies:

    Sorely missing from these regulatory proposals is any sense of the fallibility of regulation. Indeed, proponents of new regulatory proposals seem to implicitly assume that government regulation of platforms will enhance welfare, ignoring real-life regulatory costs and regulatory failures (see here, for example). 

    This brings us back to the second point: without evidence that antitrust law is “not up to the task,” far-reaching and untested regulatory initiatives with potentially high error costs are put forth as superior to long-established, consumer-based antitrust enforcement. Yes, antitrust may have downsides (e.g., relative indeterminacy and slowness), but these pale in comparison to the DMA’s (e.g., large error costs resulting from high information requirements, rent-seeking, agency capture).

    Conclusion

    The DMA is an ambitious piece of regulation purportedly aimed at ensuring “fair and open digital markets.” This implies that markets are currently unfair and closed, or that they risk becoming so absent far-reaching regulatory intervention at the EU level. However, it is unclear to what extent such assumptions are borne out by the reality of markets. Are digital markets really closed? Are they really unfair? If so, is it really certain that regulation is necessary? Has antitrust truly proven insufficient? The DMA further implies that ex ante regulation is necessary to address these supposed problems, and that the costs won’t outweigh the benefits. These are heroic assumptions that have never been seriously put to the test.

    Considering such brittle empirical foundations, the DMA was always going to be a contentious piece of legislation. However, there was always the hope that EU legislators would show restraint in the face of little empirical evidence and high error costs. Today, these hopes have been dashed. With the adoption of the DMA, the Commission, Council, and the Parliament have arguably taken a bad piece of legislation and made it worse. The interoperability requirements in messaging services, which are bound to be a bane for user privacy and security, are a case in point.

    After years of trying to anticipate the whims of EU legislators, we finally know where we’re going, but it’s still not entirely clear why we’re going there.

    As the European Union’s Digital Markets Act (DMA) has entered the final stage of its approval process, one matter the inter-institutional negotiations appear likely to leave unresolved is how the DMA’s relationship with competition law affects the very rationale and legal basis for the intervention.

    The DMA is explicitly grounded on the questionable assumption that competition law alone is insufficient to rein in digital gatekeepers. Accordingly, EU lawmakers have declared the proposal to be a necessary regulatory intervention that will complement antitrust rules by introducing a set of ex ante obligations.

    To support this line of reasoning, the DMA’s drafters insist that it protects a different legal interest from antitrust. Indeed, the intervention is ostensibly grounded in Article 114 of the Treaty on the Functioning of the European Union (TFEU), rather than Article 103—the article that spells out the implementation of competition law. Pursuant to Article 114, the DMA opts for centralized enforcement at the EU level to ensure harmonized implementation of the new rules.

    It has nonetheless been clear from the very beginning that the DMA lacks a distinct purpose. Indeed, the interests it nominally protects (the promotion of fairness and contestability) do not differ from the substance and scope of competition law. The European Parliament has even suggested that the law’s aims should also include fostering innovation and increasing consumer welfare, which also are within the purview of competition law. Moreover, the DMA’s obligations focus on practices that have already been the subject of past and ongoing antitrust investigations.

    Where the DMA differs in substance from competition law is simply that it would free enforcers from the burden of standard antitrust analysis. The law is essentially a convenient shortcut that would dispense with the need to define relevant markets, prove dominance, and measure market effects (see here). It essentially dismisses economic analysis and the efficiency-oriented consumer welfare test in order to lower the legal standards and evidentiary burdens needed to bring an investigation.

    Acknowledging the continuum between competition law and the DMA, the European Competition Network and some member states (self-appointed as “friends of an effective DMA”) have proposed empowering national competition authorities (NCAs) to enforce DMA obligations.

    Against this background, my new ICLE working paper pursues a twofold goal. First, it aims to show how, because of its ambiguous relationship with competition law, the DMA falls short of its goal of preventing regulatory fragmentation. Second, despite my significant doubts about the DMA’s content and rationale, I argue that fully centralized enforcement at the EU level should be preserved, and that frictions with competition law would be better contained by limiting the law’s application to the few large platforms that are demonstrably able to orchestrate an ecosystem.

    Welcome to the (Regulatory) Jungle

    The DMA will not replace competition rules. It will instead be implemented alongside them, creating several overlapping layers of regulation. Indeed, my paper broadly illustrates how the very same practices that are targeted by the DMA may also be investigated by NCAs under European and national-level competition laws, under national competition laws specific to digital markets, and under national rules on economic dependence.

    While the DMA nominally prohibits EU member states from imposing additional obligations on gatekeepers, member states remain free to adapt their competition laws to digital markets in accordance with the leeway granted by Article 3(3) of the Modernization Regulation. Moreover, NCAs may be eager to exploit national rules on economic dependence to tackle perceived imbalances of bargaining power between online platforms and their business counterparties.

    The risk of overlap with competition law is also fostered by the DMA’s designation process, which may further widen the law’s scope in the future in terms of what sorts of digital services and firms may fall under the law’s rubric. As more and more industries explore platform business models, the DMA would—without some further constraints on its scope—be expected to cover a growing number of firms, including those well outside Big Tech or even native tech companies.

    As a result, the European regulatory landscape could become even more fragmented in the post-DMA world. The parallel application of the DMA and antitrust rules poses the risks of double jeopardy (see here) and of conflicting decisions.

    A Fully Centralized and Ecosystem-Based Regulatory Regime

    To counter the risk that digital-market activity will be subject to regulatory double jeopardy and conflicting decisions across EU jurisdictions, DMA enforcement should not only be fully centralized at the EU level, but that centralization should be strengthened. This could be accomplished by empowering the Commission with veto rights, as was requested by the European Parliament.

    This veto power should certainly extend to national measures targeting gatekeepers that run counter to the DMA or to decisions adopted by the Commission under the DMA. But it should also include prohibiting national authorities from carrying out investigations on their own initiative without prior authorization by the Commission.

    Moreover, it will also likely be necessary to significantly redefine the DMA’s scope. Notably, EU leaders could mitigate the risk of fragmentation from the DMA’s frictions with competition law by circumscribing the law to ecosystem-related issues. This would effectively limit its application to the few large platforms that are demonstrably able to orchestrate an ecosystem. It would also reinstate the DMA’s original justification, which was to address the emergence of a few large platforms that are able to act as gatekeepers and enjoy entrenched positions as a result of conglomerate ecosystems.

    Changes to the designation process should also be accompanied by confining the list of ex ante obligations the law imposes. These should reflect relevant differences in platforms’ business models and be tailored to the specific firm under scrutiny, rather than implementing a one-size-fits-all approach.

    There are compelling arguments against the policy choice to regulate platforms and their ecosystems like utilities. The suggested adaptations would at least acknowledge the regulatory nature of the DMA, removing the suspicion that it is merely an antitrust intervention dressed up as regulation.