
What should competition law for the 21st century look like? This question is debated across many jurisdictions. The Digital Markets, Competition, and Consumers Bill (DMCC) would change UK competition law’s approach to large platforms. The bill’s core aim is to place the UK Competition and Markets Authority’s (CMA) Digital Markets Unit (DMU) on a statutory footing with relaxed evidentiary standards, making it easier to regulate so-called “Big Tech” firms. This piece considers some areas to watch as debate over the bill unfolds.

Standards of Evidence for Appeals

Since Magna Carta, the question of evidence for government action has been at the heart of regulation. In that case, of course, a jury of peers decided what the government could do. What is the equivalent rule under the DMCC?

The bill contains a judicial-review standard for challenges to DMU evidence. This amounts to a hands-off approach, quite some distance from the field in Runnymede where King John signed Magna Carta. It is, instead, the social-democratic view that an elite of regulators ought to be empowered for the perceived greater good, subject only to checks that they are acting within the scope of their powers and that there is a rational connection between those powers and the decision made. There is, in other words, no jury of peers. On the contrary, there is a panel of experts. And on this view, experts decide what policy to pursue and weigh the evidence for regulation.

This would be wonderful in a world where everyone could always be trusted. But there are risks in this generosity, as it would also allow even quite weak evidence to prevail. For every Queen Elizabeth II, there is a King John. What happens if a future King John takes over a DMU case? Could those affected by weak evidence standards, or capricious interpretations, push back?

That will not be easy. The risk derives from the classic Wednesbury case, which is the starting point for judicial review of agency action in the UK. The case has similarities to Chevron review in the United States, but without subsequent developments such as the analysis of whether policymaking authority has properly been delegated to the agency, following West Virginia v EPA.

Wednesbury requires a determination to be proven irrational before a court can overturn it. This is a very high bar and amounts to little more than a sanity test. Black cannot be white, but all shades of grey must be accepted by the court, even if the evidence points strongly against the interpretation. For example, consider the question: is there daylight? There is a great difference between an overcast day and a sunny day, and among early dawn, midday, and late dusk. Yet on a Wednesbury approach, even the dimmest hour of the darkest day must be counted as “daylight” because, yes, there is some daylight. It is essentially a tick-box approach. It trusts the regulator completely on policy: in this case, on what counts as bright enough to be called daylight.

At some level, this posture barely trusts the courts at all. It thus foregoes major checks and balances that can helpfully come from the courts. It is myopic, in that sometimes a fresh and neutral pair of eyes is important to ensure sensible, reasonable, and effective approaches. All of us have sometimes focused on a tree and not seen the forest. It can be helpful for a concerned friend to tell us that, provided that the friend is fair, reasonable, and makes the comment based on evidence—and gives us a chance to defend our decision to look only at particular trees.

There has been no suggestion that this fair play is lacking from UK courts, so the bill’s hostility to the tribunal’s role is puzzling. Surely, the DMCC’s intention is not to say: leave me alone when you think I am going wrong?

This has already been criticised in influential commentary, e.g., Florian Mueller’s FOSS Patents blog post on the CMA’s recent decision to block the Microsoft/Activision merger. It is the core reason for the actively contested positions in both the Activision case and the earlier Meta/Giphy case, in which, despite a CMA loss on procedural aspects, all policy grounds and evidentiary interpretations withstood challenge.

This will have major implications for worldwide deals and worldwide business practices, not least as it could effectively displace decisions by other jurisdictions to assess evidence more closely, or not to regulate certain aspects of conduct.

There is also the important point that courts’ ability to review evidence has sometimes been very positive. In a nutshell, if the case for regulation is strong, then there should be no problem with the review of evidence by a neutral third party. This can be seen in the leading case on appeal standards in UK telecoms regulation, BT and CityFibre v Ofcom, which—prior to the move to judicial review for such cases—involved deregulation to help encourage innovation in regional business centres (Leeds, Manchester, Birmingham, etc.).

Overreach by Ofcom—in the form of a predatorily low price cap—was holding back regional business development, because it was not profitable to invest in higher-value but also higher-priced next-generation communications systems. This was overturned under an appeal standard that allowed errors in the evidence base to be identified; notably, a requirement that there be as many as five rivals in an area before it could be considered competitive, which simply contradicted the relevant customer evidence. It is very unlikely that this helpful result would have been obtained had the matter been one for hands-off judicial review.

Balance of Evidence

Closely related to the first point on judicial review is the question of affirmative evidence standards. Even under a judicial-review standard, the DMU must still apply the factors in the bill. There are significant framings of evidence in the DMCC.

The designation process determines whether companies can be regulated

This emphasises scale. A worry here might be that scale alone displaces the analysis of affirmative evidence—i.e., “big is bad” analysis. What if, as in the title of the recent provocative book, sometimes Big Is Beautiful? That thought seems to be lacking from the bill (see s.6(1)(a)). As there are scenarios where companies are large but still competitively constrained, it would be helpful to consider consumer impacts at the designation stage. There is no point in regulating a company just because it is large if the outcomes are good.

The framing of the countervailing benefit exemption (s.29)

The bill seeks to give voice to consumer impacts, but the bar is set high. Under s.29(2)(c), there must be proof that the conduct is indispensable to, and proportionate to, a consumer benefit.

This reverses the burden of proof; companies must prove that they benefit consumers. Normally, this is simply left to the market, unless there is market power. You and I buy products in the marketplace, and this is how consumer benefit is assessed.

In a scenario where this cannot be proven, s.20 would allow conduct orders to require “fair and reasonable terms” (s.20(2)(a)). It does not say to whom or according to whom. This risks allowing the DMU to require reasonable treatment of other businesses, unless the defendant company can prove that consumers benefit. There are strong arguments that this risks harming consumers for the sake of less-efficient competitors.

The consumer evidence aspect of the PCIs

S.44(2) allows, but certainly does not mandate, considering consumer benefits before imposing a pro-competition intervention (PCI). Under s.49(1), such a PCI would carry the sweeping market-investigation powers in Schedule 8 of the Enterprise Act 2002, which extend to rewriting contracts (Sch 8, para. 2), setting prices (Sch 8, paras. 7 and 8), or even breaking up companies (Sch 8, paras. 12 and 13). It is therefore essential that the evidence base be specified more precisely. There must be a clear link back to the concern that gave rise to the PCI and an explanation of why the PCI would address it. There is reference to the ability to test remedies in s.49(3) and (4), but this is not mandatory. Without stronger evidentiary requirements, the PCIs risk becoming discretionary government control over large companies.

Given the breadth of these powers, it would be helpful to require affirmative evidence in relation to asserted entry barriers and competitive foreclosure. If there is truly a desire to dilute the current evidence standards, then what remains could still be specified. Not specifically requiring evidence of impacts on entry and foreclosure, as in the current proposal, is unwise.

Prohibited Conduct

The contemplated codes of conduct could have far-reaching consequences. Risks include inadvertent prohibitions on the development of new products and requirements to stop product development where there is an impact on rivals. See especially s.20(3)(b) (own-product preference) and (h) (impeding switching to others), which arguably could encompass even pro-competitive product integration. There is an acute need for clarification here: product development and product integration frequently affect rivals, but they are also important for consumers and other innovative businesses.

It is risky to use overly broad definitions here (e.g., “restricting interoperability”) without saying more about what makes for stronger or weaker cases for interoperation (both scenarios exist). Interoperability is important, but evidence relating to it would benefit from definition. Otherwise:

  • Bill s.20(3)(e) could well capture improvements to product design;
  • Weasel words like “unfair” use of data (s.20(3)(g)) and “users’… own best interests [according to the DMU]” (s.20(2)(e)) are ripe for rent-seeking; and
  • The vague reference to “using data unfairly” in s.20(3)(g) could be abused to intervene in data-driven markets on an unprincipled basis.

For example, the data provision could easily be used to hobble ad-funded business models that compete with legacy providers. There are tensions here with the stated aim of the legislative consultation, which was to promote, and not to inhibit, innovation.

A simple remediation here would be to apply a balance-of-evidence test keyed to consumer impact, as currently happens with “grey list” consumer-protection matters: the worst risks are “blacklisted” (e.g., excluding liability for death), while more equivocal practices (hidden terms, etc.) are “grey listed” and illegal only where shown, on balance, to be harmful. That simple change would address many of the evidence concerns, as the structure for weighing evidence would be clarified.

Process

The multi-phase due-process protections of the mergers and market-investigations regimes are notably lacking from the conduct and PCI frameworks. For example, a merger matter uses different teams and different timeframes for the initial and final determinations of whether a merger can proceed.

This absence is no surprise, as a major reform elsewhere in the DMCC is to overturn the Competition Appeal Tribunal’s decision in Apple v CMA, where the CMA was found to have breached the statutory timing requirements for market investigations. The time limits there are designed to prevent multiple bites at the cherry and the strategic use of protracted threats of investigation.

The bill would allow the CMA more flexibility than under the existing market-investigation regime. Is the CMA really asking to change the law, having failed to abide by the due-process requirements of the existing one? That would be a bit like asking for a new chair, having refused to sit on a vacant chair right in front of you. Unless this is clarified, the proposal could be misread as a due-process exemption, precisely because the DMU does not want to give due process.

The DMCC’s proponents will argue that the designation process provides timeframes and a first phase element in the cases of “strategic market status” (SMS) firms, with conduct and PCI regulation to follow only if a designation occurs. This, however, overlooks a crucial element: the designation process is effectively a bill of attainder, aimed at particular companies. Where, then, are the due-process rights for those affected? Logically, protections should therefore exceed those in the Enterprise Act market-investigation setting, as those are marketwide, whereas DMU action is aimed at particular firms.

A very sensible check and balance here would be for the DMU to have to make a recommendation for another CMA team to review, as is common in merger-clearance matters.

Benchmarking and Reviews

The proposal contains requirements for review (e.g., s.35 on review of conduct enforcement). The requirements are, however, relatively weak. They amount to an in-house review with no clear framework. There is a very strong argument for a different body to review the work and to prevent mission creep. This may even be welcome to the DMU, as it outsources review work.

The standard for review (e.g., benefits to end users) ought to be clearly specified. The vague reference to “effectiveness” is not this, and has more in common with EU law (e.g., Toshiba) where “effectiveness” of regulation is determined chiefly by the state, and not by the law. (The holding in Toshiba being that of several interpretations, the state is entitled to the most “effective” one, according to… the state.) To the extent that one hopes that the common law regulatory tradition differs, it is puzzling to see the persistence of this statist approach following UK independence from the EU. Entick v Carrington, the DMCC is not.

Other important benchmarking includes reviews of the work of other jurisdictions. For example, the DMU ought not to be given powers that exceed those of EU regulators. Yet arguably, the current proposal does exactly this by omitting some of the structured evidence points in the EU’s Digital Markets Act. There is also a need to ensure international-comity considerations are given due weight, given the broad jurisdictional tests (s.4: UK users, business, or effect). Others—including, notably, jurisdictions from which the largest companies originate—may make different decisions to regulate or not to regulate.

In the case of the UK-U.S. relationship, there have been some historic disagreements to this effect. For example, is the DMU really to be the George III of the 21st century, telling U.S. business what to do from across the sea? It is doubtful that this is intended, yet some of the commitments packages already have worldwide effect. Some in America might just say: “No more kings!”

Those with a long memory will remember how strenuously the UK government pushed back on perceived U.S. overreach the other way, notably in the Freddie Laker v British Airways antitrust litigation of the 1980s, and in the 1990s, in the amicus brief submitted by the UK government in Hartford Fire Insurance v California—at the U.S. Supreme Court, no less. Surely the UK, having objected to de facto U.S. and Californian regulation of Lloyd’s of London, does not now intend to regulate U.S. tech giants on a de facto worldwide basis under UK law?

Public opinion will not take kindly to that type of inconsistency. To the extent that Parliament does not intend worldwide regulation—a sort of British Empire of Big Tech regulation—the extent of the powers ought to be clarified. Indeed, attempting worldwide regulation would very predictably fail (e.g., arms races in regulation between the DMU and EU Commission). An EU-UK regulation race would help nobody, and it can still be avoided by attention to constructive comity considerations.

As the DMCC makes its way through parliamentary committees, those with views on these points will have an excellent opportunity to make themselves known, just as the CMA has done in recent global deals.

At the Jan. 26 Policy in Transition forum—the Mercatus Center at George Mason University’s second annual antitrust forum—various former and current antitrust practitioners, scholars, judges, and agency officials held forth on the near-term prospects for the neo-Brandeisian experiment undertaken in recent years by both the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ). In conjunction with the forum, Mercatus also released a policy brief on 2022’s significant antitrust developments.

Below, I summarize some of the forum’s noteworthy takeaways, followed by concluding comments on the current state of the antitrust enterprise, as reflected in forum panelists’ remarks.

Takeaways

    1. The consumer welfare standard is neither a recent nor an arbitrary antitrust-enforcement construct, and it should not be abandoned in order to promote a more “enlightened” interventionist antitrust.

George Mason University’s Donald Boudreaux emphasized in his introductory remarks that the standard goes back to Adam Smith, who noted in “The Wealth of Nations” nearly 250 years ago that the appropriate end of production is the consumer’s benefit. Moreover, American Antitrust Institute President Diana Moss, a leading proponent of more aggressive antitrust enforcement, argued in standalone remarks against abandoning the consumer welfare standard, as it is sufficiently flexible to justify a more interventionist agenda.

    2. The purported economic justifications for a far more aggressive antitrust-enforcement policy on mergers remain unconvincing.

Moss’ presentation expressed skepticism about vertical-merger efficiencies and called for more aggressive challenges to such consolidations. But Boudreaux skewered those arguments in a recent four-point rebuttal at Café Hayek. As he explains, Moss’ call for more vertical-merger enforcement ignores the fact that “no one has stronger incentives than do the owners and managers of firms to detect and achieve possible improvements in operating efficiencies – and to avoid inefficiencies.”

Moss’ complaint about chronic underenforcement mistakes by overly cautious agencies also ignores the fact that there will always be mistakes, and there is no reason to believe “that antitrust bureaucrats and courts are in a position to better predict the future [regarding which efficiencies claims will be realized] than are firm owners and managers.” Moreover, Moss provided “no substantive demonstration or evidence that vertical mergers often lead to monopolization of markets – that is, to industry structures and practices that harm consumers. And so even if vertical mergers never generate efficiencies, there is no good argument to use antitrust to police such mergers.”

And finally, Boudreaux considers Moss’ complaint that a court refused to condemn the AT&T-Time Warner merger, arguing that this does not demonstrate that antitrust enforcement is deficient:

[A]s soon as the . . . merger proved to be inefficient, the parties themselves undid it. This merger was undone by competitive market forces and not by antitrust! (Emphasis in the original.)

    3. The agencies, however, remain adamant in arguing that merger law has been badly underenforced. As such, the new leadership plans to charge ahead, challenging more mergers based on mere market structure and paying little heed to efficiency arguments or actual showings of likely future competitive harm.

In her afternoon remarks at the forum, Principal Deputy Assistant U.S. Attorney General for Antitrust Doha Mekki highlighted five major planks of Biden administration merger enforcement going forward.

  • Clayton Act Section 7 is an incipiency statute. Thus, “[w]hen a [mere] change in market structure suggests that a firm will have an incentive to reduce competition, that should be enough [to justify a challenge].”
  • “Once we see that a merger may lead to, or increase, a firm’s market power, only in very rare circumstances should we think that a firm will not exercise that power.”
  • A structural presumption “also helps businesses conform their conduct to the law with more confidence about how the agencies will view a proposed merger or conduct.”
  • Efficiencies defenses will be given short shrift, and perhaps ignored altogether. This is because “[t]he Clayton Act does not ask whether a merger creates a more or less efficient firm—it asks about the effect of the merger on competition. The Supreme Court has never recognized efficiencies as a defense to an otherwise illegal merger.”
  • Merger settlements have often failed to preserve competition, and they will be highly disfavored. Therefore, expect a lot more court challenges to mergers than in recent decades. In short, “[w]e must be willing to litigate. . . . [W]e need to acknowledge the possibility that sometimes a court might not agree with us—and yet go to court anyway.”

Mekki’s comments suggest to me that the soon-to-be-released new draft merger guidelines may emphasize structural market-share tests, generally reject efficiencies justifications, and eschew the economic subtleties found in the current guidelines.

    4. The agencies—and the FTC, in particular—have serious institutional problems that undermine their effectiveness, and risk a loss of credibility before the courts in the near future.

In his address to the forum, former FTC Chairman Bill Kovacic lamented the inefficient limitations on reasoned FTC deliberations imposed by the Sunshine Act, which chills informal communications among commissioners. He also pointed to the United States’ peculiar global status of having two enforcers with duplicative antitrust authority, and lamented a lack of policy coherence that reflects imperfect coordination between the agencies.

Perhaps most importantly, Kovacic raised the specter of the FTC losing credibility in a possible world where Humphrey’s Executor is overturned (see here) and the commission is granted little judicial deference. He suggested taking lessons on policy planning and formulation from foreign enforcers—the United Kingdom’s Competition and Markets Authority, in particular. He also decried agency officials’ decisions to belittle prior administrations’ enforcement efforts, seeing it as detracting from the international credibility of U.S. enforcement.

    5. The FTC is embarking on a novel interventionist path at odds with decades of enforcement policy.

In luncheon remarks, Commissioner Christine S. Wilson lamented the lack of collegiality and consultation within the FTC. She warned that far-reaching rulemakings and other new interventionist initiatives may yield a backlash that undermines the institution.

Following her presentation, a panel of FTC experts discussed several aspects of the commission’s “new interventionism.” According to one panelist, the FTC’s new Section 5 Policy Statement on Unfair Methods of Competition (which ties “unfairness” to arbitrary and subjective terms) “will not survive in” (presumably, will be given no judicial deference by) the courts. Another panelist bemoaned rule-of-law problems arising from FTC actions, called for consistency in FTC and DOJ enforcement policies, and warned that the new merger guidelines will represent a “paradigm shift” that generates more business uncertainty.

The panel expressed doubts about the legal prospects for a proposed FTC rule on noncompete agreements, and noted that constitutional challenges to the agency’s authority may engender additional difficulties for the commission.

    6. The DOJ is greatly expanding its willingness to litigate, and is taking actions that may undermine its credibility in court.

Assistant U.S. Attorney General for Antitrust Jonathan Kanter has signaled a disinclination to settle, as well as an eagerness to litigate large numbers of cases (toward that end, he has hired a huge number of litigators). One panelist noted that, given this posture from the DOJ, there is a risk that judges may come to believe that the department’s litigation decisions are not well-grounded in the law and the facts. The business community may also have a reduced willingness to “buy in” to DOJ guidance.

Panelists also expressed doubts about the wisdom of the DOJ bringing more “criminal Sherman Act Section 2” cases. The Sherman Act is a criminal statute, but criminal monopolization prosecutions raise concerns under the “beyond a reasonable doubt” standard of criminal law and due process. Panelists also warned that, if the new merger guidelines are “unsound,” they may detract from the DOJ’s credibility in federal court.

    7. International antitrust developments have introduced costly new ex ante competition-regulation and enforcement-coordination problems.

As one panelist explained, the European Union’s implementation of the new Digital Markets Act (DMA) will harmfully undermine market forces. The DMA is a form of ex ante regulation—primarily applicable to large U.S. digital platforms—that will harmfully interject bureaucrats into network planning and design. The DMA will lead to inefficiencies, market fragmentation, and harm to consumers, and will inevitably have spillover effects outside Europe.

Even worse, the DMA will not displace the application of EU antitrust law, but merely add to its burdens. Regrettably, the DMA’s ex ante approach is being imitated by many other enforcement regimes, and the U.S. government tacitly supports it. The DMA has not been included in the U.S.-EU joint competition dialogue, which risks failure. Canada and the U.K. should also be added to the dialogue.

Other International Concerns

The international panelists also noted that there is an unfortunate lack of convergence on antitrust procedures. Furthermore, different jurisdictions manifest substantial inconsistencies in their approaches to multinational merger analysis, where better coordination is needed. There is a special problem in the areas of merger review and of criminal leniency for price fixers: when multiple jurisdictions need to “sign off” on an enforcement matter, the “most restrictive” jurisdiction has an effective veto.

Finally, former Assistant U.S. Attorney General for Antitrust James Rill—perhaps the most influential promoter of the adoption of sound antitrust laws worldwide—closed the international panel with a call for enhanced transnational cooperation. He highlighted the importance of global convergence on sound antitrust procedures, emphasizing due process. He also advocated bolstering International Competition Network (ICN) and OECD Competition Committee convergence initiatives, and explained that greater transparency in agency-enforcement actions is warranted. In that regard, Rill said, ICN nongovernmental advisers should be given a greater role.

Conclusion

Taken as a whole, the forum’s various presentations painted a rather gloomy picture of the short-term prospects for sound, empirically based, economics-centric antitrust enforcement.

In the United States, the enforcement agencies are committed to far more aggressive antitrust enforcement, particularly with respect to mergers. The agencies’ new approach downplays efficiencies, and they will be quick to presume that broad categories of business conduct are anticompetitive, relying far less on case-specific economic analysis.

The outlook is also bad overseas, as European Union enforcers are poised to implement new ex ante regulation of competition by large platforms as an addition to—not a substitute for—established burdensome antitrust enforcement. Most foreign jurisdictions appear to be following the European lead, and the U.S. agencies are doing nothing to discourage them. Indeed, they appear to fully support the European approach.

The consumer welfare standard, which until recently was the stated touchstone of American antitrust enforcement—and was given at least lip service in Europe—has more or less been set aside. The one saving grace in the United States is that the federal courts may put a halt to the agencies’ overweening ambitions, but that will take years. In the meantime, consumer welfare will suffer and welfare-enhancing business conduct will be disincentivized. The EU courts also may place a minor brake on European antitrust expansionism, but that is less certain.

Recall, however, that when evils flew out of Pandora’s box, hope remained. Let us hope, then, that the proverbial worm will turn, and that new leadership—inspired by hopeful and enlightened policy advocates—will restore principled antitrust grounded in the promotion of consumer welfare.

The blistering pace at which the European Union put forward and adopted the Digital Markets Act (DMA) has attracted the attention of legislators across the globe. In its wake, countries such as South Africa, India, Brazil, and Turkey have all contemplated digital-market regulations inspired by the DMA (and other models of regulation, such as the United Kingdom’s Digital Markets Unit and Australia’s sectoral codes of conduct).

Racing to be among the first jurisdictions to regulate might intuitively seem like a good idea. By emulating the EU, countries could hope to be perceived as on the cutting edge of competition policy, and hopefully earn a seat at the table when the future direction of such regulations is discussed.

There are, however, tradeoffs involved in regulating digital markets, which are arguably even more salient in the case of emerging markets. Indeed, as we will explain here, these jurisdictions often face challenges that significantly alter the ratio of costs and benefits when it comes to enacting regulation.

Drawing from a paper we wrote with Sam Bowman about competition policy in the Association of Southeast Asian Nations (ASEAN) zone, we highlight below three of the biggest issues these initiatives face.

To Regulate Competition, You First Need to Attract Competition

Perhaps the biggest factor cautioning emerging markets against adoption of DMA-inspired regulations is that such rules would impose heavy compliance costs on doing business in markets that are often anything but mature. It is probably fair to say that, in many (maybe most) emerging markets, the most pressing challenge is how to attract investment from international tech firms in the first place, not how to regulate their conduct.

The most salient example comes from South Africa, which has sketched out plans to regulate digital markets. The Competition Commission has announced that Amazon, which is not yet available in the country, would fall under these new rules should it decide to enter—essentially on the presumption that Amazon would displace South Africa’s incumbent firms.

It goes without saying that, at the margin, such plans reduce either the likelihood that Amazon will enter the South African market at all, or the extent of its entry should it choose to do so. South African consumers thus risk losing the vast benefits such entry would bring—benefits that dwarf those from whatever marginal increase in competition might be gained from subjecting Amazon to onerous digital-market regulations.

While other tech firms—such as Alphabet, Meta, and Apple—are already active in most emerging jurisdictions, regulation might still have a similar deterrent effect on their further investment. Indeed, the infrastructure deployed by big tech firms in these jurisdictions is nowhere near as extensive as in Western countries. To put it mildly, emerging-market consumers typically only have access to slower versions of these firms’ services. A quick glimpse at a map of Google Cloud’s global content-delivery network illustrates this point well: there is far less infrastructure in developing markets than in the West.

Ultimately, emerging markets remain relatively underserved compared to those in the West. In such markets, the priority should be to attract tech investment, not to impose regulations that may further slow the deployment of critical internet infrastructure.

Growth Is Key

The potential to boost growth is the most persuasive argument for emerging markets to favor a more restrained approach to competition law and regulation, such as that currently employed in the United States.

Emerging nations may not have the means (or the inclination) to equip digital-market enforcers with resources similar to those of the European Commission. Given these resource constraints, it is essential that such jurisdictions focus their enforcement efforts on those areas that provide the highest return on investment, notably in terms of increased innovation.

This raises an important point. A recent empirical study by Ross Levine, Chen Lin, Lai Wei, and Wensi Xie finds that competition enforcement does, indeed, promote innovation. But among the study’s more surprising findings is that, unlike other areas of competition enforcement, the strength of a jurisdiction’s enforcement of “abuse of dominance” rules does not correlate with increased innovation. Furthermore, jurisdictions that allow for so-called “efficiency defenses” in unilateral-conduct cases also tend to produce more innovation. The authors thus conclude that:

From the perspective of maximizing patent-based innovation, therefore, a legal system that allows firms to exploit their dominant positions based on efficiency considerations could boost innovation.

These findings should give pause to policymakers who seek to emulate the European Union’s DMA—which, among other things, does not allow gatekeepers to put forward so-called “efficiency defenses” that would allow them to demonstrate that their behavior benefits consumers. If growth and innovation are harmed by overinclusive abuse-of-dominance regimes and rules that preclude firms from offering efficiency-based defenses, then this is probably even more true of digital-market regulations that replace case-by-case competition enforcement with per se prohibitions.

In short, the available evidence suggests that, faced with limited enforcement resources, emerging-market jurisdictions should prioritize other areas of competition policy, such as breaking up or mitigating the harmful effects of cartels and exercising appropriate merger controls.

These findings also cut in favor of emphasizing the traditional antitrust goal of maximizing consumer welfare—or, at least, protecting the competitive process. Many of the more recent digital-market regulations—such as the DMA, the UK DMU, and the ACCC sectoral codes of conduct—are instead focused on distributional issues. They seek to ensure that platform users earn a “fair share” of the benefits generated on a platform. In light of Levine et al.’s findings, this approach could be undesirable, as using competition policy to reduce monopoly rents may lead to less innovation.

In short, traditional antitrust law’s focus on consumer welfare and relatively limited enforcement in the area of unilateral conduct may be a good match for emerging nations that want competition regimes that maximize innovation under important resource constraints.

Consider Local Economic and Political Conditions

Emerging jurisdictions have diverse economic and political profiles. These features, in turn, affect the respective costs and benefits of digital-market regulations.

For example, digital-market regulations generally offer very broad discretion to competition enforcers. The DMA details dozens of open-ended prohibitions upon which enforcers can base infringement proceedings. Furthermore, because they are designed to make enforcers’ task easier, these regulations often remove protections traditionally afforded to defendants, such as appeals to the consumer welfare standard or efficiency defenses. The UK’s DMU initiative, for example, would lower the standard of proof that enforcers must meet.

Giving authorities broad powers with limited judicial oversight might be less problematic in jurisdictions where the state has a track record of self-restraint. The consequences of regulatory discretion might, however, be far more problematic in jurisdictions where authorities routinely overstep the mark and where the threat of corruption is very real.

To name but two, countries like South Africa and India rank relatively low in the World Bank’s “ease of doing business index” (84th and 62nd, respectively). They also rank relatively low on the Cato Institute’s “human freedom index” (77th and 119th, respectively—and both score particularly badly in terms of economic freedom). This strongly suggests that authorities in those jurisdictions would be prone to misapplying powers derived from digital-market regulations in ways that hurt growth and consumers.

To make matters worse, outright corruption is also a real problem in several emerging nations. Returning to South Africa and India, both jurisdictions face significant corruption issues (they rank 70th and 85th, respectively, on Transparency International’s “Corruption Perception Index”).

At a more granular level, an inquiry in South Africa revealed rampant corruption under former President Jacob Zuma, while current President Cyril Ramaphosa also faces significant corruption allegations. Writing in the Financial Times in 2018, Gaurav Dalmia—chair of Delhi-based Dalmia Group Holdings—opined that “India’s anti-corruption battle will take decades to win.”

This specter of corruption thus counsels in favor of establishing competition regimes with sufficient checks and balances to prevent competition authorities from being captured by industry or political forces. But most digital-market regulations are designed precisely to remove those protections in order to streamline enforcement. The risk that they could be mobilized toward nefarious ends is thus anything but trivial. This is of particular concern given that such regulations are typically deployed against global firms in order to shield inefficient local firms—raising serious risks of protectionist enforcement that would harm local consumers.

Conclusion

The bottom line is that emerging markets would do well to reconsider the value of regulating digital markets that have yet to reach full maturity. Recent proposals threaten to deter tech investments in these jurisdictions, while raising significant risks of reduced growth, corruption, and consumer-harming protectionism.

[This post is a contribution to Truth on the Market’s continuing digital symposium “FTC Rulemaking on Unfair Methods of Competition.” You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

In a 3-2 July 2021 vote, the Federal Trade Commission (FTC) rescinded the nuanced statement it had issued in 2015 concerning the scope of unfair methods of competition under Section 5 of the FTC Act. At the same time, the FTC rejected the applicability of the balancing test set forth in the rule of reason (and with it, several decades of case law, agency guidance, and legal and economic scholarship).

The July 2021 statement not only rejected these long-established guiding principles for Section 5 enforcement but left in its place nothing but regulatory fiat. In the statement the FTC issued Nov. 10, 2022 (again, by a divided 3-1 vote), the agency has now adopted this “just trust us” approach as a permanent operating principle.

The November 2022 statement purports to provide a standard under which the agency will identify unfair methods of competition under Section 5. As Commissioner Christine Wilson explains in her dissent, however, it clearly fails to do so. Rather, it delivers a collection of vaguely described principles and pejorative rhetoric that encompass loosely defined harms to competition, competitors, workers and a catch-all group of “other market participants.”  

The methodology for identifying these harms is comparably vague. The agency not only again rejects the rule of reason but asserts the authority to take action against a variety of “non-quantifiable harms,” all of which can be addressed at the most “incipient” stages. Moreover, and perhaps most remarkably, the statement specifically rejects any form of “net efficiencies” or “numerical cost-benefit analysis” to guide its enforcement decisions or provide even a modicum of predictability to the business community.  

The November 2022 statement amounts to regulatory fiat on overdrive, presented with a thin veneer of legality derived from a medley of dormant judicial decisions, incomplete characterizations of precedent, and truncated descriptions of legislative history. Under the agency’s dubious understanding of Section 5, Congress in 1914 elected to provide the FTC with the authority to declare any business practice “unfair” subject to no principle other than the agency’s subjective understanding of that term (and, apparently, never to be informed by “numerical cost-benefit analysis”).

Moreover, any enforcement action that targeted a purportedly “unfair” practice would then be adjudicated within the agency and appealable in the first instance to the very same commissioners who authorized the action. This institutional hall of mirrors would establish the FTC as the national “fairness” arbiter subject to virtually no constraining principles under which the exercise of such powers could ever be deemed to have exceeded its scope. The license for abuse is obvious and the departure from due process inherent.

The views reflected in the November 2022 statement would almost certainly lead to a legal dead-end. If the agency takes action under its idiosyncratic understanding of the scope of unfair methods of competition under Section 5, it would elicit a legal challenge likely to lead to one of two outcomes, both adverse to the agency.

First, it is likely that a judge would reject the agency’s understanding of Section 5, since it is irreconcilable with a well-developed body of case law requiring that the FTC (just like any other administrative agency) act under principles that provide businesses with, as described by the 2nd U.S. Circuit Court of Appeals, at least “an inkling as to what they can lawfully do rather than be left in a state of complete unpredictability.”

Any legally defensible interpretation of the scope of unfair methods of competition under Section 5 must take into account not only legislative intent at the time the FTC Act was enacted but more than a century’s worth of case law that courts have developed to govern the actions of administrative powers. Contrary to suggestions made in the November 2022 statement, neither the statute nor the relevant body of case law mandates unqualified deference by courts to the presumed wisdom of expert regulators.

Second, even if a court accepted the agency’s interpretation of the statute (or did so provisionally), there is a strong likelihood that it would then be compelled to strike down Section 5 as an unconstitutional delegation of lawmaking powers from the legislative to the executive branch. A majority of the Supreme Court has expressed increasing concern over actions by regulatory agencies that do not clearly fall within the legislatively specified scope of an agency’s authority. That concern has been directed at the FTC specifically, in AMG Capital Management LLC v. FTC (2021) and now again in the pending case, Axon Enterprise Inc. v. FTC, as well as at other agencies in recent decisions concerning the U.S. Securities and Exchange Commission, the Occupational Safety and Health Administration, the U.S. Environmental Protection Agency, and the United States Patent and Trademark Office. Given that trend, this would seem to be a high-probability outcome.

In short: any enforcement action taken under the agency’s newly expanded understanding of Section 5 is unlikely to withstand judicial scrutiny, either as a matter of statutory construction or as a matter of constitutional principle. Given this legal forecast, the November 2022 statement could be viewed as mere theatrics, unlikely to have a long legal life or much practical impact (although, until judicial intervention, it could impose significant costs on firms that must defend against agency-enforcement actions brought under the unilaterally expanded scope of Section 5).

Even if that were the case, however, the November 2022 statement, and in particular its expanded understanding of the harms that the agency is purportedly empowered to target, is nonetheless significant: it should leave little doubt concerning the lack of any meaningful commitment by agency leadership to the FTC’s historical mission of preserving market competition. Rather, it has become increasingly clear that agency leadership seeks to deploy the powerful remedies of the FTC Act (and the rest of the antitrust-enforcement apparatus) to displace a market-driven economy governed by the free play of competitive forces with an administered economy in which regulators continuously intervene to reengineer economic outcomes on grounds of fairness to favored constituencies, rather than to preserve the competitive process.

Reengineering Section 5 of the FTC Act as a “shadow” antitrust statute that operates outside the rule of reason (or any other constraining objective principle) provides a strategic detour around the inconvenient evidentiary and other legal obstacles that the agency would struggle to overcome when seeking to achieve these policy objectives under the Sherman and Clayton Acts. This intentionally unstructured and inherently politicized approach to antitrust enforcement threatens not only the institutional preconditions for a market economy but ultimately the rule of law itself.

Slow wage growth and rising inequality over the past few decades have pushed economists more and more toward the study of monopsony power—particularly firms’ monopsony power over workers. Antitrust policy has taken notice. For example, when the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) initiated the process of updating their merger guidelines, their request for information included questions about how they should respond to monopsony concerns, as distinct from monopoly concerns.

From a pure economic-theory perspective, there is no important distinction between monopsony power and monopoly power. If Armen is trading his apples in exchange for Ben’s bananas, we can call Armen the seller of apples or the buyer of bananas. The labels (buyer and seller) are essentially arbitrary; as a matter of pure theory, it doesn’t matter which we use. Monopsony and monopoly are just mirror images.

Some infer from this monopoly-monopsony symmetry, however, that extending antitrust to monopsony power will be straightforward. As a practical matter of antitrust enforcement, things are less clear. The moment we go slightly less abstract and use the basic models that economists actually use, monopsony is not simply the mirror image of monopoly. The tools that antitrust economists use to identify market power differ in the two cases.

Monopsony Requires Studying Output

Suppose that the FTC and DOJ are considering a proposed merger. For simplicity, assume they know that the merger will generate either efficiency gains (in which case they want to allow it) or market power (in which case they want to stop it), but not both. The challenge is to look at readily available data, like prices and quantities, to decide which it is. (Let’s ignore the ideal case, in which the agencies could estimate elasticities of demand and supply.)

In a monopoly case, if there are efficiency gains from a merger, the standard model has a clear prediction: the quantity sold in the output market will increase. An economist at the FTC or DOJ with sufficient data will be able to see (or estimate) the efficiencies directly in the output market. Efficiency gains result in either greater output at lower unit cost or else product-quality improvements that increase consumer demand. Because the merger lowers prices for consumers, the agencies (assume they care about the consumer welfare standard) will let the merger go through: consumers are better off.

In contrast, if the merger simply enhances monopoly power without efficiency gains, the quantity sold will decrease, either because the merging parties raise prices or because quality declines. Again, the empirical implication of the merger is seen directly in the market in question. Because the merger raises prices for consumers, the agencies (again applying the consumer welfare standard) will not let the merger go through: consumers are worse off. In both cases, you judge monopoly power by looking directly at the market that may or may not have monopoly power.
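The screening logic above can be sketched with a small numerical example. Everything here is my own illustrative assumption, not a model from the post: pre-merger, a symmetric Cournot duopoly faces linear demand P = a - b*Q with constant marginal cost c; post-merger, a single firm has either the same cost (pure market power) or a lower cost (efficiency gains).

```python
# Hypothetical numbers throughout; linear demand P = a - b*Q, marginal cost c.

def cournot_duopoly_total_output(a, b, c):
    # Each duopolist produces (a - c) / (3b); industry output is twice that.
    return 2 * (a - c) / (3 * b)

def monopoly_output(a, b, c):
    # Monopolist sets marginal revenue equal to marginal cost: Q = (a - c) / (2b).
    return (a - c) / (2 * b)

a, b = 100.0, 1.0
q_pre = cournot_duopoly_total_output(a, b, c=40.0)  # 40.0 units pre-merger
q_power = monopoly_output(a, b, c=40.0)             # 30.0: output falls
q_efficiency = monopoly_output(a, b, c=10.0)        # 45.0: output rises

print(q_pre, q_power, q_efficiency)
```

Under these assumed parameters, a pure market-power merger shows up as a drop in output (40 to 30), while a sufficiently large cost reduction shows up as an increase (40 to 45)—exactly the signal an agency economist would look for in the output market.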

Unfortunately, the monopsony case is more complicated. Ultimately, we can be certain of the effects of monopsony only by looking at the output market, not the input market where the monopsony power is claimed.

To see why, consider again a merger that generates either efficiency gains or market (now monopsony) power. A merger that creates monopsony power will necessarily reduce the prices and quantity purchased of inputs like labor and materials. An overly eager FTC may see a lower quantity of input purchased and jump to the conclusion that the merger increased monopsony power. After all, monopsonies purchase fewer inputs than competitive firms.

Not so fast. Fewer input purchases may instead reflect efficiency gains. For example, if the efficiency gain arises from the elimination of redundancies in a hospital merger, the merged hospital will buy fewer inputs: it will hire fewer technicians and purchase fewer medical supplies. This may even reduce the wages of technicians or the price of medical supplies, even though the newly merged hospital is not exercising any market power to suppress wages.

The key point is that monopsony needs to be treated differently than monopoly. The antitrust agencies cannot simply look at the quantity of inputs purchased in the monopsony case as the flip side of the quantity sold in the monopoly case, because the efficiency-enhancing merger can look like the monopsony merger in terms of the level of inputs purchased.

How can the agencies differentiate efficiency-enhancing mergers from monopsony mergers? The easiest way may be for the agencies to look at the output market: an entirely different market than the one with the possibility of market power. Once we look at the output market, as we would do in a monopoly case, we have clear predictions. If the merger is efficiency-enhancing, there will be an increase in the output-market quantity. If the merger increases monopsony power, the firm perceives its marginal cost as higher than before the merger and will reduce output. 
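A stylized numerical sketch makes the asymmetry concrete. This is my own illustrative model, not one from the post: a merged firm sells into downward-sloping demand P = a - b*q, produces q = A*L from labor L, and hires from a labor market with inverse supply w = w0 + s*L. An efficiency merger raises productivity A while the firm continues to take the wage as given; a monopsony merger leaves A unchanged but lets the firm internalize the labor-supply curve.

```python
# All functional forms and parameter values are assumptions for illustration.

def wage_taker_labor(a, b, A, w_bar):
    # Takes the wage as given and sets marginal revenue product equal to it:
    # d/dL [(a - b*A*L) * A * L] = a*A - 2*b*A**2*L = w_bar
    return (a * A - w_bar) / (2 * b * A**2)

def monopsonist_labor(a, b, A, w0, s):
    # Internalizes labor supply w = w0 + s*L, so marginal labor cost is w0 + 2*s*L:
    # a*A - 2*b*A**2*L = w0 + 2*s*L
    return (a * A - w0) / (2 * b * A**2 + 2 * s)

a, b, w0, s = 100.0, 1.0, 10.0, 0.25

L_pre = wage_taker_labor(a, b, A=1.0, w_bar=20.0)         # 40.0 workers (wage 20 = w0 + s*40)
L_monopsony = monopsonist_labor(a, b, A=1.0, w0=w0, s=s)  # 36.0 workers
L_efficiency = wage_taker_labor(a, b, A=2.0, w_bar=20.0)  # 22.5 workers

q_pre = 1.0 * L_pre               # 40.0 units
q_monopsony = 1.0 * L_monopsony   # 36.0: output falls
q_efficiency = 2.0 * L_efficiency # 45.0: output rises
```

Labor purchased falls in both post-merger scenarios (36 and 22.5, versus 40 pre-merger), so the input market alone cannot tell the two stories apart; only the output market separates them, with quantity falling under monopsony and rising under efficiencies.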

In short, as we look for how to apply antitrust to monopsony-power cases, the agencies and courts cannot look to the input market to differentiate them from efficiency-enhancing mergers; they must look at the output market. It is impossible to discuss monopsony power coherently without considering the output market.

In real-world cases, mergers will not necessarily be either strictly efficiency-enhancing or strictly monopsony-generating, but a blend of the two. Any rigorous consideration of merger effects must account for both and make some tradeoff between them. The question of how guidelines should address monopsony power is inextricably tied to the consideration of merger efficiencies, particularly given the point above that identifying and evaluating monopsony power will often depend on its effects in downstream markets.

This is just one complication that arises when we move from the purest of pure theory to slightly more applied models of monopoly and monopsony power. Geoffrey Manne, Dirk Auer, Eric Fruits, Lazar Radic and I go through more of the complications in our comments submitted to the FTC and DOJ on updating the merger guidelines.

What Assumptions Make the Difference Between Monopoly and Monopsony?

Now that we have shown that monopsony and monopoly are different, how do we square this with the initial observation that it was arbitrary whether we say Armen has monopsony power over apples or monopoly power over bananas?

There are two differences between the standard monopoly and monopsony models. First, in a vast majority of models of monopsony power, the agent with the monopsony power is buying goods only to use them in production. They have a “derived demand” for some factors of production. That demand ties their buying decision to an output market. For monopoly power, the firm sells the goods, makes some money, and that’s the end of the story.

The second difference is that the standard monopoly model looks at one output good at a time. The standard factor-demand model uses two inputs, which introduces a tradeoff between, say, capital and labor. We could force monopoly to look like monopsony by assuming the merging parties each produce two different outputs, apples and bananas. An efficiency gain could favor apple production and hurt banana consumers. While this sort of substitution among outputs is often realistic, it is not the standard economic way of modeling an output market.

The Jan. 18 Request for Information on Merger Enforcement (RFI)—issued jointly by the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ)—sets forth 91 sets of questions (subsumed under 15 headings) that provide ample opportunity for public comment on a large range of topics.

Before chasing down individual analytic rabbit holes related to specific questions, it would be useful to reflect on the “big picture” policy concerns raised by this exercise (but not hinted at in the questions). Viewed from a broad policy perspective, the RFI initiative risks undermining the general respect that courts have accorded merger guidelines over the years, as well as disincentivizing economically beneficial business consolidations.

Policy concerns that flow from various features of the RFI, which could undermine effective merger enforcement, are highlighted below. These concerns counsel against producing overly detailed guidelines that adopt a merger-skeptical orientation.

The RFI Reflects the False Premise that Competition Is Declining in the United States

The FTC press release that accompanied the RFI’s release made clear that a supposed weakening of competition under the current merger-guidelines regime is a key driver of the FTC and DOJ interest in new guidelines:

Today, the Federal Trade Commission (FTC) and the Justice Department’s Antitrust Division launched a joint public inquiry aimed at strengthening enforcement against illegal mergers. Recent evidence indicates that many industries across the economy are becoming more concentrated and less competitive – imperiling choice and economic gains for consumers, workers, entrepreneurs, and small businesses.

This premise is not supported by the facts. Based on a detailed literature review, Chapter 6 of the 2020 Economic Report of the President concluded that “the argument that the U.S. economy is suffering from insufficient competition is built on a weak empirical foundation and questionable assumptions.” More specifically, the 2020 Economic Report explained:

Research purporting to document a pattern of increasing concentration and increasing markups uses data on segments of the economy that are far too broad to offer any insights about competition, either in specific markets or in the economy at large. Where data do accurately identify issues of concentration or supercompetitive profits, additional analysis is needed to distinguish between alternative explanations, rather than equating these market indicators with harmful market power.

Soon-to-be-published quantitative research by Robert Kulick of NERA Economic Consulting and the American Enterprise Institute, presented at the Jan. 26 Mercatus Antitrust Forum, is consistent with the 2020 Economic Report’s findings. Kulick stressed that there was no general trend toward increasing industrial concentration in the U.S. economy from 2002 to 2017. In particular, industrial concentration has been declining since 2007; the Herfindahl–Hirschman index (HHI) for manufacturing has declined significantly since 2002; and the economywide four-firm concentration ratio (CR4) in 2017 was approximately the same as in 2002.

Even in industries where concentration may have risen, “the evidence does not support claims that concentration is persistent or harmful.” In that regard, Kulick’s research finds that higher-concentration industries tend to become less concentrated, while lower-concentration industries tend to become more concentrated over time; increases in industrial concentration are associated with economic growth and job creation, particularly for high-growth industries; and rising industrial concentration may be driven by increasing market competition.

In short, the strongest justification for issuing new merger guidelines is based on a false premise: an alleged decline in competition within the United States. Given this reality, the adoption of revised guidelines designed to “ratchet up” merger enforcement would appear highly questionable.

The RFI Strikes a Merger-Skeptical Tone Out of Touch with Modern Mainstream Antitrust Scholarship

The overall tone of the RFI reflects a skeptical view of the potential benefits of mergers. It ignores overarching beneficial aspects of mergers, which include reallocating scarce resources to higher-valued uses (through the market for corporate control) and realizing standard efficiencies of various sorts (including cost-based efficiencies and incentive effects, such as the elimination of double marginalization through vertical integration). Mergers also generate benefits by bringing together complementary assets and by generating synergies of various sorts, including the promotion of innovation and scaling up the fruits of research and development. (See here, for example.)

What’s more, as the Organisation for Economic Co-operation and Development (OECD) has explained, “[e]vidence suggests that vertical mergers are generally pro-competitive, as they are driven by efficiency-enhancing motives such as improving vertical co-ordination and realizing economies of scope.”

Given the manifold benefits of mergers in general, the negative and merger-skeptical tone of the RFI is regrettable. It not only ignores sound economics, but it is at odds with recent pronouncements by the FTC and DOJ. Notably, the 2010 DOJ-FTC Horizontal Merger Guidelines (issued by Obama administration enforcers) struck a neutral tone. Those guidelines recognized the duty to challenge anticompetitive mergers while noting the public interest in avoiding unnecessary interference with non-anticompetitive mergers (“[t]he Agencies seek to identify and challenge competitively harmful mergers while avoiding unnecessary interference with mergers that are either competitively beneficial or neutral”). The same neutral approach is found in the 2020 DOJ-FTC Vertical Merger Guidelines (“the Agencies use a consistent set of facts and assumptions to evaluate both the potential competitive harm from a vertical merger and the potential benefits to competition”).

The RFI, however, expresses no concern about unnecessary government interference, and strongly emphasizes the potential shortcomings of the existing guidelines in questioning whether they “adequately equip enforcers to identify and proscribe unlawful, anticompetitive mergers.” Merger-skepticism is also reflected throughout the RFI’s 91 sets of questions. A close reading reveals that they are generally phrased in ways that implicitly assume competitive problems or reject potential merger justifications.

For example, the questions addressing efficiencies, under RFI heading 14, cast efficiencies in a generally negative light. Thus, the RFI asks whether “the [existing] guidelines’ approach to efficiencies [is] consistent with the prevailing legal framework as enacted by Congress and interpreted by the courts,” citing the statement in FTC v. Procter & Gamble (1967) that “[p]ossible economies cannot be used as a defense to illegality.”

The view that antitrust disfavors mergers that enhance efficiencies (the “efficiencies offense”) has been roundly rejected by mainstream antitrust scholarship (see, for example, here, here, and here). It may be assumed that today’s Supreme Court (which has deemed consumer welfare to be the lodestone of antitrust enforcement since Reiter v. Sonotone (1979)) would give short shrift to an “efficiencies offense” justification for a merger challenge.

Another efficiencies-related question, under RFI heading 14.d, may in application fly in the face of sound market-oriented economics: “Where a merger is expected to generate cost savings via the elimination of ‘excess’ or ‘redundant’ capacity or workers, should the guidelines treat these savings as cognizable ‘efficiencies’?”

Consider a merger that generates synergies and thereby expands and/or raises the quality of goods and services produced with reduced capacity and fewer workers. This merger would allow these resources to be allocated to higher-valued uses elsewhere in the economy, yielding greater economic surplus for consumers and producers. But there is the risk that such a merger could be viewed unfavorably under new merger guidelines that were revised in light of this question. (Although heading 14.d includes a separate question regarding capacity reductions that have the potential to reduce supply resilience or product or service quality, it is not stated that this provision should be viewed as a limitation on the first sentence.)

The RFI’s discussion of topics other than efficiencies similarly sends the message that existing guidelines are too “pro-merger.” Thus, for example, under RFI heading 5 (“presumptions”), one finds the rhetorical question: “[d]o the [existing] guidelines adequately identify mergers that are presumptively unlawful under controlling case law?”

This question answers itself, by citing to the Philadelphia National Bank (1963) statement that “[w]ithout attempting to specify the smallest market share which would still be considered to threaten undue concentration, we are clear that 30% presents that threat.” This statement predates all of the merger guidelines and is out of step with the modern economic analysis of mergers, which the existing guidelines embody. It would, if taken seriously, threaten a huge number of proposed mergers that, until now, have not been subject to second-request review by the DOJ and FTC. As Judge Douglas Ginsburg and former Commissioner Joshua Wright have explained:

The practical effect of the PNB presumption is to shift the burden of proof from the plaintiff, where it rightfully resides, to the defendant, without requiring evidence – other than market shares – that the proposed merger is likely to harm competition. . . . The presumption ought to go the way of the agencies’ policy decision to drop reliance upon the discredited antitrust theories approved by the courts in such cases as Brown Shoe, Von’s Grocery, and Utah Pie. Otherwise, the agencies will ultimately have to deal with the tension between taking advantage of a favorable presumption in litigation and exerting a reformative influence on the direction of merger law.

By inviting support for PNB-style thinking, RFI heading 5’s lead question effectively rejects the economic effects-based analysis that has been central to agency merger analysis for decades. Guideline revisions that downplay effects in favor of mere concentration would likely be viewed askance by reviewing courts (and almost certainly would be rejected by the Supreme Court, as currently constituted, if the occasion arose).

These particularly striking examples are illustrative of the questioning tone regarding existing merger analysis that permeates the RFI.

New Merger Guidelines, if Issued, Should Not Incorporate the Multiplicity of Issues Embodied in the RFI

The 91 sets of questions in the RFI read, in large part, like a compendium of theoretical harms to the working of markets that might be associated with mergers. While these questions may be of general academic interest, and may shed some light on particular merger investigations, most of them should not be incorporated into guidelines.

As Justice Stephen Breyer has pointed out, antitrust is a legal regime that must account for administrative practicalities. Then-Judge Breyer described the nature of the problem in his 1983 Barry Wright opinion (affirming the dismissal of a Sherman Act Section 2 complaint based on “unreasonably low” prices):

[W]hile technical economic discussion helps to inform the antitrust laws, those laws cannot precisely replicate the economists’ (sometimes conflicting) views. For, unlike economics, law is an administrative system the effects of which depend upon the content of rules and precedents only as they are applied by judges and juries in courts and by lawyers advising their clients. Rules that seek to embody every economic complexity and qualification may well, through the vagaries of administration, prove counter-productive, undercutting the very economic ends they seek to serve.

It follows that any effort to include every theoretical merger-related concern in new merger guidelines would undercut their (presumed) overarching purpose, which is providing useful guidance to the private sector. All-inclusive “guidelines” in reality provide no guidance at all. Faced with a laundry list of possible problems that might prompt the FTC or DOJ to oppose a merger, private parties would face enormous uncertainty, which could deter them from proposing a large number of procompetitive, welfare-enhancing or welfare-neutral consolidations. This would “undercut the very economic ends” of promoting competition that Section 7 enforcement serves.

Furthermore, all-inclusive merger guidelines could be seen by judges as undermining the rule of law (see here, for example). If DOJ and FTC were able to “pick and choose” at will from an enormously wide array of considerations to justify opposing a proposed merger, they could be seen as engaged in arbitrary enforcement, rather than in a careful weighing of evidence aimed at condemning only anticompetitive transactions. This would be at odds with the promise of fair and dispassionate enforcement found in the 2010 Horizontal Merger Guidelines, namely, to “seek to identify and challenge competitively harmful mergers while avoiding unnecessary interference with mergers that are either competitively beneficial or neutral.”

Up until now, federal courts have virtually always implicitly deferred to (and not questioned) the application of merger-guideline principles by the DOJ and FTC. The agencies have won or lost cases based on courts’ weighing of particular factual and economic evidence, not on whether guideline principles should have been applied by the enforcers.

One would expect courts to react very differently, however, to cases brought in light of ridiculously detailed “guidelines” that did not provide true guidance (particularly if they were heavy on competitive harm possibilities and discounted efficiencies). The agencies’ selective reliance on particular anticompetitive theories could be seen as exercises in arbitrary “pre-cooked” condemnations, not dispassionate enforcement. As such, the courts would tend to be far more inclined to reject (or accord far less deference to) the new guidelines in evaluating agency merger challenges. Even transactions that would have been particularly compelling candidates for condemnation under prior guidelines could be harder to challenge successfully, due to the taint of the new guidelines.

In short, the adoption of highly detailed guidelines that emphasize numerous theories of harm would likely undermine the effectiveness of DOJ and FTC merger enforcement, the precise opposite of what the agencies would have intended.

New Merger Guidelines, if Issued, Should Avoid Relying on Outdated Case Law and Novel Section 7 Theories, and Should Give Due Credit to Economic Efficiencies

The DOJ and FTC could, of course, acknowledge the problem of administrability and issue more straightforward guideline revisions, of comparable length and detail to prior guidelines. If they choose to do so, they would be well-advised to eschew relying on dated precedents and novel Section 7 theories. They should also give due credit to efficiencies. Seemingly biased guidelines would undermine merger enforcement, not strengthen it.

As discussed above, the RFI’s implicitly favorable references to Philadelphia National Bank and Procter & Gamble are at odds with contemporary economics-based antitrust thinking, which has been accepted by the federal courts. The favorable treatment of those antediluvian holdings, and Brown Shoe Co. v. United States (1962) (another horribly dated case cited multiple times in the RFI), would do much to discredit new guidelines.

In that regard, the suggestion in RFI heading 1 that existing merger guidelines may not “faithfully track the statutory text, legislative history, and established case law around merger enforcement” touts the Brown Shoe and PNB concerns with a “trend toward concentration” and “the danger of subverting congressional intent by permitting a too-broad economic investigation.”

New guidelines that focus on (or even give lip service to) a “trend” toward concentration and eschew overly detailed economic analyses (as opposed, perhaps, to purely concentration-based negative rules of thumb?) would predictably come in for judicial scorn as economically unfounded. Such references would do as much (if not more) to ensure judicial rejection of enforcement-agency guidelines as endless lists of theoretically possible sources of competitive harm, discussed previously.

Of particular concern are those references that implicitly reject the need to consider efficiencies, which is key to modern enlightened merger evaluations. It is ludicrous to believe that a majority of the current Supreme Court would have a merger-analysis epiphany and decide that the RFI’s preferred interventionist reading of Section 7 statutory language and legislative history trumps decades of economically centered consumer-welfare scholarship and agency guidelines.

Herbert Hovenkamp, author of the leading American antitrust treatise and a scholar who has been cited countless times by the Supreme Court, recently put it well (in an article coauthored with Carl Shapiro):

When the FTC investigates vertical and horizontal mergers will it now take the position that efficiencies are irrelevant, even if they are proven? If so, the FTC will face embarrassing losses in court.

Reviewing courts would no doubt take heed of this statement in assessing any future merger guidelines that rely on dated and discredited cases or that minimize efficiencies.

New Guidelines, if Issued, Should Give Due Credit to Efficiencies

Heading 14 of the RFI—listing seven sets of questions that deal with efficiencies—is in line with the document’s implicitly negative portrayal of mergers. The heading begins inauspiciously, with a question that cites Procter & Gamble in suggesting that the current guidelines’ approach to efficiencies is “[in]consistent with the prevailing legal framework as enacted by Congress and interpreted by the courts.” As explained above, such an anti-efficiencies reference would be viewed askance by most, if not all, reviewing judges.

Other queries in heading 14 also view efficiencies as problematic. They suggest that efficiency claims should be treated negatively because such claims are not always realized after the fact. But merger activity is a private-sector search process, and the inability to predict ex post effects with perfect accuracy is an inevitable part of market activity. Using such a natural aspect of markets as an excuse to ignore efficiencies would prevent many economically desirable consolidations from being achieved.

Furthermore, the suggestion under heading 14 that parties should have to show with certainty that cognizable efficiencies could not have been achieved through alternative means asks the impossible. Theoreticians may be able to dream up alternative means by which efficiencies might have been achieved (say, through convoluted contracts), but such constructs may not be practical in real-world settings. Requiring businesses to follow dubious theoretical approaches to achieve legitimate business ends, rather than allowing them to enter into arrangements they favor that appear efficient, would manifest inappropriate government interference in markets. (It would be just another example of the “pretense of knowledge” that Friedrich Hayek brilliantly described in his 1974 Nobel Prize lecture.)

Other questions under heading 14 raise concerns about the lack of discussion of possible “inefficiencies” in current guidelines, and speculate about possible losses of “product or service quality” due to otherwise efficient reductions in physical capacity and employment. Such theoretical musings offer little guidance to the private sector, and further cast in a negative light potential real resource savings.

Rather than incorporate the unhelpful theoretical efficiencies critiques under heading 14, the agencies should consider a more helpful approach to clarifying the evaluation of efficiencies in new guidelines. Such a clarification could be based on Commissioner Christine Wilson’s helpful discussion of merger efficiencies in recent writings (see, for example, here and here). Wilson has appropriately called for the symmetric treatment of both the potential harms and benefits arising from mergers, explaining that “the agencies readily credit harms but consistently approach potential benefits with extreme skepticism.”

She and Joshua Wright have also explained (see here, here, and here) that overly narrow product-market definitions may sometimes preclude consideration of substantial “out-of-market” efficiencies that arise from certain mergers. The consideration of offsetting “out-of-market” efficiencies that greatly outweigh competitive harms might warrant inclusion in new guidelines.

The FTC and DOJ could be heading for a merger-enforcement train wreck if they adopt new guidelines that incorporate the merger-skeptical tone and excruciating level of detail found in the RFI. This approach would yield a lengthy and uninformative laundry list of potential competitive problems that would allow the agencies to selectively pick competitive harm “stories” best adapted to oppose particular mergers, in tension with the rule of law.

Far from “strengthening” merger enforcement, such new guidelines would lead to economically harmful business uncertainty and would severely undermine judicial respect for the federal merger-enforcement process. The end result would be a “lose-lose” for businesses, for enforcers, and for the American economy.

Conclusion

If the agencies enact new guidelines, they should be relatively short and straightforward, designed to give private parties the clearest possible picture of general agency enforcement intentions. In particular, new guidelines should:

  1. Eschew references to dated and discredited case law;
  2. Adopt a neutral tone that acknowledges the beneficial aspects of mergers;
  3. Recognize the duty to challenge anticompetitive mergers, while at the same time noting the public interest in avoiding unnecessary interference with non-anticompetitive mergers (consistent with the 2010 Horizontal Merger Guidelines); and
  4. Acknowledge the importance of efficiencies, treating them symmetrically with competitive harm and according appropriate weight to countervailing out-of-market efficiencies (a distinct improvement over existing enforcement policy).

Merger enforcement should continue to be based on fact-based case-specific evaluations, informed by sound economics. Populist nostrums that treat mergers with suspicion and that ignore their beneficial aspects should be rejected. Such ideas are at odds with current scholarly thinking and judicial analysis, and should be relegated to the scrap heap of outmoded and bad public policies.

Recent antitrust forays on both sides of the Atlantic have unfortunate echoes of the oldie-but-baddie “efficiencies offense” that once plagued American and European merger analysis (and, more broadly, reflected a “big is bad” theory of antitrust). After a very short overview of the history of merger efficiencies analysis under American and European competition law, we briefly examine two current enforcement matters “on both sides of the pond” that impliedly give rise to such a concern. Those cases may regrettably foreshadow a move by enforcers to downplay the importance of efficiencies, if not openly reject them.

Background: The Grudging Acceptance of Merger Efficiencies

Not long ago, economically literate antitrust teachers in the United States enjoyed poking fun at such benighted 1960s Supreme Court decisions as Procter & Gamble (following in the wake of Brown Shoe and Philadelphia National Bank). Those holdings—which not only rejected efficiencies justifications for mergers, but indeed “treated efficiencies more as an offense”—seemed a thing of the past, put to rest by the rise of an economic approach to antitrust. Several early European Commission merger-control decisions also arguably embraced an “efficiencies offense.”

Starting in the 1980s, the promulgation of increasingly economically sophisticated merger guidelines in the United States led to the acceptance of efficiencies (albeit less than perfectly) as an important aspect of integrated merger analysis. Several practitioners have claimed, nevertheless, that “efficiencies are seldom credited and almost never influence the outcome of mergers that are otherwise deemed anticompetitive.” Commissioner Christine Wilson has argued that the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) still have work to do in “establish[ing] clear and reasonable expectations for what types of efficiency analysis will and will not pass muster.”

In its first few years of merger review, which was authorized in 1989, the European Commission was hostile to merger-efficiency arguments.  In 2004, however, the EC promulgated horizontal merger guidelines that allow for the consideration of efficiencies, but only if three cumulative conditions (consumer benefit, merger specificity, and verifiability) are satisfied. A leading European competition practitioner has characterized several key European Commission merger decisions in the last decade as giving rather short shrift to efficiencies. In light of that observation, the practitioner has advocated that “the efficiency offence theory should, once again, be repudiated by the Commission, in order to avoid deterring notifying parties from bringing forward perfectly valid efficiency claims.”

In short, although the actual weight enforcers accord to efficiency claims is a matter of debate, efficiency justifications are cognizable, subject to constraints, as a matter of U.S. and European Union merger-enforcement policy. Whether that will remain the case is, unfortunately, uncertain, given DOJ and FTC plans to revise merger guidelines, as well as EU talk of convergence with U.S. competition law.

Two Enforcement Matters with ‘Efficiencies Offense’ Overtones

Two Facebook-related matters currently before competition enforcers—one in the United States and one in the United Kingdom—have implications for the possible revival of an antitrust “efficiencies offense” as a “respectable” element of antitrust policy. (I use the term Facebook to reference both the platform company and its corporate parent, Meta.)

FTC v. Facebook

The FTC’s 2020 federal district court monopolization complaint against Facebook, still in the motion to dismiss the amended complaint phase (see here for an overview of the initial complaint and the judge’s dismissal of it), rests substantially on claims that Facebook’s acquisitions of Instagram and WhatsApp harmed competition. As Facebook points out in its recent reply brief supporting its motion to dismiss the FTC’s amended complaint, Facebook appears to be touting merger-related efficiencies in critiquing those acquisitions. Specifically:

[The amended complaint] depends on the allegation that Facebook’s expansion of both Instagram and WhatsApp created a “protective ‘moat’” that made it harder for rivals to compete because Facebook operated these services at “scale” and made them attractive to consumers post-acquisition. . . . The FTC does not allege facts that, left on their own, Instagram and WhatsApp would be less expensive (both are free; Facebook made WhatsApp free); or that output would have been greater (their dramatic expansion at “scale” is the linchpin of the FTC’s “moat” theory); or that the products would be better in any specific way.

The FTC’s concerns about a scale-based merger-related output expansion that benefited consumers and thereby allegedly enhanced Facebook’s market position eerily echo the commission’s concerns in Procter & Gamble that merger-related cost-reducing joint efficiencies in advertising had an anticompetitive “entrenchment” effect. Both positions, in essence, characterize output-increasing efficiencies as harmful to competition: in other words, as “efficiencies offenses.”

UK Competition and Markets Authority (CMA) v. Facebook

The CMA announced Dec. 1 that it had decided to retrospectively block Facebook’s 2020 acquisition of Giphy, which is “a company that provides social media and messaging platforms with animated GIF images that users can embed in posts and messages. . . . These platforms license the use of Giphy for its users.”

The CMA theorized that Facebook could harm competition by (1) restricting access to Giphy’s digital libraries by Facebook’s competitors; and (2) preventing Giphy from developing into a potential competitor to Facebook’s display advertising business.

As a CapX analysis explains, the CMA’s theory of harm to competition, based on theoretical speculation, is problematic. First, a behavioral remedy short of divestiture, such as requiring Facebook to maintain open access to its gif libraries, would deal with the threat of restricted access. Indeed, Facebook promised at the time of the acquisition that Giphy would maintain its library and make it widely available. Second, “loss of a single, relatively small, potential competitor out of many cannot be counted as a significant loss for competition, since so many other potential and actual competitors remain.” Third, given the purely theoretical and questionable danger to future competition, the CMA “has blocked this deal on relatively speculative potential competition grounds.”

Apart from the weakness of the CMA’s case for harm to competition, the CMA appears to ignore a substantial potential dynamic integrative efficiency flowing from Facebook’s acquisition of Giphy. As David Teece explains:

Facebook’s acquisition of Giphy maintained Giphy’s assets and furthered its innovation in Facebook’s ecosystem, strengthening that ecosystem in competition with others; and via Giphy’s APIs, strengthening the ecosystems of other service providers as well.

There is no evidence that the CMA seriously took account of this integrative efficiency, which benefits consumers by offering them a richer experience from Facebook and its subsidiary Instagram, and which spurs competing ecosystems to enhance their offerings to consumers as well. This is a failure to properly account for an efficiency. Moreover, to the extent that the CMA viewed these integrative benefits as somehow anticompetitive (because they enhanced Facebook’s competitive position), the improvement of Facebook’s ecosystem could have been deemed a type of “efficiencies offense.”

Are the Facebook Cases Merely Random Straws in the Wind?

At first blush, it might seem that we are reading too much into the apparent slighting of efficiencies in the two current Facebook cases. Nevertheless, recent policy rhetoric suggests that economic efficiencies arguments (whose status was tenuous at enforcement agencies to begin with) may actually be viewed as “offensive” by the new breed of enforcers.

In her Sept. 22 policy statement on “Vision and Priorities for the FTC,” Chair Lina Khan advocated focusing on the possible competitive harm flowing from actions of “gatekeepers and dominant middlemen,” and from “one-sided [vertical] contract provisions” that are “imposed by dominant firms.” No suggestion can be found in the statement that such vertical relationships often confer substantial benefits on consumers. This hints at a new campaign by the FTC against vertical restraints (as opposed to an emphasis on clearly welfare-inimical conduct) that could discourage a wide range of efficiency-producing contracts.

Chair Khan also sponsored the FTC’s July 2021 rescission of its Section 5 Policy Statement on Unfair Methods of Competition, which had emphasized the primacy of consumer welfare as the guiding principle underlying FTC antitrust enforcement. A willingness to set aside (or place a lower priority on) consumer welfare considerations suggests a readiness to ignore efficiency justifications that benefit consumers.

Even more troubling, a direct attack on the consideration of efficiencies is found in the statement accompanying the FTC’s September 2021 withdrawal of the 2020 Vertical Merger Guidelines:

The statement by the FTC majority . . . notes that the 2020 Vertical Merger Guidelines had improperly contravened the Clayton Act’s language with its approach to efficiencies, which are not recognized by the statute as a defense to an unlawful merger. The majority statement explains that the guidelines adopted a particularly flawed economic theory regarding purported pro-competitive benefits of mergers, despite having no basis of support in the law or market reality.

Also noteworthy is Khan’s seeming interest (found in her writings here, here, and here) in reviving Robinson-Patman Act enforcement. What’s worse, President Joe Biden’s July 2021 Executive Order on Competition explicitly endorses FTC investigation of “retailers’ practices on the conditions of competition in the food industries, including any practices that may violate [the] Robinson-Patman Act” (emphasis added). Those troubling statements from the administration ignore the widespread scholarly disdain for Robinson-Patman, which is almost unanimously viewed as an attack on efficiencies in distribution. For example, in recommending the act’s repeal in 2007, the congressionally established Antitrust Modernization Commission stressed that the act “protects competitors against competition and punishes the very price discounting and innovation and distribution methods that the antitrust laws otherwise encourage.”

Finally, newly confirmed Assistant Attorney General for Antitrust Jonathan Kanter (who is widely known as a Big Tech critic) has expressed his concerns about the consumer welfare standard and the emphasis on economics in antitrust analysis. Such concerns also suggest, at least by implication, that the Antitrust Division under Kanter’s leadership may manifest a heightened skepticism toward efficiencies justifications.

Conclusion

Recent straws in the wind suggest that an anti-efficiencies hay pile is in the works. Although antitrust agencies have not yet officially rejected the consideration of efficiencies, nor endorsed an “efficiencies offense,” the signs are troubling. Newly minted agency leaders’ skepticism toward antitrust economics, combined with their de-emphasis of the consumer welfare standard and efficiencies (at least in the merger context), suggest that even strongly grounded efficiency explanations may be summarily rejected at the agency level. In foreign jurisdictions, where efficiencies are even less well-established, and enforcement based on mere theory (as opposed to empiricism) is more widely accepted, the outlook for efficiencies stories appears to be no better.     

One powerful factor, however, should continue to constrain the anti-efficiencies movement, at least in the United States: the federal courts. As demonstrated most recently in the 9th U.S. Circuit Court of Appeals’ FTC v. Qualcomm decision, American courts remain committed to insisting on empirical support for theories of harm and on seriously considering business justifications for allegedly suspect contractual provisions. (The role of foreign courts in curbing prosecutorial excesses not grounded in economics, and in weighing efficiencies, depends upon the jurisdiction, but in general such courts are far less of a constraint on enforcers than American tribunals.)

While the DOJ and FTC (and, perhaps to a lesser extent, foreign enforcers) will have to keep the judiciary in mind in deciding to bring enforcement actions, the denigration of efficiencies by the agencies still will have an unfortunate demonstration effect on the private sector. Given the cost (both in resources and in reputational capital) associated with antitrust investigations, and the inevitable discounting for the risk of projects caught up in such inquiries, a publicly proclaimed anti-efficiencies enforcement philosophy will do damage. On the margin, it will lead businesses to introduce fewer efficiency-seeking improvements that could be (wrongly) characterized as “strengthening” or “entrenching” market dominance. Such business decisions, in turn, will be welfare-inimical; they will deny consumers the benefit of efficiencies-driven product and service enhancements, and slow the rate of business innovation.

As such, it is to be hoped that, upon further reflection, U.S. and foreign competition enforcers will see the light and publicly proclaim that they will fully weigh efficiencies in analyzing business conduct. The “efficiencies offense” was a lousy tune. That “oldie-but-baddie” should not be replayed.

The American Choice and Innovation Online Act (previously called the Platform Anti-Monopoly Act), introduced earlier this summer by U.S. Rep. David Cicilline (D-R.I.), would significantly change the nature of digital platforms and, with them, the internet itself. Taken together, the bill’s provisions would turn platforms into passive intermediaries, undermining many of the features that make them valuable to consumers. This seems likely to remain the case even after potential revisions intended to minimize the bill’s unintended consequences.

In its current form, the bill is split into two parts, each of which is dangerous in its own right. The first, Section 2(a), would prohibit almost any kind of “discrimination” by platforms. Because it is so open-ended, lawmakers might end up removing it in favor of the nominally more focused provisions of Section 2(b), which prohibit certain named conduct. But despite being more specific, this section of the bill is incredibly far-reaching and would effectively ban swaths of essential services.

I will address the potential effects of these sections point-by-point, but both elements of the bill suffer from the same problem: a misguided assumption that “discrimination” by platforms is necessarily bad from a competition and consumer welfare point of view. On the contrary, this conduct is often exactly what consumers want from platforms, since it helps to bring order and legibility to otherwise-unwieldy parts of the Internet. Prohibiting it, as both main parts of the bill do, would make the Internet harder to use and less competitive.

Section 2(a)

Section 2(a) essentially prohibits any behavior by a covered platform that would advantage that platform’s own services over those of any other business that uses the platform; it characterizes this preferencing as “discrimination.”

As we wrote when the House Judiciary Committee’s antitrust bills were first announced, this prohibition on “discrimination” is so broad that, if it made it into law, it would prevent platforms from excluding or disadvantaging any product of another business that uses the platform or advantaging their own products over those of their competitors.

The underlying assumption here is that platforms should be like telephone networks: providing a way for different sides of a market to communicate with each other, but doing little more than that. When platforms do do more—for example, manipulating search results to favor certain businesses or to give their own products prominence—it is seen as exploitative “leveraging.”

But consumers often want platforms to be more than just a telephone network or directory, because digital markets would be very difficult to navigate without some degree of “discrimination” between sellers. The Internet is so vast, and sellers are often so anonymous, that any assistance that helps you choose among options can serve to make it more navigable. As John Gruber put it:

From what I’ve seen over the last few decades, the quality of the user experience of every computing platform is directly correlated to the amount of control exerted by its platform owner. The current state of the ownerless world wide web speaks for itself.

Sometimes, this manifests itself as “self-preferencing” of another service, to reduce additional time spent searching for the information you want. When you search for a restaurant on Google, it can be very useful to get information like user reviews, the restaurant’s phone number, a button on mobile to phone them directly, estimates of how busy it is, and a link to a Maps page to see how to actually get there.

This is, undoubtedly, frustrating for competitors like Yelp, who would like this information not to be there and for users to have to click on either a link to Yelp or a link to Google Maps. But whether it is good or bad for Yelp isn’t relevant to whether it is good for users—and it is at least arguable that it is, which makes a blanket prohibition on this kind of behavior almost inevitably harmful.

If it isn’t obvious why removing this kind of feature would be harmful for users, ask yourself why some users search in Yelp’s app directly for this kind of result. The answer, I think, is that Yelp gives you all the information above that Google does (and sometimes is better, although I tend to trust Google Maps’ reviews over Yelp’s), and it’s really convenient to have all that on the same page. If Google could not provide this kind of “rich” result, many users would probably stop using Google Search to look for restaurant information in the first place, because a new friction would have been added that made the experience meaningfully worse. Removing that option would be good for Yelp, but mainly because it removes a competitor.

If all this feels like stating the obvious, then it should highlight a significant problem with Section 2(a) in the Cicilline bill: it prohibits conduct that is directly value-adding for consumers, and that creates competition for dedicated services like Yelp that object to having to compete with this kind of conduct.

This is true across all the platforms the legislation proposes to regulate. Amazon prioritizes some third-party products over others on the basis of user reviews, rates of returns and complaints, and so on; Amazon provides private label products to fill gaps in certain product lines where existing offerings are expensive or unreliable; Apple pre-installs a Camera app on the iPhone that, obviously, enjoys an advantage over rival apps like Halide.

Some or all of this behavior would be prohibited under Section 2(a) of the Cicilline bill. Combined with the bill’s presumption that conduct must be defended affirmatively—that is, the platform is presumed guilty unless it can prove that the challenged conduct is pro-competitive, which may be very difficult to do—the bill could prospectively eliminate a huge range of socially valuable behavior.

Supporters of the bill have already been left arguing that the law simply wouldn’t be enforced in these cases of benign discrimination. But this would hardly be an improvement. It would mean the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) have tremendous control over how these platforms are built, since they could challenge conduct in virtually any case. The regulatory uncertainty alone would complicate the calculus for these firms as they refine, develop, and deploy new products and capabilities. 

So one potential compromise might be to do away with this broad-based rule and proscribe specific kinds of “discriminatory” conduct instead. This approach would involve removing Section 2(a) from the bill but retaining Section 2(b), which enumerates 10 practices it deems to be “other discriminatory conduct.” This may seem appealing, as it would potentially avoid the worst abuses of the broad-based prohibition. In practice, however, it would carry many of the same problems. In fact, many of 2(b)’s provisions appear to go even further than 2(a), and would proscribe even more procompetitive conduct that consumers want.

Sections 2(b)(1) and 2(b)(9)

The wording of these provisions is extremely broad and, as drafted, would seem to challenge even the existence of vertically integrated products. As such, these prohibitions are potentially even more extensive and invasive than Section 2(a) would have been. Even a narrower reading here would seem to preclude safety and privacy features that are valuable to many users. iOS’s sandboxing of apps, for example, serves to limit the damage that a malware app can do on a user’s device precisely because of the limitations it imposes on what other features and hardware the app can access.

Section 2(b)(2)

This provision would preclude a firm from conditioning preferred status on the use of another of its services. That would likely undermine the purpose of platforms, which is to absorb and counter some of the risks involved in doing business online. An example is Amazon’s tying of eligibility for its Prime program to sellers’ use of Amazon’s delivery service (FBA, or Fulfilled by Amazon). In an example like this, the bill seems to presume that Amazon is leveraging its power in the market—in the form of the value of the Prime label—to profit from delivery. But Amazon could, and already does, charge directly for listing positions; it’s unclear why it would benefit from charging via FBA when it could just charge for the Prime label.

An alternate, simpler explanation is that FBA improves the quality of the service by granting customers greater assurance that a Prime product will arrive when Amazon says it will. Platforms add value by setting out rules and providing services that reduce the uncertainties buyers and sellers would otherwise face if they transacted directly with each other. This section’s prohibition—which, as written, would seem to prevent any kind of quality assurance—likely would bar labelling by a platform, even where customers explicitly want it.

Section 2(b)(3)

As written, this would prohibit platforms from using aggregated data to improve their services at all. If Apple found that 99% of its users uninstalled an app immediately after it was installed, it would be reasonable to conclude that the app may be harmful or broken in some way, and that Apple should investigate. This provision would ban that.

Sections 2(b)(4) and 2(b)(6)

These two provisions effectively prohibit a platform from using information it does not also provide to sellers. Such prohibitions ignore the fact that it is often good for sellers to lack certain information, since withholding it can prevent abuse by malicious users. A seller may, for example, try to bribe customers to post positive reviews of its products, or even threaten customers who have posted negative ones. Part of a platform’s role is to combat that kind of behavior by acting as a middleman, forcing both consumer users and business users to comply with the platform’s own mechanisms for policing it.

If this seems overly generous to platforms—since, obviously, it gives them a lot of leverage over business users—ask yourself why people use platforms at all. It is not a coincidence that people often prefer Amazon to dealing with third-party merchants and having to navigate those merchants’ sites themselves. The assurance that Amazon provides is extremely valuable for users. Much of it comes from the company’s ability to act as a middleman in this way, lowering the transaction costs between buyers and sellers.

Section 2(b)(5)

This provision restricts the treatment of defaults. It is, however, relatively restrained when compared to, for example, the DOJ’s lawsuit against Google, which treats as anticompetitive even payment for defaults that can be changed. Still, many of the arguments that apply in that case also apply here: default status for apps can be a way to recoup income foregone elsewhere (e.g., a browser provided for free that makes its money by selling the right to be the default search engine).

Section 2(b)(7)

This section gets to the heart of why “discrimination” can often be procompetitive: that it facilitates competition between platforms. The kind of self-preferencing that this provision would prohibit can allow firms that have a presence in one market to extend that position into another, increasing competition in the process. Both Apple and Amazon have used their customer bases in smartphones and e-commerce, respectively, to grow their customer bases for video streaming, in competition with Netflix, Google’s YouTube, cable television, and each other. If Apple designed a search engine to compete with Google, it would do exactly the same thing, and we would be better off because of it. Restricting this kind of behavior is, perversely, exactly what you would do if you wanted to shield these incumbents from competition.

Section 2(b)(8)

As with other provisions, this one would preclude one of the mechanisms by which platforms add value: creating assurance for customers about the products they can expect if they visit the platform. Some of this relates to child protection; some of the most frustrating stories involve children being overcharged when they use an iPhone or Android app, and effectively being ripped off because of poor policing of the app (or insufficiently strict pricing rules by Apple or Google). This may also relate to rules that state that the seller cannot offer a cheaper product elsewhere (Amazon’s “General Pricing Rule” does this, for example). Prohibiting this would simply impose a tax on customers who cannot shop around and would prefer to use a platform that they trust has the lowest prices for the item they want.

Section 2(b)(10)

Ostensibly a “whistleblower” provision, this section could leave platforms with no recourse, not even removing a user from its platform, in response to spurious complaints intended purely to extract value for the complaining business rather than to promote competition. On its own, this sort of provision may be fairly harmless, but combined with the provisions above, it allows the bill to add up to a rent-seekers’ charter.

Conclusion

In each case above, it’s vital to remember that a reversed burden of proof applies. So, there is a high chance that the law will side against the defendant business, and a large downside for conduct that ends up being found to violate these provisions. That means that platforms will likely err on the side of caution in many cases, avoiding conduct that is ambiguous, and society will probably lose a lot of beneficial behavior in the process.

Put together, the provisions undermine much of what has become an Internet platform’s role: to act as an intermediary, de-risk transactions between customers and merchants who don’t know each other, and tweak the rules of the market to maximize its attractiveness as a place to do business. The “discrimination” that the bill would outlaw is, in practice, behavior that makes it easier for consumers to navigate marketplaces of extreme complexity and uncertainty, in which they often know little or nothing about the firms with whom they are trying to transact business.

Customers do not want platforms to be neutral, open utilities. They can choose platforms that are like that already, such as eBay. They generally tend to prefer ones like Amazon, which are not neutral and which carefully cultivate their service to be as streamlined, managed, and “discriminatory” as possible. Indeed, many of people’s biggest complaints with digital platforms relate to their openness: the fake reviews, counterfeit products, malware, and spam that come with letting more unknown businesses use your service. While these may be unavoidable by-products of running a platform, platforms compete on their ability to ferret them out. Customers are unlikely to thank legislators for regulating Amazon into being another eBay.

The language of the federal antitrust laws is extremely general. Over more than a century, the federal courts have applied common-law techniques to construe this general language to provide guidance to the private sector as to what does or does not run afoul of the law. The interpretive process has been fraught with some uncertainty, as judicial approaches to antitrust analysis have changed several times over the past century. Nevertheless, until very recently, judges and enforcers had converged toward relying on a consumer welfare standard as the touchstone for antitrust evaluations (see my antitrust primer here, for an overview).

While imperfect and subject to potential error in application—a problem of legal interpretation generally—the consumer welfare principle has worked rather well as the focus both for antitrust-enforcement guidance and judicial decision-making. The general stability and predictability of antitrust under a consumer welfare framework has advanced the rule of law. It has given businesses sufficient information to plan transactions in a manner likely to avoid antitrust liability. It thereby has cabined uncertainty and increased the probability that private parties would enter welfare-enhancing commercial arrangements, to the benefit of society.

In a very thoughtful 2017 speech, then-Acting Assistant Attorney General for Antitrust Andrew Finch commented on the importance of the rule of law to principled antitrust enforcement. He noted:

[H]ow do we administer the antitrust laws more rationally, accurately, expeditiously, and efficiently? … Law enforcement requires stability and continuity both in rules and in their application to specific cases.

Indeed, stability and continuity in enforcement are fundamental to the rule of law. The rule of law is about notice and reliance. When it is impossible to make reasonable predictions about how a law will be applied, or what the legal consequences of conduct will be, these important values are diminished. To call our antitrust regime a “rule of law” regime, we must enforce the law as written and as interpreted by the courts and advance change with careful thought.

The reliance fostered by stability and continuity has obvious economic benefits. Businesses invest, not only in innovation but in facilities, marketing, and personnel, and they do so based on the economic and legal environment they expect to face.

Of course, we want businesses to make those investments—and shape their overall conduct—in accordance with the antitrust laws. But to do so, they need to be able to rely on future application of those laws being largely consistent with their expectations. An antitrust enforcement regime with frequent changes is one that businesses cannot plan for, or one that they will plan for by avoiding certain kinds of investments.

That is certainly not to say there has not been positive change in the antitrust laws in the past, or that we would have been better off without those changes. U.S. antitrust law has been refined, and occasionally recalibrated, with the courts playing their appropriate interpretive role. And enforcers must always be on the watch for new or evolving threats to competition.  As markets evolve and products develop over time, our analysis adapts. But as those changes occur, we pursue reliability and consistency in application in the antitrust laws as much as possible.

Indeed, we have enjoyed remarkable continuity and consensus for many years. Antitrust law in the U.S. has not been a “paradox” for quite some time, but rather a stable and valuable law enforcement regime with appropriately widespread support.

Unfortunately, policy decisions taken by the new Federal Trade Commission (FTC) leadership in recent weeks have rejected antitrust continuity and consensus. They have injected substantial uncertainty into the application of competition-law enforcement by the FTC. This abrupt change in emphasis undermines the rule of law and threatens to reduce economic welfare.

As of now, the FTC’s departure from the rule of law has been notable in two areas:

  1. Its rejection of previous guidance on the agency’s “unfair methods of competition” authority, the FTC’s primary non-merger-related enforcement tool; and
  2. Its new advice rejecting time limits for the review of generally routine proposed mergers.

In addition, potential FTC rulemakings directed at “unfair methods of competition” would, if pursued, prove highly problematic.

Rescission of the Unfair Methods of Competition Policy Statement

The FTC on July 1 voted 3-2 to rescind the 2015 FTC Policy Statement Regarding Unfair Methods of Competition under Section 5 of the FTC Act (UMC Policy Statement).

The bipartisan UMC Policy Statement was originally supported by all three Democratic commissioners, including then-Chairwoman Edith Ramirez. The policy statement generally respected and promoted the rule of law by emphasizing that, in applying the facially broad “unfair methods of competition” (UMC) language, the FTC would be guided by the well-established principles of the antitrust rule of reason (including considering any associated cognizable efficiencies and business justifications) and the consumer welfare standard. The FTC also explained that it would not apply “standalone” Section 5 theories to conduct that would violate the Sherman or Clayton Acts.

In short, the UMC Policy Statement sent a strong signal that the commission would apply UMC in a manner fully consistent with accepted and well-understood antitrust policy principles. As in the past, the vast bulk of FTC Section 5 prosecutions would be brought against conduct that violated the core antitrust laws. Standalone Section 5 cases would be directed solely at those few practices that harmed consumer welfare and competition, but somehow fell into a narrow crack in the basic antitrust statutes (such as, perhaps, “invitations to collude” that lack plausible efficiency justifications). Although the UMC Statement did not answer all questions regarding what specific practices would justify standalone UMC challenges, it substantially limited business uncertainty by bringing Section 5 within the boundaries of settled antitrust doctrine.

The FTC’s announcement of the UMC Policy Statement rescission unhelpfully proclaimed that “the time is right for the Commission to rethink its approach and to recommit to its mandate to police unfair methods of competition even if they are outside the ambit of the Sherman or Clayton Acts.” As a dissenting statement by Commissioner Christine S. Wilson warned, consumers would be harmed by the commission’s decision to prioritize other unnamed interests. And as Commissioner Noah Joshua Phillips stressed in his dissent, the end result would be reduced guidance and greater uncertainty.

In sum, by suddenly leaving private parties in the dark as to how to conform themselves to Section 5’s UMC requirements, the FTC’s rescission offends the rule of law.

New Guidance to Parties Considering Mergers

For decades, parties proposing mergers that are subject to statutory Hart-Scott-Rodino (HSR) Act pre-merger notification requirements have operated under the understanding that:

  1. The FTC and U.S. Justice Department (DOJ) will routinely grant “early termination” of review (before the end of the initial 30-day statutory review period) to those transactions posing no plausible competitive threat; and
  2. An enforcement agency’s decision not to request more detailed documents (“second requests”) after an initial 30-day pre-merger review effectively serves as an antitrust “green light” for the proposed acquisition to proceed.

Those understandings, though not statutorily mandated, have significantly reduced antitrust uncertainty and related costs in the planning of routine merger transactions. The rule of law has been advanced through an effective assurance that business combinations that appear presumptively lawful will not be the target of future government legal harassment. This has advanced efficiency in government as well; it is an optimal use of resources for DOJ and the FTC to focus exclusively on those proposed mergers that present a substantial potential threat to consumer welfare.

Two recent FTC pronouncements (one in tandem with DOJ), however, have generated great uncertainty by disavowing (at least temporarily) those two welfare-promoting review policies. Joined by DOJ, the FTC on Feb. 4 announced that the agencies would temporarily suspend early terminations, citing an “unprecedented volume of filings” and a transition to new leadership. More than six months later, this “temporary” suspension remains in effect.

Citing “capacity constraints” and a “tidal wave of merger filings,” the FTC subsequently published an Aug. 3 blog post that effectively abrogated the 30-day “green lighting” of mergers not subject to a second request. It announced that it was sending “warning letters” to firms reminding them that FTC investigations remain open after the initial 30-day period, and that “[c]ompanies that choose to proceed with transactions that have not been fully investigated are doing so at their own risk.”

The FTC’s actions interject unwarranted uncertainty into merger planning and undermine the rule of law. Preventing early termination on transactions that have been approved routinely not only imposes additional costs on business; it hints that some transactions might be subject to novel theories of liability that fall outside the antitrust consensus.

Perhaps more significantly, as three prominent antitrust practitioners point out, the FTC’s warning letters state that:

[T]he FTC may challenge deals that “threaten to reduce competition and harm consumers, workers, and honest businesses.” Adding in harm to both “workers and honest businesses” implies that the FTC may be considering more ways that transactions can have an adverse impact other than just harm to competition and consumers [citation omitted].

Because consensus antitrust merger analysis centers on consumer welfare, not the protection of labor or business interests, any suggestion that the FTC may be extending its reach to these new areas is inconsistent with established legal principles and generates new business-planning risks.

More generally, the Aug. 3 FTC “blog post could be viewed as an attempt to modify the temporal framework of the HSR Act”—in effect, an effort to displace an implicit statutory understanding in favor of an agency diktat, contrary to the rule of law. Commissioner Wilson sees the blog post as a means to keep investigations open indefinitely and, thus, an attack on the decades-old HSR framework for handling most merger reviews in an expeditious fashion (see here). Commissioner Phillips is concerned about an attempt to chill legal M&A transactions across the board, particularly unfortunate when there is no reason to conclude that particular transactions are illegal (see here).

Finally, the historical record raises serious questions about the “resource constraint” justification for the FTC’s new merger review policies:

Through the end of July 2021, more than 2,900 transactions were reported to the FTC. It is not clear, however, whether these record-breaking HSR filing numbers have led (or will lead) to more deals being investigated. Historically, only about 13 percent of all deals reported are investigated in some fashion, and roughly 3 percent of all deals reported receive a more thorough, substantive review through the issuance of a Second Request. Even if more deals are being reported, for the majority of transactions, the HSR process is purely administrative, raising no antitrust concerns, and, theoretically, uses few, if any, agency resources. [Citations omitted.]

Proposed FTC Competition Rulemakings

The new FTC leadership is strongly considering competition rulemakings. As I explained in a recent Truth on the Market post, such rulemakings would fail a cost-benefit test. They raise serious legal risks for the commission and could impose wasted resource costs on the FTC and on private parties. More significantly, they would raise two very serious economic policy concerns:

First, competition rules would generate higher error costs than adjudications. Adjudications cabin error costs by allowing for case-specific analysis of likely competitive harms and procompetitive benefits. In contrast, competition rules inherently would be overbroad and would suffer from a very high rate of false positives. By characterizing certain practices as inherently anticompetitive without allowing for consideration of case-specific facts bearing on actual competitive effects, findings of rule violations inevitably would condemn some (perhaps many) efficient arrangements.

Second, competition rules would undermine the rule of law and thereby reduce economic welfare. FTC-only competition rules could lead to disparate legal treatment of a firm’s business practices, depending upon whether the FTC or the U.S. Justice Department was the investigating agency. Also, economic efficiency gains could be lost due to the chilling of aggressive efficiency-seeking business arrangements in those sectors subject to rules. [Emphasis added.]

In short, common law antitrust adjudication, focused on the consumer welfare standard, has done a good job of promoting a vibrant competitive economy in an efficient fashion. FTC competition rulemaking would not.

Conclusion

Recent FTC actions have undermined consensus antitrust-enforcement standards and have departed from established merger-review procedures with respect to seemingly uncontroversial consolidations. Those decisions have imposed costly uncertainty on the business sector and are thereby likely to disincentivize efficiency-seeking arrangements. What’s more, by implicitly rejecting consensus antitrust principles, they denigrate the primacy of the rule of law in antitrust enforcement. The FTC’s pursuit of competition rulemaking would further damage the rule of law by imposing arbitrary strictures that ignore matter-specific considerations bearing on the justifications for particular business decisions.

Fortunately, these are early days in the Biden administration. The problematic initial policy decisions delineated in this comment could be reversed based on further reflection and deliberation within the commission. Chairwoman Lina Khan and her fellow Democratic commissioners would benefit by consulting more closely with Commissioners Wilson and Phillips to reach agreement on substantive and procedural enforcement policies that are better tailored to promote consumer welfare and enhance vibrant competition. Such policies would benefit the U.S. economy in a manner consistent with the rule of law.

For a potential entrepreneur, just how much time it will take to compete, and the barrier to entry that time represents, will vary greatly depending on the market he or she wishes to enter. A would-be competitor to the likes of Subway, for example, might not find the time needed to open a sandwich shop to be a substantial hurdle. Even where it does take a long time to bring a product to market, it may be possible to accelerate the timeline if the potential profits are sufficiently high. 

As Steven Salop notes in a recent paper, however, there may be cases where long periods of production time are intrinsic to a product: 

If entry takes a long time, then the fear of entry may not provide a substantial constraint on conduct. The firm can enjoy higher prices and profits until the entry occurs. Even if a strong entrant into the 12-year-old scotch market begins the entry process immediately upon announcement of the merger of its rivals, it will not be able to constrain prices for a long time. [emphasis added]

Salop’s point relates to the supply-side substitutability of Scotch whisky (sic — Scotch whisky is spelt without an “e”). That is, to borrow from the European Commission’s definition, whether “suppliers are able to switch production to the relevant products and market them in the short term.” Scotch is aged in wooden barrels for a number of years (at least three, but often longer) before being bottled and sold, and the value of Scotch usually increases with age. 

Due to this protracted manufacturing process, Salop argues, an entrant cannot compete with an incumbent dominant firm for however many years it would take to age the Scotch; they cannot produce the relevant product in the short term, no matter how high the profits collected by a monopolist are, and hence no matter how strong the incentive to enter the market. If I wanted to sell 12-year-old Scotch, to use Salop’s example, it would take me 12 years to enter the market. In the meantime, a dominant firm could extract monopoly rents, leading to higher prices for consumers. 

But can a whisky producer “enjoy higher prices and profits until … entry occurs”? A dominant firm in the 12-year-old Scotch market will not necessarily be immune to competition for the entire 12-year period it would take to produce a Scotch of the same vintage. There are various ways, both on the demand and supply side, that pressure could be brought to bear on a monopolist in the Scotch market.

One way could be to bring whiskies that are being matured for longer-maturity bottles (like 16- or 18-year-old Scotches) into service at the 12-year maturity point, shifting this supply to a market in which profits are now relatively higher. 

Alternatively, distilleries may try to use younger batches to produce whiskies that resemble 12-year-old whiskies in flavor. A 2013 article from The Scotsman discusses this possibility in relation to major Scottish whisky brand Macallan’s decision to switch to selling exclusively No-Age Statement (NAS — they do not bear an age on the bottle) whiskies:

Experts explained that, for example, nine and 11-year-old whiskies—not yet ready for release under the ten and 12-year brands—could now be blended together to produce the “entry-level” Gold whisky immediately.

An aged Scotch cannot contain any whisky younger than the age stated on the bottle, but an NAS alternative can contain anything over three years (though older whiskies are often used to capture a flavor more akin to a 12-year dram). For many drinkers, NAS whiskies are a close substitute for 12-year-old whiskies. They often compete with aged equivalents on quality and flavor and can command similar prices to aged bottles in the 12-year category. More than 80% of bottles sold bear no age statement. While this figure includes non-premium bottles, the share of NAS whiskies traded at auction on the secondary market, presumably more likely to be premium, increased from 20% to 30% between 2013 and 2018.

There are also whiskies matured outside of Scotland, in regions such as Taiwan and India, that can achieve flavor profiles akin to older whiskies more quickly, thanks to warmer climates and the faster chemical reactions inside barrels they cause. Further increases in maturation rate can be brought about by using smaller barrels with a higher surface-area-to-volume ratio. Whiskies matured in hotter climates and smaller barrels can be brought to market even more quickly than NAS Scotch matured in the cooler Scottish climate, and may well represent a more authentic replication of an older barrel. 

“Whiskies” that can be manufactured even more quickly may also be on the horizon. Some startups in the United States are experimenting with rapid-aging technology which would allow them to produce a whisky-like spirit in a very short amount of time. As detailed in a recent article in The Economist, Endless West in California is using technology that ages spirits within 24 hours, with the resulting bottles selling for $40 – a bit less than many 12-year-old Scotches. Although attempts to break the conventional maturation process are nothing new, recent attempts have won awards in blind taste-test competitions.

None of this is to dismiss Salop’s underlying point. But it may suggest that, even for a product where time appears to be an insurmountable barrier to entry, there may be more ways to compete than we initially assume.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Nicolas Petit himself, the Joint Chair in Competition Law at the Department of Law at European University Institute in Fiesole, Italy, and at EUI’s Robert Schuman Centre for Advanced Studies. He is also invited professor at the College of Europe in Bruges.]

A lot of water has gone under the bridge since my book was published last year. To close this symposium, I thought I would discuss the new phase of antitrust statutorification taking place before our eyes. In the United States, Congress is working on five antitrust bills that propose to subject platforms to stringent obligations, including a ban on mergers and acquisitions, required data portability and interoperability, and line-of-business restrictions. In the European Union (EU), lawmakers are examining the proposed Digital Markets Act (“DMA”), which sets out a complicated regulatory system for digital “gatekeepers,” with per se behavioral limitations on their freedom over contractual terms, technological design, monetization, and ecosystem leadership.

Proponents of legislative reform on both sides of the Atlantic appear to share the common view that ongoing antitrust adjudication efforts are both instrumental and irrelevant. They are instrumental because government (or plaintiff) losses build the evidence needed to support the view that antitrust doctrine is exceedingly conservative, and that legal reform is needed. Two weeks ago, antitrust reform activists ran to Twitter to point out that the U.S. District Court dismissal of the Federal Trade Commission’s (FTC) complaint against Facebook was one more piece of evidence supporting the view that the antitrust pendulum needed to swing. They are instrumental because, again, government (or plaintiff) wins will support scaling antitrust enforcement in the marginal case by adoption of governmental regulation. In the EU, antitrust cases follow one another almost as night follows day, lending credence to the view that regulation will bring much-needed coordination and economies of scale.

But both instrumentalities are, at the end of the line, irrelevant, because they lead to the same conclusion: legislative reform is long overdue. With this in mind, the logic of lawmakers is that they need not await the courts, and they can advance with haste and confidence toward the promulgation of new antitrust statutes.

The antitrust reform process that is unfolding is cause for concern. The issue is not legal reform in itself. There is no suggestion here that statutory reform is necessarily inferior, and no correlative reification of the judge-made-law method. Legislative intervention can occur for good reason, as when it breaks judicial inertia caused by ideological logjam.

The issue is rather one of haste. There is a lot of learning in the cases. The point, simply put, is that a supplementary court-legislative dialogue would yield additional information—or what Guido Calabresi has called “starting points” for regulation—that premature legislative intervention sweeps under the rug. This issue is important because specification errors (see Doug Melamed’s symposium piece on this) in statutory legislation are not uncommon. Feedback from court cases creates a factual record that will often be missing when lawmakers act too precipitously.

Moreover, court-legislative iteration is useful when the issues under discussion are cross-cutting. The digital economy brings an abundance of them. As tech analyst Ben Evans has observed, data-sharing obligations raise tradeoffs between contestability and privacy. Chapter VI of my book shows that breakups of social networks or search engines might promote rivalry and, at the same time, increase the leverage of advertisers to extract more user data and conduct more targeted advertising. In such cases, Calabresi said, judges who know the legal topography are well placed to elicit the preferences of society. He added that they are better placed than government agency officials or delegated experts, who often attend to the immediate problem without the big picture in mind (all the more so when officials are denied opportunities to engage with civil society and the press, as per the policy announced by the new FTC leadership).

Of course, there are three objections to this. The first consists of arguing that statutes are needed now because courts are too slow to deal with problems. The argument is not dissimilar to Frank Easterbrook’s concerns about irreversible harms to the economy, though with a tweak. Where Easterbrook’s concern was one of ossification of Type I errors due to stare decisis, the concern here is one of entrenchment of durable monopoly power in the digital sector due to Type II errors. The concern, however, fails the test of evidence. The available data in both the United States and Europe show unprecedented vitality in the digital sector. Venture capital funding cruises at historic heights, fueling new firm entry, business creation, and economic dynamism in the U.S. and EU digital sectors, topping all other industries. Unless we require higher levels of entry from digital markets than from other industries—or discount the social value of entry in the digital sector—this should give us reason to push pause on lawmaking efforts.

The second objection is that an incremental process of updating the law through the courts creates intolerable uncertainty. But this objection, too, is unconvincing at best. One may ask which brings more uncertainty: an abrupt legislative change of the law after decades of legal stability, or an experimental process of judicial renovation.

Besides, ad hoc statutes, such as the ones under discussion, are likely to pose, quickly and dramatically, the problem of their own legal obsolescence. Detailed and technical statutes specify rights, requirements, and procedures that often do not stand the test of time. For example, the DMA likely captures Windows as a core platform service subject to gatekeeping. But is Microsoft’s market power over Windows still relevant today, and isn’t it in effect constrained by existing antitrust rules? In antitrust, vagueness in critical statutory terms allows room for change.[1] The best way to give meaning to buzzwords like “smart” or “future-proof” regulation consists of building in first principles, not in creating discretionary opportunities for permanent adaptation of the law. In reality, it is hard to see how the methods of future-proof regulation currently discussed in the EU create less uncertainty than a court process.

The third objection is that we do not need more information, because we now benefit from economic knowledge showing that existing antitrust laws are too permissive of anticompetitive business conduct. But is the economic literature actually supportive of stricter rules against defendants than the rule-of-reason framework that applies in many unilateral conduct cases and in merger law? The answer is surely no. The theoretical economic literature has travelled a lot in the past 50 years. Of particular interest are works on network externalities, switching costs, and multi-sided markets. But the progress achieved in the economic understanding of markets is more descriptive than normative.

Take the celebrated multi-sided market theory. The main contribution of the theory is its advice to decision-makers to take the periscope out, so as to consider all possible welfare tradeoffs, not to be more or less defendant friendly. Payment cards provide a good example. Economic research suggests that any antitrust or regulatory intervention on prices affects tradeoffs between, and payoffs to, cardholders and merchants, cardholders and cash users, cardholders and banks, and banks and card systems. Equally numerous tradeoffs arise in many sectors of the digital economy, like ridesharing, targeted advertising, or social networks. Multi-sided market theory renders these tradeoffs visible. But it does not come with a clear recipe for how to solve them. For that, one needs to follow first principles. A system of measurement that is flexible and welfare-based helps, as Kelly Fayne observed in her critical symposium piece on the book.

Another example might be worth considering. The theory of increasing returns suggests that markets subject to network effects tend to converge around the selection of a single technology standard, and it is not a given that the selected technology is the best one. One policy implication is that social planners might be justified in keeping a second option on the table. As I discuss in Chapter V of my book, the theory may support an M&A ban against platforms in tipped markets, on the conjecture that the assets of fringe firms might be efficiently repositioned to offer product differentiation to consumers. But the theory of increasing returns does not say under what conditions we can know that the selected technology is suboptimal. Moreover, if the selected technology is the optimal one, or if the suboptimal technology quickly obsolesces, are policy efforts at all needed?

Last, as Bo Heiden’s thought-provoking symposium piece argues, it is not a given that antitrust enforcement of rivalry in markets is the best way to keep an alternative technology alive, let alone to supply the innovation needed to deliver economic prosperity. Government procurement, science and technology policy, and intellectual-property policy might be equally effective (note that the fathers of the theory, like Brian Arthur or Paul David, have been notably silent on antitrust reform).

There are, of course, exceptions to the limited normative content of modern economic theory. In some areas, economic theory is more predictive of consumer harms, as in relation to algorithmic collusion, interlocking directorates, or “killer” acquisitions. But the applications are discrete and industry-specific. Taken together, they are insufficient to declare that the antitrust apparatus is dated and requires a full overhaul. When modern economic research turns normative, it is often far more subtle in its implications than some of the wild policy claims derived from it. For example, the emerging studies that claim to identify broad patterns of rising market power in the economy in no way imply that there are no pro-competitive mergers.

Similarly, the empirical picture of digital markets is incomplete. The past few years have seen a proliferation of qualitative research reports on industry structure in the digital sectors. Most suggest that industry concentration has risen, particularly in the digital sector. As with any research exercise, these reports’ findings deserve to be subjected to critical examination before they can be deemed supportive of a claim of “sufficient experience.” Moreover, there is no reason to hold these reports to a lower standard of accountability on grounds that they have often been drafted by experts upon demand from antitrust agencies. After all, we academics are ethically obliged to be at least as exacting with policy-based research as we are with science-based research.

Now, with healthy skepticism at the back of one’s mind, one can see immediately that the findings of expert reports to date have tended to downplay behavioral observations that counterbalance findings of monopoly power—such as intense business anxiety, technological innovation, and demand-expansion investments in digital markets. This was, I believe, the main takeaway from Chapter IV of my book. And less than six months ago, The Economist ran its lead story on the new marketplace reality of “Tech’s Big Dust-Up.”

More importantly, the findings of the various expert reports never seriously contemplate the possibility of competition by differentiation in business models among the platforms. Take privacy, for example. As Peter Klein reasonably writes in his symposium article, we should not be quick to assume market failure. After all, we might have more choice than meets the eye, with Google free but ad-based, and Apple pricey but less targeted. More generally, Richard Langlois makes a very convincing point that diversification is at the heart of competition between the large digital gatekeepers. We might just be too short-termist—here, digital communications technology might help create a false sense of urgency—to wait for the end state of the Big Tech moligopoly.

Similarly, the expert reports did not seriously question the possibility of competition for the purchase of regulation. As in the classic George Stigler paper, in which the railroad industry fought motor-trucking competition with state regulation, the businesses that stand to lose most from the digital transformation might be rationally jockeying to convince lawmakers that not all business models are equal, and to steer regulation toward specific business models. Again, though we do not know how much weight to give this issue, there are signs that a coalition of large news corporations and the publishing oligopoly is behind many antitrust initiatives against digital firms.

As should now be clear from these few lines, my cautionary note against antitrust statutorification may be more relevant to the U.S. market. In the EU, sunk investments have been made, expectations have been created, and regulation has now become inevitable. The United States, however, has a chance to get this right. Court cases are the way to go. And contrary to what the popular coverage suggests, the recent District Court dismissal of the FTC case far from ruled out the applicability of U.S. antitrust laws to Facebook’s alleged killer acquisitions. On the contrary, the ruling actually contains an invitation to rework a rushed complaint. Perhaps, as Shane Greenstein observed in his retrospective analysis of the U.S. Microsoft case, we would all benefit if we studied more carefully the learning that lies in the cases, rather than hasten to produce instant antitrust analysis on Twitter that fits within 280 characters.


[1] But some threshold conditions, like agreement or dominance, might also become dated.

From Sen. Elizabeth Warren (D-Mass.) to Sen. Josh Hawley (R-Mo.), populist calls to “fix” our antitrust laws and the underlying Consumer Welfare Standard have found a foothold on Capitol Hill. At the same time, there are calls to “fix” the Supreme Court by packing it with new justices. The court’s unanimous decision in NCAA v. Alston demonstrates that neither needs repair. To the contrary, clearly anti-competitive conduct—like the NCAA’s compensation rules—is proscribed under the Consumer Welfare Standard, and every justice from Samuel Alito to Sonia Sotomayor can agree on that.

In 1984, the court in NCAA v. Board of Regents suggested that “courts should take care when assessing the NCAA’s restraints on student-athlete compensation.” After all, joint ventures like sports leagues are entitled to rule-of-reason treatment. But while times change, the Consumer Welfare Standard is sufficiently flexible to meet those changes.

Where a competitive restraint exists primarily to ensure that “enormous sums of money flow to seemingly everyone except the student athletes,” the court rightly calls it out for what it is. As Associate Justice Brett Kavanaugh wrote in his concurrence:

Nowhere else in America can businesses get away with agreeing not to pay their workers a fair market rate on the theory that their product is defined by not paying their workers a fair market rate. And under ordinary principles of antitrust law, it is not evident why college sports should be any different. The NCAA is not above the law.

Disturbing these “ordinary principles”—whether through legislation, administrative rulemaking, or the common law—is simply unnecessary. For example, the Open Markets Institute filed an amicus brief arguing that the rule of reason should be “bounded” and willfully blind to the pro-competitive benefits some joint ventures can create (an argument that has been used, unsuccessfully, to attack ridesharing services like Uber and Lyft). Sen. Amy Klobuchar (D-Minn.) has proposed shifting the burden of proof so that merging parties are guilty until proven innocent. Sen. Warren would go further, deeming Amazon’s acquisition of Whole Foods anti-competitive simply because the company is “big,” and ignoring the merger’s myriad pro-competitive benefits. Sen. Hawley has gone further still: calling on Amazon to be investigated criminally for the crime of being innovative and successful.

Several of the current proposals, including those from Sens. Klobuchar and Hawley (and those recently introduced in the House that essentially single out firms for disfavored treatment), would replace the Consumer Welfare Standard that has underpinned antitrust law for decades with a policy that effectively punishes firms for being politically unpopular.

These examples demonstrate we should be wary when those in power assert that things are so irreparably broken that they need a complete overhaul. The “solutions” peddled usually increase politicians’ power by enabling them to pick winners and losers through top-down approaches that stifle the bottom-up innovations that make consumers’ lives better.

Are antitrust law and the Supreme Court perfect? Hardly. But in a 9-0 decision, the court proved this week that there’s nothing broken about either.