
[This post adapts elements of “Technology Mergers and the Market for Corporate Control,” forthcoming in the Missouri Law Review.]

In recent years, a growing chorus of voices has argued that existing merger rules fail to apprehend competitively significant mergers, either because they fall below existing merger-filing thresholds or because they affect innovation in ways that are purportedly ignored.

These fears are particularly acute in the pharmaceutical and tech industries, where several high-profile academic articles and reports claim to have identified important gaps in current merger-enforcement rules, particularly with respect to acquisitions involving nascent and potential competitors.

Such fears have led activists, lawmakers, and enforcers to call for tougher rules, including the introduction of more stringent merger-filing thresholds and other substantive changes, such as the inversion of the burden of proof when authorities review mergers and acquisitions involving digital platforms.

However, as we discuss in a recent working paper—forthcoming in the Missouri Law Review and available on SSRN—these proposals tend to overlook the important tradeoffs that would ensue from attempts to decrease the number of false positives under existing merger rules and thresholds.

The paper draws from two key strands of economic literature that are routinely overlooked (or summarily dismissed) by critics of the status quo.

For a start, antitrust enforcement is not costless. In the case of merger enforcement, not only is it expensive for agencies to detect anticompetitive deals but, more importantly, overbearing rules may deter beneficial merger activity that creates value for consumers.

Second, critics tend to overlook the possibility that incumbents’ superior managerial or other capabilities (i.e., what made them successful in the first place) make them the ideal acquisition partners for entrepreneurs and startup investors looking to sell.

The result is a body of economic literature that focuses almost entirely on hypothetical social costs, while ignoring the redeeming benefits of corporate acquisitions, as well as the social cost of enforcement.

Kill Zones

One of the most significant allegations leveled against large tech firms is that their very presence in a market may hinder investments, entry, and innovation, creating what some have called a “kill zone.” The strongest expression in the economic literature of this idea of a kill zone stems from a working paper by Sai Krishna Kamepalli, Raghuram Rajan, and Luigi Zingales.

The paper makes two important claims, one theoretical and one empirical. From a theoretical standpoint, the authors argue that the prospect of an acquisition by a dominant platform deters consumers from joining rival platforms, and that this, in turn, hampers the growth of these rivals. The authors then test a similar hypothesis empirically. They find that acquisitions by a dominant platform—such as Google or Facebook—decrease investment levels and venture capital deals in markets that are “similar” to that of the target firm.

But both findings are problematic. For a start, Zingales and his co-authors’ theoretical model is premised on questionable assumptions about the way in which competition develops in the digital space. The first is that early adopters of new platforms—called “techies” in the authors’ parlance—face high switching costs because of their desire to learn these platforms in detail. As an initial matter, it would appear facially contradictory that “techies” are both the group with the highest switching costs and the group that switches the most. The authors further assume that “techies” would incur lower adoption costs if they remained on the incumbent platform and waited for the rival platform to be acquired.

Unfortunately, while these key behavioral assumptions drive the results of the theoretical model, the paper presents no evidence that they hold in real-world settings. In that sense, the authors commit the same error as previous theoretical work on externalities, which has tended to overestimate their frequency.

Second, the empirical analysis put forward in the paper is unreliable for policymaking purposes. The authors notably find that:

[N]ormalized VC investments in start-ups in the same space as the company acquired by Google and Facebook drop by over 40% and the number of deals falls by over 20% in the three years following an acquisition.

However, the results of this study are derived from the analysis of only nine transactions. The study also fails to clearly show that firms in the treatment and control groups are qualitatively similar. In a nutshell, the study compares industry acquisitions exceeding $500 million to Facebook and Google’s acquisitions that exceed that amount. This does not tell us whether the mergers in both groups involved target companies with similar valuations or similar levels of maturity. This does not necessarily invalidate the results, but it does suggest that policymakers should be circumspect in interpreting those results.

Finally, the paper offers no evidence that existing antitrust regimes fail to achieve an optimal error-cost balance. The central problem is that the paper has indeterminate welfare implications. For instance, as the authors note, the declines in investment in spaces adjacent to the incumbent platforms occurred during a time of rapidly rising venture capital investment, both in terms of the number of deals and dollars invested. It is entirely plausible that venture capital merely shifted to other sectors.

Put differently, on its own terms, the evidence merely suggests that acquisitions by Google and Facebook affected the direction of innovation, not its overall rate. And there is little to suggest that this shift was suboptimal, from a welfare standpoint.

In short, as the authors themselves conclude: “[i]t would be premature to draw any policy conclusion on antitrust enforcement based solely on our model and our limited evidence.”

Mergers and Potential Competition

Scholars have also posited more direct effects from acquisitions of startups or nascent companies by incumbent technology firms.

Some scholars argue that incumbents might acquire rivals that do not yet compete with them directly, in order to reduce the competitive pressure they will face in the future. In his paper “Potential Competition and Antitrust Analysis: Monopoly Profits Exceed Duopoly Profits,” Steven Salop argues:

Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.

However, these antitrust theories of harm suffer from several important flaws. They rest upon several restrictive assumptions that may not hold in real-world settings. Most are premised on the notion that, in a given market, monopoly profits generally exceed joint duopoly profits. This allegedly makes it profitable, and mutually advantageous, for an incumbent to protect its monopoly position by preemptively acquiring potential rivals.

Accordingly, under these theories, anticompetitive mergers are only possible when the acquired rival could effectively challenge the incumbent. But these are, of course, only potential challengers; there is no guarantee that any one of them could or would mount a viable competitive threat.

Less obviously, it must be the case that the rival can hope to share only duopoly profits, as opposed to completely overthrowing the incumbent or surpassing it with a significantly larger share of the market. Where competition is “for the market” itself, monopoly maintenance would fail to explain a rival’s decision to sell. Because there would be no asymmetry between the expected profits of the incumbent and the rival, monopoly maintenance alone would not give rise to mutually advantageous deals.

Second, potential competition does not always increase consumer welfare.  Indeed, while the presence of potential competitors might increase price competition, it can also have supply-side effects that cut in the opposite direction.

For example, as Nobel laureate Joseph Stiglitz observed, a monopolist threatened by potential competition may invest in socially wasteful R&D efforts or entry-deterrence mechanisms, and it may operate at below-optimal scale in anticipation of future competitive entry.

There are also pragmatic objections. Analyzing a merger’s effect on potential competition would compel antitrust authorities and courts to make increasingly speculative assessments concerning the counterfactual setting of proposed acquisitions.

In simple terms, it is far easier to determine whether a merger between McDonald’s and Burger King would lead to increased hamburger prices in the short run than it is to determine whether a gaming platform like Steam or the Epic Games Store might someday compete with video-streaming or music-subscription platforms like Netflix or Spotify. It is not that the above models are necessarily wrong, but rather that applying them to practical cases would require antitrust enforcers to estimate mostly unknowable factors.

Finally, the real test for regulators is not just whether they can identify possibly anticompetitive mergers, but whether they can do so in a cost-effective manner. Whether it is desirable to implement a given legal test is not simply a function of its accuracy, the cost to administer it, and the respective costs of false positives and false negatives. It also critically depends on how prevalent the conduct is that adjudicators would be seeking to foreclose.

Consider two hypothetical settings. Imagine there are 10,000 tech mergers in a given year, of which either 1,000 or 2,500 are anticompetitive (the remainder are procompetitive or competitively neutral). Suppose that authorities can either attempt to identify anticompetitive mergers with 75% accuracy, or perform no test at all—i.e., letting all mergers go through unchallenged.

If there are 1,000 anticompetitive mergers, applying the test would result in 7,500 correct decisions and 2,500 incorrect ones (2,250 false positives and 250 false negatives). Doing nothing would lead to 9,000 correct decisions and 1,000 false negatives. If the number of anticompetitive deals were 2,500, applying the test would lead to the same number of incorrect decisions as not applying it (1,875 false positives and 625 false negatives, versus 2,500 false negatives). The advantage would tilt toward applying the test if anticompetitive mergers were even more widespread.
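The arithmetic in this hypothetical is easy to verify. A minimal sketch, using the hypothetical’s own assumed counts and 75% accuracy rate (the function name is illustrative, not from any source):

```python
def decision_errors(total, anticompetitive, accuracy):
    """Return (false_positives, false_negatives) for a screening test
    that classifies each merger correctly with probability `accuracy`."""
    procompetitive = total - anticompetitive
    false_positives = (1 - accuracy) * procompetitive   # good deals blocked
    false_negatives = (1 - accuracy) * anticompetitive  # bad deals cleared
    return false_positives, false_negatives

# Scenario 1: 1,000 of 10,000 mergers are anticompetitive.
fp, fn = decision_errors(10_000, 1_000, 0.75)
print(fp, fn, fp + fn)   # 2250.0 250.0 2500.0 -- worse than the 1,000
                         # false negatives from letting every deal through

# Scenario 2: 2,500 anticompetitive -- the test and inaction now tie.
fp, fn = decision_errors(10_000, 2_500, 0.75)
print(fp, fn, fp + fn)   # 1875.0 625.0 2500.0 -- equal to the 2,500
                         # false negatives from doing nothing
```

As the sketch makes plain, the break-even point depends entirely on the prevalence of anticompetitive deals: below it, even a reasonably accurate test produces more total errors than no test at all.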

This hypothetical example holds a simple lesson for policymakers: the rarer the conduct that they are attempting to identify, the more accurate their identification method must be, and the more costly false negatives must be relative to false positives.

As discussed below, current empirical evidence does not suggest that anticompetitive mergers of this sort are particularly widespread, nor does it offer accurate heuristics to detect the ones that are. Finally, there is little reason to believe that the cost of false negatives significantly outweighs that of false positives. In short, there is currently little evidence to suggest that tougher enforcement would benefit consumers.

Killer Acquisitions

Killer acquisitions are, effectively, a subset of the “potential competitor” mergers discussed in the previous section. As defined by Colleen Cunningham, Florian Ederer, and Song Ma, they are those deals where “an incumbent firm may acquire an innovative target and terminate the development of the target’s innovations to preempt future competition.”

Cunningham, Ederer, and Ma’s highly influential paper on killer acquisitions has been responsible for much of the recent renewed interest in the effect that mergers exert on innovation. The authors studied thousands of pharmaceutical mergers and concluded that between 5.3% and 7.4% of them were killer acquisitions. As they write:

[W]e empirically compare development probabilities of overlapping acquisitions, which are, in our theory, motivated by a mix of killer and development intentions, and non-overlapping acquisitions, which are motivated only by development intentions. We find an increase in acquisition probability and a decrease in post-acquisition development for overlapping acquisitions and interpret that as evidence for killer acquisitions. […]

[W]e find that projects acquired by an incumbent with an overlapping drug are 23.4% less likely to have continued development activity compared to drugs acquired by non-overlapping incumbents.

From a policy standpoint, the question is what weight antitrust authorities, courts, and legislators should give to these findings. Stated differently, does the paper provide sufficient evidence to warrant reform of existing merger-filing thresholds and review standards? There are several factors counseling that policymakers should proceed with caution.

To start, the study’s industry-specific methodology means that it may not be a useful guide to understanding acquisitions in other industries, such as the tech sector.

Second, even if one assumes that the findings of Cunningham, et al., are correct and apply with equal force in the tech sector (as some official reports have), it remains unclear whether the 5.3–7.4% of mergers they describe warrant a departure from the status quo.

Antitrust enforcers operate under uncertainty. The critical policy question is thus whether this subset of anticompetitive deals can be identified ex-ante. If not, is there a heuristic that would enable enforcers to identify more of these anticompetitive deals without producing excessive false positives?

The authors focus on the effect that overlapping R&D pipelines have on project discontinuations. In the case of non-overlapping mergers, acquired projects continue 17.5% of the time, while this number falls to 13.4% when there are overlapping pipelines. The authors argue that this gap is evidence of killer acquisitions. But this misses the bigger picture: under the authors’ own numbers and definition of a “killer acquisition,” the vast majority of overlapping acquisitions are perfectly benign; prohibiting them would thus have important social costs.
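These figures are internally consistent, as a quick check confirms (continuation rates taken from the paper as quoted above):

```python
# Post-acquisition continuation rates reported by Cunningham, Ederer & Ma
non_overlapping = 0.175  # projects acquired by non-overlapping incumbents
overlapping = 0.134      # projects acquired by incumbents with overlapping drugs

# Relative decline in continued development -- the paper's 23.4% figure
relative_drop = (non_overlapping - overlapping) / non_overlapping
print(f"{relative_drop:.1%}")  # 23.4%

# In absolute terms, the two groups differ by only ~4 percentage points,
# which is why most overlapping acquisitions involve no "killing" at all.
print(f"{non_overlapping - overlapping:.3f}")  # 0.041
```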

Third, there are several problems with describing this kind of behavior as harmful. Indeed, Cunningham, et al., acknowledge that this kind of behavior could increase innovation by boosting the returns to innovation.

And even if one ignores incentives to innovate, product discontinuations can improve consumer welfare. This question ultimately boils down to identifying the counterfactual to a merger. As John Yun writes:

For instance, an acquisition that results in a discontinued product is not per se evidence of either consumer harm or benefit. The answer involves comparing the counterfactual world without the acquisition with the world with the acquisition. The comparison includes potential efficiencies that were gained from the acquisition, including integration of intellectual property, the reduction of transaction costs, economies of scope, and better allocation of skilled labor.

One of the reasons R&D project discontinuation may be beneficial is simply cost savings. R&D is expensive. Pharmaceutical firms spend up to 27.8% of their annual revenue on R&D. Developing a new drug has an estimated median cost of $985.3 million. Cost-cutting—notably as it concerns R&D—is thus a critical part of pharmaceutical (as well as tech) companies’ businesses. As a report by McKinsey concludes:

The recent boom in M&A in the pharma industry is partly the result of attempts to address short-term productivity challenges. An acquiring or merging company typically designs organization-wide integration programs to capture synergies, especially in costs. Such programs usually take up to three years to complete and deliver results.

Another report finds that:

Maximizing the efficiency of production labor and equipment is one important way top-quartile drugmakers break out of the pack. Their rates of operational-equipment effectiveness are more than twice those of bottom-quartile companies (Exhibit 1), and when we looked closely we found that processes account for two-thirds of the difference.

In short, pharmaceutical companies do not compete solely along innovation-related parameters, important as those are; they also compete on more traditional grounds, such as cost rationalization. Accordingly, as the above reports suggest, pharmaceutical mergers are often about applying an incumbent’s superior managerial efficiency to the acquired firm’s assets through operation of the market for corporate control.

This cost-cutting (and superior project selection) ultimately enables companies to offer lower prices, thereby benefiting consumers and increasing their incentives to invest in R&D in the first place by making successfully developed drugs more profitable.

In that sense, Henry Manne’s seminal work relating to mergers and the market for corporate control sheds at least as much light on pharmaceutical (and tech) mergers as the killer acquisitions literature. And yet, it is hardly ever mentioned in modern economic literature on this topic.

While Colleen Cunningham and her co-authors do not entirely ignore these considerations, as we discuss in our paper, their arguments for dismissing them are far from watertight.

A natural extension of the killer acquisitions work is to question whether mergers of this sort also take place in the tech industry. Interest in this question is notably driven by the central role that digital markets currently occupy in competition-policy discussion, but also by the significant number of startup acquisitions that take place in the tech industry. However, existing studies provide scant evidence that killer acquisitions are a common occurrence in these markets.

This is not surprising. Unlike in the pharmaceutical industry—where drugs need to go through a lengthy and visible regulatory pipeline before they can be sold—incumbents in digital industries will likely struggle to identify their closest rivals and to prevent firms from rapidly pivoting to seize new commercial opportunities. As a result, the basic conditions for killer acquisitions to take place (i.e., firms knowing they are in a position to share monopoly profits) are less likely to be present; it also would be harder to design research methods to detect these mergers.

The empirical literature on killer acquisitions in the tech sector is still in its infancy. But, as things stand, no study directly examines whether killer acquisitions actually take place in digital industries (i.e., whether post-merger project discontinuations are more common in overlapping than non-overlapping tech mergers). This is notably the case for studies by Axel Gautier & Joe Lamesch, and Elena Argentesi and her co-authors. Instead, these studies merely show that product discontinuations are common after an acquisition by a big tech company.

To summarize, while studies of this sort might suggest that the clearance of certain mergers might not have been optimal, it is hardly a sufficient basis on which to argue that enforcement should be tightened.

The reason for this is simple. The fact that some anticompetitive mergers may have escaped scrutiny and/or condemnation is never a sufficient basis to tighten rules. For that, it is also necessary to factor in the administrative costs of increased enforcement, as well as potential false convictions to which it might give rise. As things stand, economic research on killer acquisitions in the tech sector does not warrant tougher antitrust enforcement, though it does show the need for further empirical research on the topic.

Conclusion

Many proposed merger-enforcement reforms risk throwing the baby out with the bathwater. Mergers are largely beneficial to society; anticompetitive ones are rare; and there is little way, at the margin, to tell good from bad. To put it mildly, there is a precious baby that needs to be preserved and relatively little bathwater to throw out.

Consider the pharmaceutical industry, the fulcrum of current policy debates. It is not hard to point to pharmaceutical mergers (or long-term agreements) that have revolutionized patient outcomes. Most recently, Pfizer and BioNTech’s successful effort to market an mRNA vaccine against COVID-19 offers a case in point.

The deal struck by both firms could naïvely be construed as bearing hallmarks of a killer acquisition or an anticompetitive agreement (long-term agreements can easily fall into either of these categories). Pfizer was a powerful incumbent in the vaccine industry; BioNTech threatened to disrupt the industry with new technology; and the deal likely caused Pfizer to forgo some independent R&D efforts. And yet, it also led to the first approved COVID-19 vaccine and groundbreaking advances in vaccine technology.

Of course, the counterfactual is unclear, and the market might be more competitive absent the deal, just as there might be only one approved mRNA vaccine today instead of two—we simply do not know. More importantly, this counterfactual was even less knowable at the time of the deal. And much the same could be said about countless other pharmaceutical mergers.

The key policy question is how authorities should handle this uncertainty. Critics of the status quo argue that current rules and thresholds leave certain anticompetitive deals unchallenged. But these calls for tougher enforcement fail to satisfy the requirements of the error-cost framework. Critics have so far failed to show that, on balance, mergers harm social welfare—even overlapping ones or mergers between potential competitors—just as they are yet to suggest alternative institutional arrangements that would improve social welfare.

In other words, they mistakenly analyze purported false negatives of merger-enforcement regimes in isolation. In doing so, they ignore how measures that aim to reduce such judicial errors may lead to other errors, as well as higher enforcement costs. In short, they paint a world where policy decisions involve facile tradeoffs, and this undermines their policy recommendations.

Given these significant limitations, this body of academic research should be met with an appropriate degree of caution. For all the criticism it has faced, the current merger-review system is mostly a resounding success. It is administrable, predictable, and timely. Yet it also eliminates the vast majority of judicial errors: even its critics concede that false negatives make up only a tiny fraction of decisions. Policymakers must decide whether the benefits from catching the very few arguably anticompetitive mergers that currently escape prosecution outweigh the significant costs that are required to achieve this goal. There is currently little evidence to suggest that this is, indeed, the case.

Advocates of legislative action to “reform” antitrust law have already pointed to the U.S. District Court for the District of Columbia’s dismissal of the state attorneys general’s case and the “conditional” dismissal of the Federal Trade Commission’s case against Facebook as evidence that federal antitrust case law is lax and demands correction. In fact, the court’s decisions support the opposite implication. 

The Risks of Antitrust by Anecdote

The failure of a well-resourced federal regulator, and more than 45 state attorney-general offices, to avoid dismissal at an early stage of the litigation testifies to the dangers posed by a conclusory approach toward antitrust enforcement that seeks to unravel acquisitions consummated almost a decade ago without even demonstrating the factual predicates to support consideration of such far-reaching interventions. The dangers to the rule of law are self-evident. Irrespective of one’s views on the appropriate direction of antitrust law, this shortcut approach would substitute prosecutorial fiat, ideological predilection, and popular sentiment for decades of case law and agency guidelines grounded in the rigorous consideration of potential evidence of competitive harm. 

The paucity of empirical support for the exceptional remedial action sought by the FTC is notable. As the district court observed, there was little systematic effort made to define the economically relevant market or provide objective evidence of market power, beyond the assertion that Facebook has a market share of “in excess of 60%.” Remarkably, the denominator behind that 60%-plus assertion is not precisely defined, since the FTC’s complaint does not supply any clear metric by which to measure market share. As the court pointed out, this is a nontrivial task in multi-sided environments in which one side of the potentially relevant market delivers services to users at no charge.

While the point may seem uncontroversial, it is important to re-appreciate why insisting on a rigorous demonstration of market power is critical to preserving a coherent body of law that provides the market with a basis for reasonably anticipating the likelihood of antitrust intervention. At least since the late 1970s, courts have recognized that “big is not always bad” and can often yield cost savings that ultimately redound to consumers’ benefit. That is: firm size and consumer welfare do not stand in inherent opposition. If courts were to abandon safeguards against suits that cannot sufficiently define the relevant market and plausibly show market power, antitrust litigation could easily be used as a tool to punish successful firms that prevail over competitors simply by being more efficient. In other words: antitrust law could become a tool to preserve competitor welfare at the expense of consumer welfare.

The Specter of No-Fault Antitrust Liability

The absence of any specific demonstration of market power suggests deficient lawyering or the inability to gather supporting evidence. Giving the FTC litigation team the benefit of the doubt, the latter becomes the stronger possibility. If that is the case, this implies an effort to persuade courts to adopt a de facto rule of per se illegality for any firm that achieves a certain market share. (The same concept lies behind legislative proposals to bar acquisitions for firms that cross a certain revenue or market capitalization threshold.) Effectively, any firm that reached a certain size would operate under the presumption that it has market power and has secured or maintained such power due to anticompetitive practices, rather than business prowess. This would effectively convert leading digital platforms into quasi-public utilities subject to continuous regulatory intervention. Such an approach runs counter to antitrust law’s mission to preserve, rather than displace, private ordering by market forces.  

Even at the high-water point of post-World War II antitrust zealotry (a period that ultimately ended in economic malaise), proposals to adopt a rule of no-fault liability for alleged monopolization were rejected. This was for good reason. Any such rule would likely injure consumers by precluding them from enjoying the cost savings that result from the “sweet spot” scenario in which the scale and scope economies of large firms are combined with sufficiently competitive conditions to yield reduced prices and increased convenience for consumers. Additionally, any such rule would eliminate incumbents’ incentives to work harder to offer consumers reduced prices and increased convenience, since any market share preserved or acquired as a result would simply invite antitrust scrutiny as a reward.

Remembering Why Market Power Matters

To be clear, this is not to say that “Big Tech” does not deserve close antitrust scrutiny, does not wield market power in certain segments, or has not potentially engaged in anticompetitive practices.  The fundamental point is that assertions of market power and anticompetitive conduct must be demonstrated, rather than being assumed or “proved” based largely on suggestive anecdotes.  

Perhaps market power will be shown sufficiently in Facebook’s case if the FTC elects to respond to the court’s invitation to amend its complaint with a plausible definition of the relevant market and indication of market power at this stage of the litigation. If that threshold is satisfied, then thorough consideration of the allegedly anticompetitive effect of Facebook’s WhatsApp and Instagram acquisitions may be merited. However, given the policy interest in preserving the market’s confidence in relying on the merger-review process under the Hart-Scott-Rodino Act, the burden of proof on the government should be appropriately enhanced to reflect the significant time that has elapsed since regulatory decisions not to intervene in those transactions.

It would once have seemed mundane to reiterate that market power must be reasonably demonstrated to support a monopolization claim that could lead to a major divestiture remedy. Given the populist thinking that now leads much of the legislative and regulatory discussion on antitrust policy, it is imperative to reiterate the rationale behind this elementary principle. 

This principle reflects the fact that, outside collusion scenarios, antitrust law is typically engaged in a complex exercise to balance the advantages of scale against the risks of anticompetitive conduct. At its best, antitrust law weighs competing facts in a good faith effort to assess the net competitive harm posed by a particular practice. While this exercise can be challenging in digital markets that naturally converge upon a handful of leading platforms or multi-dimensional markets that can have offsetting pro- and anti-competitive effects, these are not reasons to treat such an exercise as an anachronistic nuisance. Antitrust cases are inherently challenging and proposed reforms to make them easier to win are likely to endanger, rather than preserve, competitive markets.

The recent launch of the international Multilateral Pharmaceutical Merger Task Force (MPMTF) is just the latest example of burgeoning cooperative efforts by leading competition agencies to promote convergence in antitrust enforcement. (See my recent paper on the globalization of antitrust, which assesses multinational cooperation and convergence initiatives in greater detail.) In what is a first, the U.S. Federal Trade Commission (FTC), the U.S. Justice Department’s (DOJ) Antitrust Division, offices of state Attorneys General, the European Commission’s Competition Directorate, Canada’s Competition Bureau, and the U.K.’s Competition and Markets Authority (CMA) jointly created the MPMTF in March 2021 “to update their approach to analyzing the effects of pharmaceutical mergers.”

To help inform its analysis, in May 2021 the MPMTF requested public comments concerning the effects of pharmaceutical mergers. The MPMTF sought submissions regarding (among other issues) seven sets of questions:   

  1. What theories of harm should enforcement agencies consider when evaluating pharmaceutical mergers, including theories of harm beyond those currently considered?
  2. What is the full range of a pharmaceutical merger’s effects on innovation? What challenges arise when mergers involve proprietary drug discovery and manufacturing platforms?
  3. In pharmaceutical merger review, how should we consider the risks or effects of conduct such as price-setting practices, reverse payments, and other ways in which pharmaceutical companies respond to or rely on regulatory processes?
  4. How should we approach market definition in pharmaceutical mergers, and how is that implicated by new or evolving theories of harm?
  5. What evidence may be relevant or necessary to assess and, if applicable, challenge a pharmaceutical merger based on any new or expanded theories of harm?
  6. What types of remedies would work in the cases to which those theories are applied?
  7. What factors, such as the scope of assets and characteristics of divestiture buyers, influence the likelihood and success of pharmaceutical divestitures to resolve competitive concerns?

My research assistant Andrew Mercado and I recently submitted comments for the record addressing the questions posed by the MPMTF. We concluded:

Federal merger enforcement in general and FTC pharmaceutical merger enforcement in particular have been effective in promoting competition and consumer welfare. Proposed statutory amendments to strengthen merger enforcement not only are unnecessary, but also would, if enacted, tend to undermine welfare and would thus be poor public policy. A brief analysis of seven questions propounded by the Multilateral Pharmaceutical Merger Task Force suggests that: (a) significant changes in enforcement policies are not warranted; and (b) investigators should employ sound law and economics analysis, taking full account of merger-related efficiencies, when evaluating pharmaceutical mergers. 

While we leave it to interested readers to review our specific comments, this commentary highlights one key issue which we stressed—the importance of giving due weight to efficiencies (and, in particular, dynamic efficiencies) in evaluating pharma mergers. We also note an important critique by FTC Commissioner Christine Wilson of the treatment accorded merger-related efficiencies by U.S. antitrust enforcers.   

Discussion

Innovation in pharmaceuticals and vaccines has immensely significant economic and social consequences, as demonstrated most recently in the handling of the COVID-19 pandemic. As such, it is particularly important that public policy not stand in the way of realizing efficiencies that promote innovation in these markets. This observation applies directly, of course, to pharmaceutical antitrust enforcement, in general, and to pharma merger enforcement, in particular.

Regrettably, however, while merger-enforcement policy has been generally sound, it has somewhat undervalued merger-related efficiencies.

Although U.S. antitrust enforcers give lip service to their serious consideration of efficiencies in merger reviews, the reality appears to be quite different, as documented by Commissioner Wilson in a 2020 speech.

Wilson’s General Merger-Efficiencies Critique: According to Wilson, the combination of finding narrow markets and refusing to weigh out-of-market efficiencies has created major “legal and evidentiary hurdles a defendant must clear when seeking to prove offsetting procompetitive efficiencies.” What’s more, the “courts [have] largely continue[d] to follow the Agencies’ lead in minimizing the importance of efficiencies.” Wilson shows that “the Horizontal Merger Guidelines text and case law appear to set different standards for demonstrating harms and efficiencies,” and argues that this “asymmetric approach has the obvious potential consequence of preventing some procompetitive mergers that increase consumer welfare.” Wilson concludes on a more positive note that this problem can be addressed by having enforcers: (1) treat harms and efficiencies symmetrically; and (2) establish clear and reasonable expectations for what types of efficiency analysis will and will not pass muster.

While our filing with the MPMTF did not discuss Wilson’s general treatment of merger efficiencies, one would hope that the task force will appropriately weigh it in its deliberations. Our filing instead briefly addressed two “informational efficiencies” that may arise in the context of pharmaceutical mergers. These include:

More Efficient Resource Reallocation: The theory of the firm teaches that mergers may be motivated by the underutilization or misallocation of assets, or the opportunity to create welfare-enhancing synergies. In the pharmaceutical industry, these synergies may come from joining complementary research and development programs, combining diverse and specialized expertise that may be leveraged for better, faster drug development and more innovation.

Enhanced R&D: Currently, much of the R&D for large pharmaceutical companies is achieved through partnerships or investment in small biotechnology and research firms specializing in a single type of therapy. Whereas large pharmaceutical companies have expertise in marketing, navigating regulation, and undertaking trials of new drugs, small, research-focused firms can achieve greater advancements in medicine with smaller budgets. Furthermore, changes within firms brought about by a merger may increase innovation.

With increases in intellectual property and proprietary data that come from the merging of two companies, smaller research firms that work with the merged entity may have access to greater pools of information, enhancing the potential for innovation without increasing spending. This change not only raises the efficiency of the research being conducted in these small firms, but also increases the probability of a breakthrough without an increase in risk.

Conclusion

U.S. pharmaceutical merger enforcement has been fairly effective in forestalling anticompetitive combinations while allowing consumer welfare-enhancing transactions to go forward. Policy in this area should remain generally the same. Enforcers should continue to base enforcement decisions on sound economic theory fully supported by case-specific facts. Enforcement agencies could benefit, however, by placing a greater emphasis on efficiencies analysis. In particular, they should treat harms and efficiencies symmetrically (as recommended by Commissioner Wilson), and fully take into account likely resource-reallocation and innovation-related efficiencies.

The U.S. Supreme Court’s just-published unanimous decision in AMG Capital Management LLC v. FTC—holding that Section 13(b) of the Federal Trade Commission Act does not authorize the commission to obtain court-ordered equitable monetary relief (such as restitution or disgorgement)—is not surprising. Moreover, by dissipating the cloud of litigation uncertainty that has surrounded the FTC’s recent efforts to seek such relief, the court cleared the way for consideration of targeted congressional legislation to address the issue.

But what should such legislation provide? After briefly summarizing the court’s holding, I will turn to the appropriate standards for optimal FTC consumer redress actions, which inform a welfare-enhancing legislative fix.

The Court’s Opinion

Justice Stephen Breyer’s opinion for the court is straightforward, centering on the structure and history of the FTC Act. Section 13(b) makes no direct reference to monetary relief. Its plain language merely authorizes the FTC to seek a “permanent injunction” in federal court against “any person, partnership, or corporation” that it believes “is violating, or is about to violate, any provision of law” that the commission enforces. In addition, by its terms, Section 13(b) is forward-looking, focusing on relief that is prospective, not retrospective (this cuts against the argument that payments for prior harm may be recouped from wrongdoers).

Furthermore, the FTC Act provisions that specifically authorize conditioned and limited forms of monetary relief (Section 5(l) and Section 19) are in the context of commission cease and desist orders, involving FTC administrative proceedings, unlike Section 13(b) actions that avoid the administrative route. In sum, the court concludes that:

[T]o read §13(b) to mean what it says, as authorizing injunctive but not monetary relief, produces a coherent enforcement scheme: The Commission may obtain monetary relief by first invoking its administrative procedures and then §19’s redress provisions (which include limitations). And the Commission may use §13(b) to obtain injunctive relief while administrative proceedings are foreseen or in progress, or when it seeks only injunctive relief. By contrast, the Commission’s broad reading would allow it to use §13(b) as a substitute for §5 and §19. For the reasons we have just stated, that could not have been Congress’ intent.

The court’s opinion concludes by succinctly rejecting the FTC’s arguments to the contrary.

What Comes Next

The Supreme Court’s decision was anticipated by informed observers. All four sitting FTC Commissioners have already called for a Section 13(b) “legislative fix,” and in an April 20 hearing of the Senate Commerce Committee, Chairwoman Maria Cantwell (D-Wash.) emphasized that “[w]e have to do everything we can to protect this authority and, if necessary, pass new legislation to do so.”

What, however, should be the contours of such legislation? In considering alternative statutory rules, legislators should keep in mind not only the possible consumer benefits of monetary relief, but also the costs of error. Error costs are a ubiquitous element of public law enforcement, and this is particularly true in the case of FTC actions. Ideally, enforcers should seek to minimize the sum of the costs attributable to false positives (type I error), false negatives (type II error), administrative costs, and disincentive costs imposed on third parties, which may also be viewed as a subset of false positives. (See my 2014 piece, “A Cost-Benefit Framework for Antitrust Enforcement Policy.”)

Monetary relief is most appropriate in cases where error costs are minimal, and the quantum of harm is relatively easy to measure. This suggests a spectrum of FTC enforcement actions that may be candidates for monetary relief. Ideally, selection of targets for FTC consumer redress actions should be calibrated to yield the highest return to scarce enforcement resources, with an eye to optimal enforcement criteria.

Consider consumer protection enforcement. The strongest cases involve hardcore consumer fraud (where fraudulent purpose is clear and error is almost nil); they best satisfy accuracy in measurement and error-cost criteria. Next along the spectrum are cases of non-fraudulent but unfair or deceptive acts or practices that potentially involve some degree of error. In this category, situations involving easily measurable consumer losses (e.g., systematic failure to deliver particular goods requested or poor quality control yielding shipments of ruined goods) would appear to be the best candidates for monetary relief.

Moving along the spectrum, matters involving a higher likelihood of error and severe measurement problems should be the weakest candidates for consumer redress in the consumer protection sphere. For example, cases involving allegedly misleading advertising regarding the nature of goods, or allegedly insufficient advertising substantiation, may generate high rates of false positives and intractable difficulties in estimating consumer harm. As a matter of judgment, given resource constraints, seeking financial recoveries solely in cases of fraud or clear deception where consumer losses are apparent and readily measurable makes the most sense from a cost-benefit perspective.

Consumer redress actions are problematic for a large proportion of FTC antitrust enforcement (“unfair methods of competition”) initiatives. Many of these antitrust cases are “cutting edge” matters involving novel theories and complex fact patterns that pose a significant threat of type I error. (In comparison, type I error is low in hardcore collusion cases brought by the U.S. Justice Department where the existence, nature, and effects of cartel activity are plain). What’s more, they generally raise extremely difficult if not impossible problems in estimating the degree of consumer harm. (Even DOJ price-fixing cases raise non-trivial measurement difficulties.)

For example, consider assigning a consumer welfare loss number to a patent antitrust settlement that may or may not have delayed entry of a generic drug by some length of time (depending upon the strength of the patent) or to a decision by a drug company to modify a drug slightly just before patent expiration in order to obtain a new patent period (raising questions of valuing potential product improvements). These and other examples suggest that only rarely should the FTC pursue requests for disgorgement or restitution in antitrust cases, if error-cost-centric enforcement criteria are to be honored.

Unfortunately, the FTC currently has nothing to say about when it will seek monetary relief in antitrust matters. Commendably, in 2003, the commission issued a Policy Statement on Monetary Equitable Remedies in Competition Cases specifying that it would only seek monetary relief in “exceptional cases” involving a “[c]lear [v]iolation” of the antitrust laws. Regrettably, in 2012, a majority of the FTC (with Commissioner Maureen Ohlhausen dissenting) withdrew that policy statement and the limitations it imposed. As I concluded in a 2012 article:

This action, which was taken without the benefit of advance notice and public comment, raises troubling questions. By increasing business uncertainty, the withdrawal may substantially chill efficient business practices that are not well understood by enforcers. In addition, it raises the specter of substantial error costs in the FTC’s pursuit of monetary sanctions. In short, it appears to represent a move away from, rather than towards, an economically enlightened antitrust enforcement policy.

In a 2013 speech, then-FTC Commissioner Josh Wright also lamented the withdrawal of the 2003 Statement, and stated that he would limit:

… the FTC’s ability to pursue disgorgement only against naked price fixing agreements among competitors or, in the case of single firm conduct, only if the monopolist’s conduct has no plausible efficiency justification. This latter category would include fraudulent or deceptive conduct, or tortious activity such as burning down a competitor’s plant.

As a practical matter, the FTC does not bring cases of this sort. The DOJ brings naked price-fixing cases and the unilateral conduct cases noted are as scarce as unicorns. Given that fact, Wright’s recommendation may rightly be seen as a rejection of monetary relief in FTC antitrust cases. Based on the previously discussed serious error-cost and measurement problems associated with monetary remedies in FTC antitrust cases, one may also conclude that the Wright approach is right on the money.

Finally, a recent article by former FTC Chairman Tim Muris, Howard Beales, and Benjamin Mundel opined that Section 13(b) should be construed to “limit[] the FTC’s ability to obtain monetary relief to conduct that a reasonable person would know was dishonest or fraudulent.” Although such a statutory reading is now precluded by the Supreme Court’s decision, its incorporation in a new statutory “fix” would appear ideal. It would allow for consumer redress in appropriate cases, while avoiding the likely net welfare losses arising from a more expansive approach to monetary remedies.

Conclusion

The AMG Capital decision is sure to generate legislative proposals to restore the FTC’s ability to secure monetary relief in federal court. If Congress adopts a cost-beneficial error-cost framework in shaping targeted legislation, it should limit FTC monetary relief authority (recoupment and disgorgement) to situations of consumer fraud or dishonesty arising under the FTC’s authority to pursue unfair or deceptive acts or practices. Giving the FTC carte blanche to obtain financial recoveries in the full spectrum of antitrust and consumer protection cases would spawn uncertainty and could chill a great deal of innovative business behavior, to the ultimate detriment of consumer welfare.



Antitrust by Fiat

Jonathan M. Barnett —  23 February 2021

The Competition and Antitrust Law Enforcement Reform Act (CALERA), recently introduced in the U.S. Senate, exhibits a remarkable willingness to cast aside decades of evidentiary standards that courts have developed to uphold the rule of law by precluding factually and economically ungrounded applications of antitrust law. Without those safeguards, antitrust enforcement is prone to be driven by a combination of prosecutorial and judicial fiat. That would place at risk the free play of competitive forces that the antitrust laws are designed to protect.

Antitrust law inherently lends itself to the risk of erroneous interpretations of ambiguous evidence. Outside clear cases of interfirm collusion, virtually all conduct that might appear anti-competitive might just as easily be proven, after significant factual inquiry, to be pro-competitive. This fundamental risk of a false diagnosis has guided antitrust case law and regulatory policy since at least the Supreme Court’s landmark Continental Television v. GTE Sylvania decision in 1977 and arguably earlier. Judicial and regulatory efforts to mitigate this ambiguity, while preserving the deterrent power of the antitrust laws, have resulted in the evidentiary requirements that are targeted by the proposed bill.

Proponents of the legislative “reforms” might argue that modern antitrust case law’s careful avoidance of enforcement error yields excessive caution. To relieve regulators and courts from having to do their homework before disrupting a targeted business and its employees, shareholders, customers and suppliers, the proposed bill empowers plaintiffs to allege and courts to “find” anti-competitive conduct without having to be bound to the reasonably objective metrics upon which courts and regulators have relied for decades. That runs the risk of substituting rhetoric and intuition for fact and analysis as the guiding principles of antitrust enforcement and adjudication.

This dismissal of even a rudimentary commitment to rule-of-law principles is illustrated by two dramatic departures from existing case law in the proposed bill. Each constitutes a largely unrestrained “blank check” for regulatory and judicial overreach.

Blank Check #1

The bill includes a broad prohibition on “exclusionary” conduct, which is defined to include any conduct that “materially disadvantages 1 or more actual or potential competitors” and “presents an appreciable risk of harming competition.” That amorphous language arguably enables litigants to target a firm that offers consumers lower prices but “disadvantages” less efficient competitors that cannot match that price.

In fact, the proposed legislation specifically facilitates this litigation strategy by relieving predatory-pricing claims from having to show that pricing is below cost or likely to result ultimately in profits for the defendant. While the bill permits a defendant to escape liability by showing sufficiently countervailing “procompetitive benefits,” it places the onus on the defendant to make that showing. This burden-shifting strategy encourages lagging firms to shift competition from the marketplace to the courthouse.

Blank Check #2

The bill then removes another evidentiary safeguard by relieving plaintiffs from always having to define a relevant market. Rather, it may be sufficient to show that the contested practice gives rise to an “appreciable risk of harming competition … based on the totality of the circumstances.” It is hard to miss the high degree of subjectivity in this standard.

This ambiguous threshold runs counter to antitrust principles that require a credible showing of market power in virtually all cases except horizontal collusion. Those principles make perfect sense. Market power is the gateway concept that enables courts to distinguish between claims that plausibly target alleged harms to competition and those that do not. Without a well-defined market, it is difficult to know whether a particular practice reflects market power or market competition. Removing the market power requirement can remove any meaningful grounds on which a defendant could avoid a nuisance lawsuit or contest or appeal a conclusory allegation or finding of anticompetitive conduct.

Anti-Market Antitrust

The bill’s transparently outcome-driven approach is likely to give rise to a cloud of liability that penalizes businesses that benefit consumers through price and quality combinations that competitors cannot replicate. This obviously runs directly counter to the purpose of the antitrust laws. Certainly, winners can and sometimes do entrench themselves through potentially anticompetitive practices that should be closely scrutinized. However, the proposed legislation seems to reflect a presumption that successful businesses usually win by employing illegitimate tactics, rather than simply being the most efficient firm in the market. Under that assumption, competition law becomes a tool for redoing, rather than enabling, competitive outcomes.

While this populist approach may be popular, it is neither economically sound nor consistent with a market-driven economy in which resources are mostly allocated through pricing mechanisms and government intervention is the exception, not the rule. It would appear that some legislators would like to reverse that presumption. Far from being a victory for consumers, that outcome would constitute a resounding loss.

The slew of recent antitrust cases in the digital, tech, and pharmaceutical industries has brought significant attention to the investments many firms in these industries make in “intangibles,” such as software and research and development (R&D).

Intangibles are recognized to have an important effect on a company’s (and the economy’s) performance. For example, Jonathan Haskel and Stian Westlake (2017) highlight the increasingly large investments companies have been making in things like programming in-house software, organizational structures, and, yes, a firm’s stock of knowledge obtained through R&D. They also note the considerable difficulties associated with valuing both those investments and the outcomes (such as new operational procedures, a new piece of software, or a new patent) of those investments.

This difficulty in valuing intangibles has gone somewhat under the radar until relatively recently. There has been progress in valuing them at the aggregate level (see Ellen R. McGrattan and Edward C. Prescott (2008)) and in examining their effects at the level of individual sectors (see McGrattan (2020)). It remains difficult, however, to ascertain the value of the entire stock of intangibles held by an individual firm.

There is a method to estimate the value of one component of a firm’s stock of intangibles. Specifically, the “stock of knowledge obtained through research and development” is likely to form a large proportion of most firms’ intangibles. Treating R&D as a “stock” might not be the most common way to frame the subject, but it does have an intuitive appeal.

What a firm knows (i.e., its intellectual property) is an input to its production process, just like physical capital. The most direct way for firms to acquire knowledge is to conduct R&D, which adds to its “stock of knowledge,” as represented by its accumulated stock of R&D. In this way, a firm’s accumulated investment in R&D then becomes a stock of R&D that it can use in production of whatever goods and services it wants. Thankfully, there is a relatively straightforward (albeit imperfect) method to measure a firm’s stock of R&D that relies on information obtained from a company’s accounts, along with a few relatively benign assumptions.

This method (set out by Bronwyn Hall (1990, 1993)) uses a firm’s annual expenditures on R&D (a separate line item in most company accounts) in the “perpetual inventory” method to calculate a firm’s stock of R&D in any particular year. This perpetual inventory method is commonly used to estimate a firm’s stock of physical capital, so applying it to obtain an estimate of a firm’s stock of knowledge—i.e., their stock of R&D—should not be controversial.

All this method requires to obtain a firm’s stock of R&D for this year is knowledge of the firm’s R&D stock and its investment in R&D (i.e., its R&D expenditures) last year. This year’s R&D stock is then the sum of those R&D expenditures and the undepreciated R&D stock that is carried forward into this year: R&D stock (this year) = R&D expenditures (last year) + (1 − depreciation rate) × R&D stock (last year).

As some R&D expenditure datasets include, for example, wages paid to scientists and research workers, this is not exactly the same as calculating a firm’s physical capital stock, which would only use a firm’s expenditures on physical capital. But given that paying people to perform R&D also adds to a firm’s stock of R&D through the increased knowledge and expertise of its employees, it seems reasonable to include this in a firm’s stock of R&D.

As mentioned previously, this method requires making certain assumptions. In particular, it is necessary to assume a rate of depreciation of the stock of R&D each period. Hall suggests a depreciation rate of 15% per year (compared to roughly 7% per year for physical capital), and estimates presented by Hall, along with Wendy Li (2018), suggest the figure varies widely across industries and, in some, can be as high as 50%.

The other assumption required for this method is an estimate of the firm’s initial level of stock. To see why such an assumption is necessary, suppose that you have data on a firm’s R&D expenditure running from 1990-2016. This means that you can calculate a firm’s stock of R&D for each year once you have their R&D stock in the previous year via the formula above.

When calculating the firm’s R&D stock for 2016, you need to know what their R&D stock was in 2015, while to calculate their R&D stock for 2015 you need to know their R&D stock in 2014, and so on backward until you reach the first year for which you have data: in this case, 1990.

However, working out the firm’s R&D stock in 1990 requires data on the firm’s R&D stock in 1989. The dataset does not contain any information about 1989, nor the firm’s actual stock of R&D in 1990. Hence, it is necessary to make an assumption regarding the firm’s stock of R&D in 1990.

There are several different assumptions one can make regarding this “starting value.” You could assume it is just a very small number. Or you can assume, as per Hall, that it is the firm’s R&D expenditure in 1990 divided by the sum of the R&D depreciation and average growth rates (the latter being taken as 8% per year by Hall). Note that, given the high depreciation rates for the stock of R&D, it turns out that the exact starting value does not matter significantly (particularly in years toward the end of the dataset) if you have a sufficiently long data series. At a 15% depreciation rate, more than 50% of the initial value disappears after five years.
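For concreteness, the calculation described above can be sketched in a few lines of Python. This is a minimal illustration using the figures quoted in the text (15% depreciation, 8% pre-sample growth, and Hall’s starting-value assumption); the function name, the timing convention for expenditures, and the constant spending series are illustrative assumptions, and real applications would draw expenditures from company accounts.

```python
def rd_stock(expenditures, depreciation=0.15, presample_growth=0.08):
    """Estimate a firm's R&D stock for each year of an expenditure series.

    expenditures: annual R&D spending, earliest year first.
    """
    # Starting value (Hall's assumption): first-year spending divided by
    # the sum of the depreciation and average growth rates.
    stocks = [expenditures[0] / (depreciation + presample_growth)]
    # Each later year's stock is last year's spending plus the
    # undepreciated stock carried forward (timing conventions vary;
    # some studies add current-year spending instead).
    for spend in expenditures[:-1]:
        stocks.append(spend + (1 - depreciation) * stocks[-1])
    return stocks


# Illustrative series: a firm spending a constant 100 per year.
# The stock converges toward spending/depreciation = 100/0.15 ≈ 667,
# and the influence of the assumed starting value fades quickly.
stocks = rd_stock([100.0] * 10)
```

With a constant spending series, the stock approaches spending divided by the depreciation rate regardless of the starting value, which is why the starting assumption matters little for sufficiently long series.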

Although there are other methods to measure a firm’s stock of R&D, these tend to provide less information or rely on stronger assumptions than the approach described above does. For example, sometimes a firm’s stock of R&D is measured using a simple count of the number of patents it holds. However, this approach does not take into account the “value” of a patent. Since, by definition, each patent is unique (with differing numbers of years to run, levels of quality, ability to be challenged or worked around, and so on), it is unlikely to be appropriate to use an “average value of patents sold recently” to value it. At least with the perpetual inventory method described above, a monetary value for a firm’s stock of R&D can be obtained.

The perpetual inventory method also provides a way to calculate market shares of R&D in R&D-intensive industries, which can be used alongside current measures. This would be akin to looking at capacity shares in some manufacturing industries. Of course, using market shares in R&D industries can be fraught with issues, such as whether it is appropriate to use a backward-looking measure to assess competitive constraints in a forward-looking industry. This is why any investigation into such industries should also look, for example, at a firm’s research pipeline.
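A share-of-R&D-stock calculation of the kind suggested above is straightforward once stocks are estimated: each firm’s stock is divided by the industry total. The firm names and figures in this sketch are hypothetical:

```python
# Hypothetical R&D stocks for three firms (illustrative values only,
# e.g., as produced by the perpetual inventory method).
stocks = {"FirmA": 500.0, "FirmB": 300.0, "FirmC": 200.0}

# A firm's "R&D market share" is its stock as a fraction of the
# industry total, analogous to capacity shares in manufacturing.
total = sum(stocks.values())
shares = {firm: stock / total for firm, stock in stocks.items()}
```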

Naturally, this only provides for the valuation of the R&D stock and says nothing about valuing other intangibles that are likely to play an important role in a much wider range of industries. Nonetheless, this method could provide another means for competition authorities to assess the current and historical state of R&D stocks in industries in which R&D plays an important part. It would be interesting to see what firms’ shares of R&D stocks look like, for example, in the pharmaceutical and tech industries.

In the hands of a wise philosopher-king, the Sherman Act’s hard-to-define prohibitions of “restraints of trade” and “monopolization” are tools that will operate inevitably to advance the public interest in competitive markets. In the hands of real-world litigators, regulators and judges, those same words can operate to advance competitors’ private interests in securing commercial advantages through litigation that could not be secured through competition in the marketplace. If successful, this strategy may yield outcomes that run counter to antitrust law’s very purpose.

The antitrust lawsuit filed by Epic Games against Apple in August 2020, and Apple’s antitrust lawsuit against Qualcomm (settled in April 2019), suggest that antitrust law is heading in this unfortunate direction.

From rent-minimization to rent-maximization

The first step in converting antitrust law from an instrument to minimize rents to an instrument to maximize rents lies in expanding the statute’s field of application on the apparently uncontroversial grounds of advancing the public interest in “vigorous” enforcement. In surprisingly short order, this largely unbounded vision of antitrust’s proper scope has become the dominant fashion in policy discussions, at least as expressed by some legislators, regulators, and commentators.

Following the new conventional wisdom, antitrust law has pursued over the past decades an overly narrow path, consequently overlooking and exacerbating a panoply of social ills that extend well beyond the mission to “merely” protect the operation of the market pricing mechanism. This line of argument is typically coupled with the assertion that courts, regulators and scholars have been led down this path by incumbents that welcome the relaxed scrutiny of a purportedly deferential antitrust policy.

This argument, and the related theory of regulatory capture, has things roughly backwards.

Placing antitrust law at the service of a largely undefined range of social purposes set by judicial and regulatory fiat threatens to render antitrust a tool that can be easily deployed to favor the private interests of competitors rather than the public interest in competition. Without the intellectual discipline imposed by the consumer welfare standard (and, outside of per se illegal restraints, operationalized through the evidentiary requirement of competitive harm), the rhetoric of antitrust provides excellent cover for efforts to re-engineer the rules of the game in lieu of seeking to win the game as it has been played.

Epic Games v. Apple

A nascent symptom of this expansive form of antitrust is provided by the much-publicized lawsuit brought by Epic Games, the maker of the wildly popular video game Fortnite, against Apple, the operator of the even more wildly popular App Store. On August 13, 2020, Epic added a “direct” payment-processing option to its Fortnite game, in violation of the developer terms of use that govern the App Store. In response, Apple exercised its contractual right to remove Fortnite from the App Store, triggering Epic’s antitrust suit. The same sequence has ensued between Epic Games and Google in connection with the Google Play Store. Both litigations are best understood as breach-of-contract disputes cloaked in the guise of antitrust causes of action.

In suggesting that a jury trial would be appropriate in Epic Games’ suit against Apple, the district court judge reportedly stated that the case is “on the frontier of antitrust law” and that “[i]t is important enough to understand what real people think.” That statement seems to suggest that this is a close case under antitrust law. I respectfully disagree. Based on currently available information and applicable law, Epic’s argument suffers from two serious vulnerabilities that would seem difficult for the plaintiff to overcome.

A contestably narrow market definition

Epic states three related claims: (1) Apple has a monopoly in the relevant market, defined as the App Store, (2) Apple maintains its monopoly by contractually precluding developers from distributing iOS-compatible versions of their apps outside the App Store, and (3) Apple maintains a related monopoly in the payment processing services market for the App Store by contractually requiring developers to use Apple’s processing service.

This market definition, and the associated chain of reasoning, is subject to significant doubt, both as a legal and factual matter.

Epic’s narrow definition of the relevant market as the App Store (rather than app-distribution platforms generally) conveniently results in a 100% market share for Apple. Inconveniently, federal case law is generally reluctant to adopt single-brand market definitions. While the Supreme Court recognized a single-brand market in its 1992 decision in Eastman Kodak Co. v. Image Technical Services, that case is widely considered an outlier in light of subsequent case law. As a federal district court observed in Spahr v. Leegin Creative Leather Products (E.D. Tenn. 2008): “Courts have consistently refused to consider one brand to be a relevant market of its own when the brand competes with other potential substitutes.”

The App Store would seem to fall into this typical category. The customer base of existing and new Fortnite users can still access the game through multiple platforms and on multiple devices other than the iPhone, including a PC, laptop, game console, and non-Apple mobile devices. (While Google has also removed Fortnite from the Google Play Store due to the added direct-payment feature, users can, at some inconvenience, install the game manually on Android phones.)

Given these alternative distribution channels, it is at a minimum unclear whether Epic is foreclosed from reaching a substantial portion of its consumer base, which may already access the game on alternative platforms or could potentially do so at moderate incremental transaction costs. In the language of platform economics, it appears to be technologically and economically feasible for the target consumer base to “multi-home.” If multi-homing and related switching costs are low, even a 100% share of the App Store submarket would not translate into market power in the broader and potentially more economically relevant market for app distribution generally.

An implausible theory of platform lock-in

Even if it were conceded that the App Store is the relevant market, Epic’s claim is not especially persuasive, both as an economic and a legal matter. That is because there is no evidence that Apple is exploiting any such hypothetically attributed market power to increase the rents extracted from developers and indirectly impose deadweight losses on consumers.

In the classic scenario of platform lock-in, a three-step sequence is observed: (1) a new firm acquires a high market share in a race for platform dominance, (2) the platform winner is protected by network effects and switching costs, and (3) the entrenched platform “exploits” consumers by inflating prices (or imposing other adverse terms) to capture monopoly rents. This economic model is reflected in the case law on lock-in claims, which typically requires that the plaintiff identify an adverse change by the defendant in pricing or other terms after users were allegedly locked-in.

The history of the App Store does not conform to this model. Apple has always assessed a 30% fee, and the same is true of every other leading distributor of games in the mobile and PC markets, including the Google Play Store, the App Store’s rival in the mobile market, and Steam, the dominant distributor of video games in the PC market. This long-standing market practice suggests that the 30% fee most likely reflects an efficiency-driven business rationale, rather than an effort to entrench a monopoly position that Apple did not enjoy when the practice was first adopted. That is: even if Apple is deemed a “monopolist” for Section 2 purposes, it is not taking any “illegitimate” actions that could constitute monopolization or attempted monopolization.

The logic of the 70/30 split

Uncovering the business logic behind the 70/30 split in the app distribution market is not too difficult.

The 30% fee appears to be a low-transaction-cost practice that enables the distributor to fund a variety of services, including app-development tools, marketing support, and security and privacy protections, all of which are supplied without a separately priced fee and therefore do not require service-by-service negotiation and renegotiation. The same rationale credibly applies to the integrated payment-processing services that Apple supplies for in-app purchases.

These services deliver significant value and would otherwise be difficult to replicate cost-effectively, protect the App Store’s valuable stock of brand capital (which yields positive spillovers for app developers on the site), and lower the costs of joining and participating in the App Store. Additionally, the 30% fee cross-subsidizes the delivery of these services to the approximately 80% of apps on the App Store that are ad-based and for which no fee is assessed, which in turn lowers entry costs and expands the number and variety of product options for platform users. These would all seem to be attractive outcomes from a competition policy perspective.

Epic’s objection

Epic would object to this line of argument by observing that it only charges a 12% fee to distribute other developers’ games on its own Epic Games Store.

Yet Epic’s lower fee is reportedly conditioned, at least in some cases, on the developer offering the game exclusively on the Epic Games Store for a certain period of time. Moreover, the services provided on the Epic Games Store may not be comparable to the extensive suite of services provided on the App Store and other leading distributors that follow the 30% standard. Additionally, the user base a developer can expect to access through the Epic Games Store is in all likelihood substantially smaller than the audience that can be reached through the App Store and other leading app and game distributors, which is then reflected in the higher fees charged by those platforms.

Hence, even the large fee differential may simply reflect the more extensive services and larger audiences available on the App Store, the Google Play Store, and other leading platforms, as compared to the Epic Games Store, rather than the unilateral extraction of market rents at developers’ and consumers’ expense.

Antitrust is about efficiency, not distribution

Epic says the standard 70/30 split between game publishers and app distributors is “excessive” while others argue that it is historically outdated.

Neither of these is a credible antitrust argument. Renegotiating the division of economic surplus between game suppliers and distributors is not the concern of antitrust law, which (as properly defined) should only take an interest if either (i) Apple is colluding on the 30% fee with other app distributors, or (ii) Apple is taking steps that preclude entry into the app-distribution market and lack any legitimate business justification. No one claims there is evidence of the former possibility and, without further evidence, the latter possibility is not especially compelling given the uniform use of the 70/30 split across the industry (which, as noted, can be derived from a related set of credible efficiency justifications). It is even less compelling in the face of evidence that output is rapidly accelerating, not declining, in the gaming-app market: in the first half of 2020, approximately 24,500 new games were added to the App Store.

If this conclusion is right, then Epic’s lawsuit against Apple does not seem to have much to do with the public interest in preserving market competition.

But it clearly has much to do with the business interest of an input supplier in minimizing its distribution costs and maximizing its profit margin. That category includes not only Epic Games but Tencent, the world’s largest video game publisher and the holder of a 40% equity stake in Epic. Tencent also owns Riot Games (the publisher of “League of Legends”), an 84% stake in Supercell (the publisher of “Clash of Clans”), and a 5% stake in Activision Blizzard (the publisher of “Call of Duty”). It is unclear how an antitrust claim that, if successful, would simply redistribute economic value from leading game distributors to leading game developers has any necessary relevance to antitrust’s objective to promote consumer welfare.

The prequel: Apple v. Qualcomm

Ironically (and, as Dirk Auer has similarly observed), there is a symmetry between Epic’s claims against Apple and the claims previously pursued by Apple (and, concurrently, the Federal Trade Commission) against Qualcomm.

In that litigation, Apple contested the terms of the licensing arrangements under which Qualcomm made available its wireless communications patents to Apple (more precisely, Foxconn, Apple’s contract manufacturer), arguing that the terms were incompatible with Qualcomm’s commitment to “fair, reasonable and nondiscriminatory” (“FRAND”) licensing of its “standard-essential” patents (“SEPs”). Like Epic v. Apple, Apple v. Qualcomm was fundamentally a contract dispute, with the difference that Apple was in the position of a third-party beneficiary of the commitment that Qualcomm had made to the governing standard-setting organization. Like Epic, Apple sought to recharacterize this contractual dispute as an antitrust question, arguing that Qualcomm’s licensing practices constituted anticompetitive actions to “monopolize” the market for smartphone modem chipsets.

Theory meets evidence

The rhetoric used by Epic in its complaint echoes the rhetoric used by Apple in its briefs and other filings in the Qualcomm litigation. Apple (like the FTC) had argued that Qualcomm imposed a “tax” on competitors by requiring that any purchaser of Qualcomm’s chipsets concurrently enter into a license for Qualcomm’s SEP portfolio relating to 3G and 4G/LTE-enabled mobile communications devices.

Yet the history and performance of the mobile communications market simply did not track Apple’s (and the FTC’s continuing) characterization of Qualcomm’s licensing fee as a socially costly drag on market growth and, by implication, consumer welfare.

If this assertion had merit, then the decades-old wireless market should have exhibited a dismal history of increasing prices, slow user adoption, and lagging innovation. In actuality, the wireless market has grown continuously since its inception, characterized by declining quality-adjusted prices, expanding output, relentless innovation, and rapid adoption across a broad range of income segments.

Given this compelling real-world evidence, the only remaining line of argument (still being pursued by the FTC) that could justify antitrust intervention is a theoretical conjecture that the wireless market might have grown even faster under some alternative IP licensing arrangement. This assertion rests precariously on the speculative assumption that any such arrangement would have induced the same or higher level of aggregate investment in innovation and commercialization activities. That fragile chain of “what if” arguments hardly seems a sound basis on which to rewrite the legal infrastructure behind the billions of dollars of licensing transactions that support the economically thriving smartphone market and the even larger ecosystem that has grown around it.

Antitrust litigation as business strategy

Given the absence of compelling evidence of competitive harm from Qualcomm’s allegedly anticompetitive licensing practices, Apple’s litigation would seem to be best interpreted as an economically rational attempt by a downstream producer to renegotiate a downward adjustment in the fees paid to an upstream supplier of critical technology inputs. (In fact, those are precisely the terms on which Qualcomm in 2015 settled the antitrust action brought against it by China’s competition regulator, to the obvious benefit of local device producers.) The Epic Games litigation is a mirror image fact pattern in which an upstream supplier of content inputs seeks to deploy antitrust law strategically for the purposes of minimizing the fees it pays to a leading downstream distributor.

Both litigations suffer from the same flaw. Private interests concerning the division of an existing economic value stream—a business question that is a matter of indifference from an efficiency perspective—are erroneously (or, at least, reflexively) conflated with the public interest in preserving the free play of competitive forces that maximizes the size of that value stream.

Conclusion: Remaking the case for “narrow” antitrust

The Epic v. Apple and Apple v. Qualcomm disputes illustrate the unproductive rent-seeking outcomes to which antitrust law will inevitably be led if, as is being widely advocated, it is decoupled from its well-established foundation in promoting consumer welfare—and not competitor welfare.

Some proponents of a more expansive approach to antitrust enforcement are convinced that expanding the law’s scope of application will improve market efficiency by providing greater latitude for expert regulators and courts to reengineer market structures to the public benefit. Yet any substitution of top-down expert wisdom for the bottom-up trial-and-error process of market competition can easily yield “false positives” in which courts and regulators take actions that counterproductively intervene in markets that are already operating under reasonably competitive conditions. Additionally, an overly expansive approach toward the scope of antitrust law will induce private firms to shift resources toward securing advantages over competitors through lobbying and litigation, rather than seeking to win the race to deliver lower-cost and higher-quality products and services. Neither outcome promotes the public’s interest in a competitive marketplace.

What is a search engine?

Dirk Auer —  21 October 2020

What is a search engine? This might seem like an innocuous question, but it lies at the heart of the antitrust complaint filed against Google by the US Department of Justice and a group of state attorneys general, as well as the European Commission’s Google Search and Android decisions. It is also central to a report published by the UK’s Competition & Markets Authority (“CMA”). To varying degrees, all of these proceedings are premised on the assumption that Google enjoys a monopoly/dominant position over online search. But things are not quite this simple.

Despite years of competition decisions and policy discussions, there are still many unanswered questions concerning the operation of search markets. For example, it is still unclear exactly which services compete against Google Search, and how this might evolve in the near future. Likewise, there has only been limited scholarly discussion as to how a search engine monopoly would exert its market power. In other words, what does a restriction of output look like on a search platform—particularly on the user side?

Answering these questions will be essential if authorities wish to successfully bring an antitrust suit against Google for conduct involving search. Indeed, as things stand, these uncertainties greatly complicate efforts (i) to rigorously define the relevant market(s) in which Google Search operates, (ii) to identify potential anticompetitive effects, and (iii) to apply the quantitative tools that usually underpin antitrust proceedings.

In short, as explained below, antitrust authorities and other plaintiffs have their work cut out if they are to prevail in court.

Consumers demand information 

For a start, identifying the competitive constraints faced by Google presents authorities and plaintiffs with an important challenge.

Even proponents of antitrust intervention recognize that the market for search is complex. For instance, the DOJ and state AGs argue that Google dominates a narrow market for “general search services”—as opposed to specialized search services, content sites, social networks, online marketplaces, and the like. The EU Commission reached the same conclusion in its Google Search decision. Finally, commenting on the CMA’s online advertising report, Fiona Scott Morton and David Dinielli argue that:

General search is a relevant market […]

In this way, an individual specialized search engine competes with a small fraction of what the Google search engine does, because a user could employ either for one specific type of search. The CMA concludes that, from the consumer standpoint, a specialized search engine exerts only a limited competitive constraint on Google.

(Note that the CMA stressed that it did not perform a market definition exercise: “We have not carried out a formal market definition assessment, but have instead looked at competitive constraints across the sector…”).

In other words, the above critics recognize that search engines are merely tools that can serve multiple functions, and that competitive constraints may be different for some of these. But this has wider ramifications that policymakers have so far overlooked. 

When quizzed about his involvement with Neuralink (a company working on implantable brain–machine interfaces), Elon Musk famously argued that human beings already share a near-symbiotic relationship with machines (a point already made by others):

The purpose of Neuralink [is] to create a high-bandwidth interface to the brain such that we can be symbiotic with AI. […] Because we have a bandwidth problem. You just can’t communicate through your fingers. It’s just too slow.

Commentators were quick to spot the implications of this technology for the search industry:

Imagine a world when humans would no longer require a device to search for answers on the internet, you just have to think of something and you get the answer straight in your head from the internet.

As things stand, this example still belongs to the realm of sci-fi. But it neatly illustrates a critical feature of the search industry. 

Search engines are just the latest iteration (but certainly not the last) of technology that enables human beings to access specific pieces of information more rapidly. Before the advent of online search, consumers used phone directories, paper maps, encyclopedias, and other tools to find the information they were looking for. They would read newspapers and watch television to know the weather forecast. They went to public libraries to undertake research projects (some still do), etc.

And, in some respects, the search engine is already obsolete for many of these uses. For instance, virtual assistants like Alexa, Siri, Cortana and Google’s own Google Assistant offering can perform many functions that were previously the preserve of search engines: checking the weather, finding addresses and asking for directions, looking up recipes, answering general knowledge questions, finding goods online, etc. Granted, these virtual assistants partly rely on existing search engines to complete tasks. However, Google is much less dominant in this space, and search engines are not the sole source on which virtual assistants rely to generate results. Amazon’s Alexa provides a fitting example (here and here).

Along similar lines, it has been widely reported that 60% of online shoppers start their search on Amazon, while only 26% opt for Google Search. In other words, Amazon’s ability to rapidly show users the product they are looking for somewhat alleviates the need for a general search engine. In turn, this certainly constrains Google’s behavior to some extent. And much of the same applies to other websites that provide a specific type of content (think of Twitter, LinkedIn, Tripadvisor, Booking.com, etc.)

Finally, it is also revealing that the most common searches on Google are, in all likelihood, made to reach other websites—a function for which competition is literally endless.

The upshot is that Google Search and other search engines perform a bundle of functions. Most of these can be done via alternative means, and this will increasingly be the case as technology continues to advance. 

This is all the more important given that the vast majority of search engine revenue derives from roughly 30 percent of search terms (notably those that are linked to product searches). The remaining search terms are effectively a loss leader. And these profitable searches also happen to be those where competition from alternative means is, in all likelihood, the strongest (this includes competition from online retail platforms, and online travel agents like Booking.com or Kayak, but also from referral sites, direct marketing, and offline sources). In turn, this undermines US plaintiffs’ claims that Google faces little competition from rivals like Amazon, because they don’t compete for the entirety of Google’s search results (in other words, Google might face strong competition for the most valuable ads):

108. […] This market share understates Google’s market power in search advertising because many search-advertising competitors offer only specialized search ads and thus compete with Google only in a limited portion of the market. 

Critics might mistakenly take the above for an argument that Google has no market power because competition is “just a click away”. But the point is more subtle, and has important implications as far as market definition is concerned.

Authorities should not define the search market by arguing that no other rival is quite like Google (or one of its rivals)—as the DOJ and state AGs did in their complaint:

90. Other search tools, platforms, and sources of information are not reasonable substitutes for general search services. Offline and online resources, such as books, publisher websites, social media platforms, and specialized search providers such as Amazon, Expedia, or Yelp, do not offer consumers the same breadth of information or convenience. These resources are not “one-stop shops” and cannot respond to all types of consumer queries, particularly navigational queries. Few consumers would find alternative sources a suitable substitute for general search services. Thus, there are no reasonable substitutes for general search services, and a general search service monopolist would be able to maintain quality below the level that would prevail in a competitive market. 

And as the EU Commission did in the Google Search decision:

(162) For the reasons set out below, there is, however, limited demand side substitutability between general search services and other online services. […]

(163) There is limited substitutability between general search services and content sites. […]

(166) There is also limited substitutability between general search services and specialised search services. […]

(178) There is also limited substitutability between general search services and social networking sites.

Ad absurdum, if consumers suddenly decided to access information via other means, Google could be the only firm to provide general search results and yet have absolutely no market power. 

Take the example of Yahoo: Despite arguably remaining the most successful “web directory”, it likely lost any market power that it had when Google launched a superior — and significantly more successful — type of search engine. Google Search may not have provided a complete, literal directory of the web (as did Yahoo), but it offered users faster access to the information they wanted. In short, the Yahoo example shows that being unique is not equivalent to having market power. Accordingly, any market definition exercise that merely focuses on the idiosyncrasies of firms is likely to overstate their actual market power. 

Given what precedes, the question that authorities should ask is thus whether Google Search (or another search engine) performs so many unique functions that it may be in a position to restrict output. So far, no one appears to have convincingly answered this question.

Similar uncertainties surround the question of how a search engine might restrict output, especially on the user side of the search market. Accordingly, authorities will struggle to produce evidence (i) that Google has market power, especially on the user side of the market, and (ii) that its behavior has anticompetitive effects.

Consider the following:

The SSNIP test (which is the standard method of defining markets in antitrust proceedings) is inapplicable to the consumer side of search platforms. Indeed, it is simply impossible to apply a hypothetical 10% price increase to goods that are given away for free.
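To make the point concrete, here is a minimal sketch of the arithmetic behind the hypothetical-monopolist exercise, using a standard critical-loss formula; the function name and all figures are illustrative assumptions, not drawn from any case record:

```python
def critical_loss(ssnip: float, margin: float) -> float:
    """Fraction of sales a hypothetical monopolist can afford to lose
    before a price increase of `ssnip` (as a share of price) becomes
    unprofitable, given a price-cost margin of `margin`."""
    return ssnip / (ssnip + margin)

# Illustrative: a 10% SSNIP with a 40% margin is profitable only if
# fewer than 20% of unit sales divert to products outside the market.
print(round(critical_loss(0.10, 0.40), 2))  # 0.2

# On the user side of a search engine, however, the price is zero, so
# a 10% "increase" raises it by exactly nothing.
price = 0.0
print(price * 1.10)  # 0.0
```

The zero-price side thus defeats the test mechanically, not merely practically: any percentage increase of a zero price is still zero, so there is no hypothetical price rise whose profitability can be assessed.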

This raises a deeper question: how would a search engine exercise its market power? 

For a start, it seems unlikely that it would start charging fees to its users. For instance, empirical research pertaining to the magazine industry (also an ad-based two-sided market) suggests that increased concentration does not lead to higher magazine prices. Minjae Song notably finds that:

Taking the advantage of having structural models for both sides, I calculate equilibrium outcomes for hypothetical ownership structures. Results show that when the market becomes more concentrated, copy prices do not necessarily increase as magazines try to attract more readers.

It is also far from certain that a dominant search engine would necessarily increase the number of ads it displays. To the contrary, market power on the advertising side of the platform might lead search engines to decrease the number of advertising slots that are available (i.e., reducing advertising output), thus showing fewer ads to users.

Finally, it is not obvious that market power would lead search engines to significantly degrade their product (as this could ultimately hurt ad revenue). For example, empirical research by Avi Goldfarb and Catherine Tucker suggests that there is some limit to the type of adverts that search engines could profitably impose upon consumers. They notably find that ads that are both obtrusive and targeted decrease subsequent purchases:

Ads that match both website content and are obtrusive do worse at increasing purchase intent than ads that do only one or the other. This failure appears to be related to privacy concerns: the negative effect of combining targeting with obtrusiveness is strongest for people who refuse to give their income and for categories where privacy matters most.

The preceding paragraphs find some support in the theoretical literature on two-sided markets, which suggests that competition on the user side of search engines is likely to be particularly intense and beneficial to consumers (because users are more likely to single-home than advertisers, and because each additional user creates a positive externality on the advertising side of the market). For instance, Jean-Charles Rochet and Jean Tirole find that:

The single-homing side receives a large share of the joint surplus, while the multi-homing one receives a small share.

This is just a restatement of Mark Armstrong’s “competitive bottlenecks” theory:

Here, if it wishes to interact with an agent on the single-homing side, the multi-homing side has no choice but to deal with that agent’s chosen platform. Thus, platforms have monopoly power over providing access to their single-homing customers for the multi-homing side. This monopoly power naturally leads to high prices being charged to the multi-homing side, and there will be too few agents on this side being served from a social point of view (Proposition 4). By contrast, platforms do have to compete for the single-homing agents, and high profits generated from the multi-homing side are to a large extent passed on to the single-homing side in the form of low prices (or even zero prices).
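The competitive-bottleneck logic can be illustrated with a toy calculation. This is a stylized sketch, not Armstrong’s formal model, and every number below is an assumption chosen purely for illustration:

```python
# Stylized "competitive bottleneck": users single-home, advertisers
# multi-home. All figures are illustrative assumptions.
COST_PER_USER = 2.0      # platform's cost of serving one user
AD_VALUE_PER_USER = 5.0  # advertiser surplus from reaching one user

# Each platform is the sole gateway to its own users, so it can charge
# the multi-homing side (advertisers) close to their full surplus:
ad_price = AD_VALUE_PER_USER

# Competition for single-homing users then passes that revenue back to
# them: platforms undercut one another down to cost net of ad revenue,
# floored here at a zero price.
user_price = max(0.0, COST_PER_USER - ad_price)

print(ad_price, user_price)  # 5.0 0.0
```

Under these assumptions, the advertising side pays the full freight while users ride free, which matches the zero-price outcome observed on the user side of real-world search engines.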

All of this is not to suggest that Google Search has no market power, or that monopoly is necessarily less problematic in the search engine industry than in other markets. 

Instead, the argument is that analyzing competition on the user side of search platforms is unlikely to yield dispositive evidence of market power or anticompetitive effects. This is because market power is hard to measure on this side of the market, and because even a monopoly platform might not significantly restrict user output. 

That might explain why the DOJ and state AGs’ analysis of anticompetitive effects is so limited. Take the following paragraph (provided without further supporting evidence):

167. By restricting competition in general search services, Google’s conduct has harmed consumers by reducing the quality of general search services (including dimensions such as privacy, data protection, and use of consumer data), lessening choice in general search services, and impeding innovation. 

Given these inherent difficulties, antitrust investigators would do better to focus on the side of those platforms where mainstream IO tools are much easier to apply and where a dominant search engine would likely restrict output: the advertising market. Not only is it the market where search engines are most likely to exert their market power (thus creating a deadweight loss), but — because it involves monetary transactions — this side of the market lends itself to the application of traditional antitrust tools.  

Looking at the right side of the market

Finally, and unfortunately for Google’s critics, available evidence suggests that its position on the (online) advertising market might not meet the requirements necessary to bring a monopolization case (at least in the US).

For a start, online advertising appears to exhibit the prima facie signs of a competitive market. As Geoffrey Manne, Sam Bowman and Eric Fruits have argued:

Over the past decade, the price of advertising has fallen steadily while output has risen. Spending on digital advertising in the US grew from $26 billion in 2010 to nearly $130 billion in 2019, an average increase of 20% a year. Over the same period the Producer Price Index for Internet advertising sales declined by nearly 40%. The rising spending in the face of falling prices indicates the number of ads bought and sold increased by approximately 27% a year. Since 2000, advertising spending has been falling as a share of GDP, with online advertising growing as a share of that. The combination of increasing quantity, decreasing cost, and increasing total revenues are consistent with a growing and increasingly competitive market.
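The growth rates quoted above are easy to verify with back-of-the-envelope arithmetic; the figures below simply restate the quote’s numbers, with nothing new assumed:

```python
# US digital ad spend: ~$26B (2010) to ~$130B (2019), per the quote.
spend_2010, spend_2019, years = 26.0, 130.0, 9

# Compound annual growth rate of spending: roughly 20% a year.
spend_cagr = (spend_2019 / spend_2010) ** (1 / years) - 1
print(round(spend_cagr * 100))  # 20

# Quantity of ads ~ spending / price. With the internet-advertising
# PPI down ~40% (prices at 60% of their 2010 level), implied quantity
# grew about 27% a year, as the quote states.
quantity_cagr = ((spend_2019 / spend_2010) / 0.6) ** (1 / years) - 1
print(round(quantity_cagr * 100))  # 27
```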

Second, empirical research suggests that the market might need to be widened to include offline advertising. For instance, Avi Goldfarb and Catherine Tucker show that there can be important substitution effects between online and offline advertising channels:

Using data on the advertising prices paid by lawyers for 139 Google search terms in 195 locations, we exploit a natural experiment in “ambulance-chaser” regulations across states. When lawyers cannot contact clients by mail, advertising prices per click for search engine advertisements are 5%–7% higher. Therefore, online advertising substitutes for offline advertising.

Of course, a careful examination of the advertising industry could also lead authorities to define a narrower relevant market. For example, the DOJ and state AG complaint argued that Google dominated the “search advertising” market:

97. Search advertising in the United States is a relevant antitrust market. The search advertising market consists of all types of ads generated in response to online search queries, including general search text ads (offered by general search engines such as Google and Bing) […] and other, specialized search ads (offered by general search engines and specialized search providers such as Amazon, Expedia, or Yelp). 

Likewise, the European Commission concluded that Google dominated the market for “online search advertising” in the AdSense case (though the full decision has not yet been made public). Finally, the CMA’s online platforms report found that display and search advertising belonged to separate markets. 

But these are empirical questions that could be answered dispositively by applying traditional antitrust tools, such as the SSNIP test. And yet, there is no indication that the authorities behind the US complaint undertook this type of empirical analysis (and until its AdSense decision is made public, it is not clear that the EU Commission did so either). Accordingly, there is no guarantee that US courts will go along with the DOJ and state AGs’ findings.
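The SSNIP ("small but significant non-transitory increase in price") test asks whether a hypothetical monopolist over the candidate market could profitably sustain a 5–10% price increase; if too many sales would divert to products outside the candidate market (here, for instance, offline advertising), the market must be drawn more broadly. A stylized critical-loss version of the test is sketched below; the margin and diversion figures are entirely hypothetical, chosen only to illustrate the mechanics:

```python
# Stylized critical-loss version of the SSNIP (hypothetical monopolist) test.
# The margin and diversion numbers below are hypothetical, for illustration only.

def critical_loss(price_rise: float, margin: float) -> float:
    """Fraction of sales the hypothetical monopolist can afford to lose
    before a price rise becomes unprofitable, using the standard formula
    X / (X + M), where X is the price rise and M is the gross margin."""
    return price_rise / (price_rise + margin)

price_rise = 0.05     # the "small but significant" 5% increase
margin = 0.40         # hypothetical gross margin on ad sales
actual_loss = 0.15    # hypothetical share of sales diverted (e.g., to offline ads)

cl = critical_loss(price_rise, margin)   # 0.05 / 0.45, about 11.1%
if actual_loss > cl:
    print("Price rise unprofitable -> candidate market too narrow; widen it")
else:
    print("Price rise profitable -> candidate market is a relevant market")
```

With these illustrative numbers, the hypothetical monopolist loses more sales than it can afford, which is exactly the kind of finding that would push the relevant market beyond "search advertising" alone.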

In short, it is far from certain that Google currently enjoys an advertising monopoly, especially if the market is defined more broadly than that for “search advertising” (or the even narrower market for “General Search Text Advertising”). 

Concluding remarks

The preceding paragraphs have argued that a successful antitrust case against Google is anything but a foregone conclusion. In order to bring a successful suit, authorities would, among other things, need to figure out just what market it is that Google is monopolizing. In turn, that would require a finer understanding of what competition, and monopoly, look like in the search and advertising industries.

Apple’s legal team will be relieved that “you reap what you sow” is just a proverb. After a long-running antitrust battle against Qualcomm unsurprisingly ended in failure, Apple now faces antitrust accusations of its own (most notably from Epic Games). Somewhat paradoxically, this turn of events might cause Apple to see its previous defeat in a new light. Indeed, the well-established antitrust principles that scuppered Apple’s challenge against Qualcomm will now be the rock upon which it builds its legal defense.

But while Apple’s reversal of fortunes might seem anecdotal, it neatly illustrates a fundamental – and often overlooked – principle of antitrust policy: Antitrust law is about maximizing consumer welfare. Accordingly, the allocation of surplus between two companies is only incidentally relevant to antitrust proceedings, and it certainly is not a goal in and of itself. In other words, antitrust law is not about protecting David from Goliath.

Jockeying over the distribution of surplus

Or at least that is the theory. In practice, however, most antitrust cases are but small parts of much wider battles in which corporations use courts and regulators to jockey for market position and/or tilt the distribution of surplus in their favor. The Microsoft competition suits brought by the DOJ (in the US) and the European Commission (in the EU) partly originated from complaints, and lobbying, by Sun Microsystems, Novell, and Netscape. Likewise, the European Commission’s case against Google was prompted by accusations from Microsoft and Oracle, among others. The European Intel case was initiated following a complaint by AMD. The list goes on.

The last couple of years have witnessed a proliferation of antitrust suits that are emblematic of this type of power tussle. For instance, Apple has been notoriously industrious in using the court system to lower the royalties that it pays to Qualcomm for LTE chips. One of the focal points of Apple’s discontent was Qualcomm’s policy of basing royalties on the end-price of devices (Qualcomm charged iPhone manufacturers a 5% royalty rate on their handset sales – and Apple received further rebates):

“The whole idea of a percentage of the cost of the phone didn’t make sense to us,” [Apple COO Jeff Williams] said. “It struck at our very core of fairness. At the time we were making something really really different.”

This pricing dispute not only gave rise to high-profile court cases but also led Apple to lobby Standards Development Organizations (“SDOs”) in a partly successful attempt to make them amend their patent policies, so as to prevent this type of pricing.

However, in a highly ironic turn of events, Apple now finds itself on the receiving end of strikingly similar allegations. At issue is the 30% commission that Apple charges for in-app purchases on the iPhone and iPad. These “high” commissions led several companies to lodge complaints with competition authorities (Spotify and Facebook, in the EU) and to file antitrust suits against Apple (Epic Games, in the US).

Of course, these complaints are couched in more sophisticated, and antitrust-relevant, reasoning. But that doesn’t alter the fact that these disputes are ultimately driven by firms trying to tilt the allocation of surplus in their favor (for a more detailed explanation, see Apple and Qualcomm).

Pushback from courts: The Qualcomm case

Against this backdrop, a string of recent cases sends a clear message to would-be plaintiffs: antitrust courts will not be drawn into rent allocation disputes that have no bearing on consumer welfare. 

The best example of this judicial trend is Qualcomm’s victory before the United States Court of Appeals for the Ninth Circuit. The case centered on the royalties that Qualcomm charged to OEMs for its Standard Essential Patents (SEPs). Both the district court and the FTC found that Qualcomm had deployed a series of tactics (rebates, refusals to deal, etc.) that enabled it to circumvent its FRAND pledges.

However, the Court of Appeals was not convinced. It found neither consumer harm nor any cognizable antitrust infringement. Instead, it held that the dispute at hand was essentially a matter of contract law:

To the extent Qualcomm has breached any of its FRAND commitments, a conclusion we need not and do not reach, the remedy for such a breach lies in contract and patent law. 

This is not surprising. From the outset, numerous critics pointed out that the case lay well beyond the narrow confines of antitrust law. The scathing dissenting statement written by Commissioner Maureen Ohlhausen is revealing:

[I]n the Commission’s 2-1 decision to sue Qualcomm, I face an extraordinary situation: an enforcement action based on a flawed legal theory (including a standalone Section 5 count) that lacks economic and evidentiary support, that was brought on the eve of a new presidential administration, and that, by its mere issuance, will undermine U.S. intellectual property rights in Asia and worldwide. These extreme circumstances compel me to voice my objections. 

In reaching its conclusion, the Court notably rejected the notion that SEP royalties should be systematically based upon the “Smallest Saleable Patent Practicing Unit” (or SSPPU):

Even if we accept that the modem chip in a cellphone is the cellphone’s SSPPU, the district court’s analysis is still fundamentally flawed. No court has held that the SSPPU concept is a per se rule for “reasonable royalty” calculations; instead, the concept is used as a tool in jury cases to minimize potential jury confusion when the jury is weighing complex expert testimony about patent damages.

Similarly, it saw no objection to Qualcomm licensing its technology at the OEM level (rather than the component level):

Qualcomm’s rationale for “switching” to OEM-level licensing was not “to sacrifice short-term benefits in order to obtain higher profits in the long run from the exclusion of competition,” the second element of the Aspen Skiing exception. Aerotec Int’l, 836 F.3d at 1184 (internal quotation marks and citation omitted). Instead, Qualcomm responded to the change in patent-exhaustion law by choosing the path that was “far more lucrative,” both in the short term and the long term, regardless of any impacts on competition. 

Finally, the Court concluded that a firm breaching its FRAND pledges did not automatically amount to anticompetitive conduct: 

We decline to adopt a theory of antitrust liability that would presume anticompetitive conduct any time a company could not prove that the “fair value” of its SEP portfolios corresponds to the prices the market appears willing to pay for those SEPs in the form of licensing royalty rates.

Taken together, these findings paint a very clear picture. The Qualcomm Court repeatedly rejected the radical idea that US antitrust law should concern itself with the prices charged by monopolists — as opposed to practices that allow firms to illegally acquire or maintain a monopoly position. The words of Learned Hand and those of Antonin Scalia (respectively, below) loom large:

The successful competitor, having been urged to compete, must not be turned upon when he wins. 

And,

To safeguard the incentive to innovate, the possession of monopoly power will not be found unlawful unless it is accompanied by an element of anticompetitive conduct.

Other courts (both in the US and abroad) have reached similar conclusions

For instance, a district court in Texas dismissed a suit brought by Continental Automotive Systems (which supplies electronic systems to the automotive industry) against a group of SEP holders. 

Continental challenged the patent holders’ decision to license their technology at the vehicle rather than component level (the allegation is very similar to the FTC’s complaint that Qualcomm licensed its SEPs at the OEM, rather than chipset level). However, following a forceful intervention by the DOJ, the Court ultimately held that the facts alleged by Continental were not indicative of antitrust injury. It thus dismissed the case.

Likewise, within weeks of the Qualcomm and Continental decisions, the UK Supreme Court also ruled in favor of SEP holders. In its Unwired Planet ruling, the Court concluded that discriminatory licenses did not automatically infringe competition law (even though they might breach a firm’s contractual obligations):

[I]t cannot be said that there is any general presumption that differential pricing for licensees is problematic in terms of the public or private interests at stake.

In reaching this conclusion, the UK Supreme Court emphasized that the determination of whether licenses were FRAND, or not, was first and foremost a matter of contract law. In the case at hand, the most important guide to making this determination was the internal rules of the relevant SDO (as opposed to competition case law):

Since price discrimination is the norm as a matter of licensing practice and may promote objectives which the ETSI regime is intended to promote (such as innovation and consumer welfare), it would have required far clearer language in the ETSI FRAND undertaking to indicate an intention to impose the more strict, “hard-edged” non-discrimination obligation for which Huawei contends. Further, in view of the prevalence of competition laws in the major economies around the world, it is to be expected that any anti-competitive effects from differential pricing would be most appropriately addressed by those laws

All of this ultimately led the Court to rule in favor of Unwired Planet, thus dismissing Huawei’s claims that it had infringed competition law by breaching its FRAND pledges. 

In short, courts and antitrust authorities on both sides of the Atlantic have repeatedly, and unambiguously, concluded that pricing disputes (albeit in the specific context of technological standards) are generally a matter of contract law. Antitrust/competition law intercedes only when unfair/excessive/discriminatory prices are both caused by anticompetitive behavior and result in anticompetitive injury.

Apple’s loss is… Apple’s gain

Readers might wonder how the above cases relate to Apple’s App Store. But, on closer inspection, the parallels are numerous. As explained above, courts have repeatedly stressed that antitrust enforcement should not concern itself with the allocation of surplus between commercial partners. Yet that is precisely what Epic Games’ suit against Apple is all about.

Indeed, Epic’s central claim is not that it is somehow foreclosed from Apple’s App Store (for example, because Apple might have agreed to exclusively distribute the games of one of Epic’s rivals). Instead, all of its objections are down to the fact that it would like to access Apple’s store under more favorable terms:

Apple’s conduct denies developers the choice of how best to distribute their apps. Developers are barred from reaching over one billion iOS users unless they go through Apple’s App Store, and on Apple’s terms. […]

Thus, developers are dependent on Apple’s noblesse oblige, as Apple may deny access to the App Store, change the terms of access, or alter the tax it imposes on developers, all in its sole discretion and on the commercially devastating threat of the developer losing access to the entire iOS userbase. […]

By imposing its 30% tax, Apple necessarily forces developers to suffer lower profits, reduce the quantity or quality of their apps, raise prices to consumers, or some combination of the three.

And the parallels with the Qualcomm litigation do not stop there. Epic is effectively asking courts to make Apple monetize its platform at a different level than the one that it chose to maximize its profits (no more monetization at the app store level). Similarly, Epic Games omits any suggestion of profit sacrifice on the part of Apple — even though it is a critical element of most unilateral conduct theories of harm. Finally, Epic is challenging conduct that is both the industry norm and emerged in a highly competitive setting.

In short, all of Epic’s allegations are about monopoly prices, not monopoly maintenance or monopolization. Accordingly, just as the SEP cases discussed above were plainly beyond the outer bounds of antitrust enforcement (something that the DOJ repeatedly stressed with regard to the Qualcomm case), so too is the current wave of antitrust litigation against Apple. When all is said and done, Apple might thus be relieved that Qualcomm was victorious in their antitrust confrontation. Indeed, the legal principles that caused its demise against Qualcomm are precisely the ones that will, likely, enable it to prevail against Epic Games.

Speaking about his new book in a ProMarket interview, David Dayen inadvertently captures what is perhaps the essential disconnect between antitrust reformers (populists, neo-Brandeisians, hipsters, whatever you may call them) and those of us who are more comfortable with the antitrust status quo (whatever you may call us). He says: “The antitrust doctrine that we’ve seen over the last 40 years simply does not match the lived experience of people.”

Narratives of Consumer Experience of Markets

This emphasis on “lived experience” runs through Dayen’s antitrust perspective. Citing to Hal Singer’s review of the book, the interview notes that “the heart of Dayen’s book is the personal accounts of ordinary Americans—airline passengers, hospital patients, farmers, and small business owners—attempting to achieve a slice of the American dream and facing insurmountable barriers in the form of unaccountable private monopolies.” As Singer notes in his review, “Dayen’s personalized storytelling, free of any stodgy regression analysis, is more likely to move policymakers” than are traditional economic arguments.

Dayen’s focus on individual narratives — of the consumer’s lived experience — is fundamentally different from the traditional antitrust economist’s perspective on competition and the market. It is worth exploring the differences between the two. The basic argument that I make below is that Dayen is right but also that he misunderstands the purpose of competition in a capitalist economy. A robustly competitive market is a brutal rat race that places each individual on an accelerating treadmill. There is no satiation or satisfaction for the individual consumer in these markets. But it is this very lack of satisfaction, this endless thirst for more, that makes competitive markets so powerful, and ultimately beneficial, for consumers.

This is the fundamental challenge and paradox of capitalism. Satisfaction requires a perspective that most consumers often lack, and that many never will have. It requires the ability to step off that treadmill occasionally and to look at how far society and individual welfare have come, even if individually one feels like one has not moved at all. It requires recognizing that the alternative to an uncomfortable flight to visit family isn’t a comfortable one, but an unaffordable one; that the alternative to low-cost, processed foods isn’t abundant higher-quality food, but greater poverty for those who already can least afford food; that the alternative to a startup being beholden to Google’s and Amazon’s terms of service isn’t a market in which it has boundless access to these platforms’ infrastructure, but one in which each startup needs to engineer its own infrastructure entirely. In all of these cases, the fundamental tradeoff is between having something that is less perfect than an imagined ideal of it, and not having it at all.

What Dayen refers to as consumers’ “lived experience” is really their “perceived experience.” This is important to how markets work. Competition is driven by consumers’ perception that things could be better (and by entrepreneurs’ perception that they can make it so). This perception is what keeps us on the treadmill. Consumers don’t look to past generations and say “wow, by nearly every measure my life can be better than theirs with less effort!” They focus on what they don’t have yet, on the seemingly better lives of their contemporaries.

This description of markets may sound grotesquely dehumanizing. To the extent that it is, that is because we live in a world of scarcity. There will always be tradeoffs, and in a very real sense no consumer will ever have everything that she needs, let alone everything that she wants.

On the flip side, this is what drives markets to make consumers better off. Consumers’ wants drive producers’ factories and innovators’ minds. There is no supply curve without a demand curve. And consumers are able to satisfy their own needs by becoming producers who work to satisfy the wants and needs of others. 

A Fair Question: Are Markets Worth It?

Dayen’s perspective on this description of markets, shared with his fellow reform-minded anti-antitrust crusaders, is that the typical consumers’ perceived experience of the market demonstrates that markets don’t work — that they have been captured by monopolists seeking to extract every ounce of revenue from each individual consumer. But this is not a story of monopolies. It is more plainly the story of markets. What Dayen identifies as a problem with the markets really is just the markets working as they are supposed to.

If this is just how markets work, it is fair to ask whether they are worth it. Importantly, those of us who answer “yes” need not be blind to or dismissive of concerns such as Dayen’s — to the concerns of the typical consumer. Economists have long recognized that capitalist markets are about allocative efficiency, not distributive efficiency — about making society as a whole as wealthy as possible but not about making sure that that wealth is fairly distributed. 

The antitrust reform movement is driven by advocates who long for a world in which everyone is poorer but feels more equal, as opposed to what they perceive as a world in which a few monopolists are extremely wealthy and everyone else feels poor. Their perception of this as the but-for world is not unreasonable, but it is also not accurate. The better world is the one with thriving, prosperous markets, in which consumers broadly feel that they share in this prosperity. It may be the case that such a world has some oligopolies and even monopolies — that is what economic efficiency sometimes looks like.

But those firms’ prosperity need not be adverse to consumers’ experience of the market. The challenging question is how we achieve this outcome. But that is a question of politics and macroeconomic policy, and of corporate social policy. It is a question of national identity, whether consumers’ perception of the economic treadmill can pivot from one of perceived futility to one of recognizing their lived contributions to society. It is one that antitrust law as it exists today contributes to answering, but not one that antitrust law on its own can ever answer.

On the other hand, were we to follow the populists’ lead and turn antitrust into a remedy for the perceived maladies of the market, we would risk breaking the engine that improves consumers’ actual lived experience. The alternative to an antitrust driven by economic analysis, one that errs on the side of not disrupting markets over redressing perceived injuries, is an antitrust in which markets are beholden to the whims of politicians and enforcement officials. This is a world in which litigation is used by politicians to make it appear they are delivering on impossible promises, in which litigation is used to displace blame for politicians’ policy failures, and in which litigation is used to distract from socio-political events entirely unrelated to the market.

Concerns such as Dayen’s are timeless and not unreasonable. But a reflexive response is not the answer to such concerns. Rather, the response always must be to ask “opposed to what?” What is the but-for world? Here, Dayen and his peers suffer both Type I and Type II errors. They misdiagnose antitrust and non-competitive markets as the cause of their perceived problems. And they are overly confident in their proposed solutions to those problems, not recognizing the real harms that their proposed politicization of antitrust and markets poses.

In an age of antitrust populism on both ends of the political spectrum, federal and state regulators face considerable pressure to deploy the antitrust laws against firms that have dominant market shares. Yet federal case law makes clear that merely winning the race for a market is an insufficient basis for antitrust liability. Rather, any plaintiff must show that the winner either secured or is maintaining its dominant position through practices that go beyond vigorous competition. Any other principle would inhibit the competitive process that the antitrust laws are designed to promote. Federal judges who enjoy life tenure are far more insulated from outside pressures and therefore more likely to demand evidence of anticompetitive practices as a predicate condition for any determination of antitrust liability.

This separation of powers between the executive branch, which prosecutes alleged infractions of the law, and the judicial branch, which polices the prosecutor, is the simple genius behind the divided system of government generally attributed to the eighteenth-century French thinker Montesquieu. The practical wisdom of this fundamental principle of political design, which runs throughout the U.S. Constitution, can be observed in full force in the current antitrust landscape, in which the federal courts have acted as a bulwark against several contestable enforcement actions by antitrust regulators.

In three headline cases brought by the Department of Justice or the Federal Trade Commission since 2017, the prosecutorial bench has struck out in court. Under the exacting scrutiny of the judiciary, government litigators failed to present sufficient evidence that a dominant firm had engaged in practices that caused, or were likely to cause, significant anticompetitive effects. In each case, these enforcement actions, applauded by policymakers and commentators who tend to follow “big is bad” intuitions, foundered when assessed in light of judicial precedent, the factual record, and the economic principles embedded in modern antitrust law. An ongoing suit, filed by the FTC this year more than 18 months after the closing of the targeted acquisition, exhibits similar factual and legal infirmities.

Strike 1: The AT&T/Time-Warner Transaction

In response to the announcement of AT&T’s $85.4 billion acquisition of Time Warner, the DOJ filed suit in 2017 to prevent the formation of a dominant provider in home-video distribution that would purportedly deny competitors access to “must-have” content. As I have observed previously, this theory of the case suffered from two fundamental difficulties. 

First, content is an abundant and renewable resource, so it is hard to see how AT&T+TW could meaningfully foreclose competitors’ access to this necessary input. Even in the hypothetical case of potentially “must-have” content, it was unclear whether it would be economically rational for post-acquisition AT&T regularly to deny access to other distributors, given that doing so would imply an immediate and significant loss in licensing revenues without any clearly offsetting future gain in revenues from new subscribers.

Second, home-video distribution is a market lapsing rapidly into obsolescence as content monetization shifts from home-based viewing to a streaming environment in which consumers expect “anywhere, everywhere” access. The blockbuster acquisition was probably best understood as a necessary effort to adapt to this new environment (already populated by several major streaming platforms), rather than an otherwise puzzling strategy to spend billions to capture a market on the verge of commercial irrelevance. 

Strike 2: The Sabre/Farelogix Acquisition

In 2019, the DOJ filed suit to block the $360 million acquisition of Farelogix by Sabre, one of three leading airline booking platforms, on the ground that it would substantially lessen competition. The factual basis for this legal diagnosis was unclear. In 2018, Sabre earned approximately $3.9 billion in worldwide revenues, compared to $40 million for Farelogix. Given this drastic difference in market share, and the almost trivial share attributable to Farelogix, it is difficult to fathom how the DOJ could credibly assert that the acquisition “would extinguish a crucial constraint on Sabre’s market power.” 

To use a now much-discussed theory of antitrust liability, it might nonetheless be argued that Farelogix posed a “nascent” competitive threat to the Sabre platform. That is: while Farelogix is small today, it may become big enough tomorrow to pose a threat to Sabre’s market leadership. 

But that theory runs straight into a highly inconvenient fact. Farelogix was founded in 1998 and, during the ensuing two decades, had neither achieved broad adoption of its customized booking technology nor succeeded in offering airlines a viable pathway to bypass the three major intermediary platforms. The proposed acquisition therefore seems best understood as a mutually beneficial transaction in which a smaller (and not very nascent) firm elects to monetize its technology by embedding it in a leading platform that seeks to innovate by acquisition. Robust technology ecosystems do this all the time, efficiently exploiting the natural complementarities between a smaller firm’s “out of the box” innovation and the capital-intensive infrastructure of an incumbent. (Postscript: While the DOJ lost this case in federal court, Sabre elected in May 2020 not to close following similarly puzzling opposition by British competition regulators.)

Strike 3: FTC v. Qualcomm

The divergence of theories of anticompetitive risk from market realities is vividly illustrated by the landmark suit filed by the FTC in 2017 against Qualcomm. 

The litigation pursued nothing less than a wholesale reengineering of the IP licensing relationships between innovators and implementers that underlie the global smartphone market. Those relationships principally consist of device-level licenses between IP innovators such as Qualcomm and device manufacturers and distributors such as Apple. This structure efficiently collects remuneration from the downstream segment of the supply chain for upstream firms that invest in pushing forward the technology frontier. The FTC thought otherwise and pursued a remedy that would have required Qualcomm to offer licenses to its direct competitors in the chip market and to rewrite its existing licenses with device producers and other intermediate users on a component, rather than device, level. 

Remarkably, these drastic forms of intervention into private-ordering arrangements rested on nothing more than what former FTC Commissioner Maureen Ohlhausen once appropriately called a “possibility theorem.” The FTC deployed a mostly theoretical argument that Qualcomm had extracted an “unreasonably high” royalty that had potentially discouraged innovation, impeded entry into the chip market, and inflated retail prices for consumers. Yet these claims run contrary to all available empirical evidence, which indicates that the mobile wireless device market has exhibited since its inception declining quality-adjusted prices, increasing output, robust entry into the production market, and continuous innovation. The mismatch between the government’s theory of market failure and the actual record of market success over more than two decades challenges the policy wisdom of disrupting hundreds of existing contractual arrangements between IP licensors and licensees in a thriving market. 

The FTC nonetheless secured from the district court a sweeping order that would have had precisely this disruptive effect, including imposing a “duty to deal” that would have required Qualcomm to license directly its competitors in the chip market. The Ninth Circuit stayed the order and, on August 11, 2020, issued an unqualified reversal, stating that the lower court had erroneously conflated “hypercompetitive” (good) with anticompetitive (bad) conduct and observing that “[t]hroughout its analysis, the district court conflated the desire to maximize profits with an intent to ‘destroy competition itself.’” In unusually direct language, the appellate court also observed (as even the FTC had acknowledged on appeal) that the district court’s ruling was incompatible with the Supreme Court’s ruling in Aspen Skiing Co. v. Aspen Highlands Skiing Corp., which strictly limits the circumstances in which a duty to deal can be imposed. In some cases, it appears that additional levels of judicial review are necessary to protect antitrust law against not only administrative but judicial overreach.

Axon v. FTC

For the most explicit illustration of the interface between Montesquieu’s principle of divided government and the risk posed to antitrust law by cases of prosecutorial excess, we can turn to an unusual and ongoing litigation, Axon v. FTC.

The HSR Act and Post-Consummation Merger Challenges

The HSR Act provides regulators with the opportunity to preemptively challenge acquisitions and related transactions on antitrust grounds prior to those transactions having been consummated. Since its enactment in 1976, this statutory innovation has laudably increased dealmakers’ ability to close transactions with a high level of certainty that regulators would not belatedly seek to “unscramble the egg.” While the HSR Act does not foreclose this contingency, since a regulatory failure to challenge a transaction only indicates current enforcement intentions, it is probably fair to say that M&A dealmakers generally assume that regulators would reverse course only in exceptional circumstances. In turn, the low prospect of after-the-fact regulatory intervention encourages the efficient use of M&A transactions for the purpose of shifting corporate assets to users that value those assets most highly.

The FTC’s Belated Attack on the Axon/Vievu Acquisition

Dealmakers may be revisiting that understanding in the wake of the FTC’s decision in January 2020 to challenge the acquisition of Vievu by Axon, both manufacturers of body-worn camera equipment and related data-management software for law enforcement agencies. The acquisition had closed in May 2018 but was not reported under HSR because it fell well below the reporting threshold. Given the total transaction value of $7 million, the passage of more than 18 months since closing, and the insolvency or near-insolvency of the target company, it is far from obvious that the Axon acquisition posed a material competitive risk sufficient to unsettle the expectation that regulators will typically not challenge a consummated transaction, especially in the case of what is a micro-sized nebula in the M&A universe.

These concerns are heightened by the fact that the FTC suit relies on a debatably narrow definition of the relevant market (body-camera equipment and related “cloud-based” data management software for police departments in large metropolitan areas, rather than a market that encompassed more generally defined categories of body-worn camera equipment, law enforcement agencies, and data management services). Even within this circumscribed market, there are apparently several companies that offer related technologies and an even larger group that could plausibly enter in response to perceived profit opportunities. Despite this contestable legal position, Axon’s court filing states that the FTC offered to settle the suit on stiff terms: Axon must agree to divest itself of the Vievu assets and to license all of Axon’s pre-transaction intellectual property to the buyer of the Vievu assets. This effectively amounts to an opportunistic use of the antitrust merger laws to engage in post-transaction market reengineering, rather than merely blocking an acquisition to maintain the pre-transaction status quo.

Does the FTC Violate the Separation of Powers?

In a provocative strategy, Axon has gone on the offensive and filed suit in federal district court to challenge, on constitutional grounds, the long-standing internal administrative proceeding through which the FTC’s antitrust claims are initially adjudicated. Unlike the DOJ, the FTC (absent settlement) makes its first stop in the litigation process not in a federal district court but in an internal proceeding before an administrative law judge (“ALJ”), whose ruling can then be appealed to the Commission. Axon is effectively arguing that this administrative internalization of the judicial function violates the separation-of-powers principle as implemented in the U.S. Constitution.

On a clean slate, Axon’s claim would be eminently reasonable. The fact that FTC-paid personnel sit on both sides of the internal adjudicative process as prosecutor (the FTC litigation team) and judge (the ALJ and the Commissioners) locates the executive and judicial functions in the hands of a single administrative entity. (To be clear, the Commission’s rulings are appealable to federal court, albeit at significant cost and delay.) In any event, a court presented with Axon’s claim—as of this writing, the Ninth Circuit (taking the case on appeal by Axon)—is not writing on a clean slate and is most likely reluctant to accept a claim that would trigger challenges to the legality of similarly structured adjudicative processes at other agencies. Nonetheless, Axon’s argument does raise important concerns as to whether certain elements of the FTC’s adjudicative mechanism (as distinguished from the very existence of that mechanism) could be refined to mitigate the conflicts of interest that arise in its current form.

Conclusion

Antitrust vigilance certainly has its place, but it also has its limits. Given the aspirational language of the antitrust statutes and the largely unlimited structural remedies to which antitrust litigation can lead, there is an inevitable risk of prosecutorial overreach that can betray the fundamental objective of protecting consumer welfare. Applied to the antitrust context, the separation-of-powers principle mitigates this risk by subjecting enforcement actions to judicial examination, which is in turn disciplined by the constraints of appellate review and stare decisis. A rich body of federal case law implements this review function by anchoring antitrust in a decisionmaking framework that promotes the public’s interest in deterring business practices that endanger the competitive process behind a market-based economy. As illustrated by the recent string of failed antitrust suits, and the ongoing FTC litigation against Axon, that same decisionmaking framework can also protect the competitive process against regulatory practices that pose this same type of risk.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Eric Fruits (Chief Economist, International Center for Law & Economics).]

Much of the world of competition policy has focused on mergers in the COVID-19 era. Some observers see mergers as one way of saving distressed but valuable firms; others have called for a merger moratorium out of fear that more mergers will lead to increased concentration and market power. In the meantime, there has been a growing push for increased nationalization of a wide range of businesses and industries.

In most cases, the call for a government takeover is not a reaction to the public health and economic crises associated with coronavirus. Instead, COVID-19 is a convenient excuse to pursue long-sought policies.

Last year, well before the pandemic, New York mayor Bill de Blasio called for a government takeover of electrical grid operator ConEd because he was upset over blackouts during a heatwave. Earlier that year, he threatened to confiscate housing units from private landlords: “we will seize their buildings, and we will put them in the hands of a community nonprofit that will treat tenants with the respect they deserve.”

With that sort of track record, it should come as no surprise that the mayor proposed a government takeover of key industries to address COVID-19: “This is a case for a nationalization, literally a nationalization, of crucial factories and industries that could produce the medical supplies to prepare this country for what we need.” Dana Brown, director of The Next System Project at The Democracy Collaborative, agrees: “We should nationalize what remains of the American vaccine industry now, thereby assuring that any coronavirus vaccines produced can be made as widely available and as inexpensive as possible.”

Dan Sullivan in the American Prospect suggests the U.S. should nationalize all the airlines. Some have gone so far as calling for nationalization of the U.S. oil industry.

On the one hand, it’s clear that de Blasio and Brown have no confidence in the price system to efficiently allocate resources. Alternatively, they may have overconfidence in the political/bureaucratic system to efficiently, and “equitably,” distribute resources. On the other hand, as Daniel Takash points out in an earlier post, both pharmaceuticals and oil are relatively unpopular industries with many Americans, in which case the threat of a government takeover carries a big dose of populist score-settling:

Yet last year a Gallup poll found that of 25 major industries, the pharmaceutical industry was the most unpopular–trailing behind fossil fuels, lawyers, and even the federal government. 

In the early days of the pandemic, France’s finance minister Bruno Le Maire promised to protect “big French companies.” The minister identified a range of actions under consideration: “That can be done by recapitalization, that can be done by taking a stake, I can even use the term nationalization if necessary.” While he did not mention any specific companies, it has been speculated that Air France-KLM may be a target.

The Italian government is expected to nationalize Alitalia soon. The airline has been in state administration since May 2017, and the Italian government will have 100% control of the airline by June. Last week, the German government took a 20% stake in Lufthansa, in what has been characterized as a “temporary partial nationalization.” In Canada, Prime Minister Justin Trudeau has been coy about speculation that the government might nationalize Air Canada. 

Obviously, these takeovers have “bailout” written all over them, and bailouts have their own anticompetitive consequences that can be worse than those associated with mergers. For example, Ryanair announced it will contest the aid package for Lufthansa. Ryanair chief executive Michael O’Leary claims the aid will allow Lufthansa to “engage in below-cost selling” and make it harder for Ryanair and its rival low-cost carrier EasyJet to compete.

There is also a bit of a “national champion” aspect to the takeovers. Each of the potential targets is (or was) considered its nation’s flagship airline. World Bank economists Tanja Goodwin and Georgiana Pop highlight the risk of nationalization harming competition:

These [sic] should avoid rescuing firms that were already failing. …  But governments should also refrain from engaging in production or service delivery in industries that can be served by the private sector. The role of SOEs [state owned enterprises] should be assessed in order to ensure that bailout packages are not exclusively and unnecessarily favoring a dominant SOE.

To be sure, COVID-19-related mergers could raise the specter of increased market power post-pandemic. But this risk must be balanced against the risks posed by a merger moratorium, which include widespread bankruptcies (that’s another post) and the possibility of nationalization of firms and industries. Either option can reduce competition, bringing harm to consumers, employees, and suppliers.