
Federal Trade Commission (FTC) Chair Lina Khan’s Sept. 22 memorandum to FTC commissioners and staff—entitled “Vision and Priorities for the FTC” (VP Memo)—offers valuable insights into the chair’s strategy and policy agenda for the commission. Unfortunately, it lacks an appreciation for the limits of antitrust and consumer-protection law; it also would have benefited from greater regulatory humility. After summarizing the VP Memo’s key sections, I set forth four key takeaways from this rather unusual missive.

Introduction

The VP Memo begins appropriately enough, with praise for commission staff and a call to focus on key FTC strategic priorities and operational objectives. So far, so good. Regrettably, the introductory section is the memo’s strongest feature.

Strategic Approach

The VP Memo’s first substantive section, which lays out Khan’s strategic approach, raises questions that require further clarification.

This section is long on glittering generalities. First, it begins with the need to take a “holistic approach” that recognizes that law violations harm workers and independent businesses, as well as consumers. Legal violations that reflect “power asymmetries” and harm to “marginalized communities” are emphasized, but not defined. Is the chair proposing new enforcement standards that would supplement or displace consumer-welfare enhancement?

Second, similar ambiguity surrounds the need to target enforcement efforts toward “root causes” of unlawful conduct, rather than “one-off effects.” Root causes are said to involve “structural incentives that enable unlawful conduct” (such as conflicts of interest, business models, or structural dominance), as well as “upstream” examination of firms that profit from such conduct. How these observations may be “operationalized” into case-selection criteria (and why these observations are superior to alternative means for spotting illegal behavior) is left unexplained.

Third, the section endorses a more “rigorous and empiricism-driven approach” to the FTC’s work, a “more interdisciplinary approach” that incorporates “a greater range of analytical tools and skillsets.” This recommendation is not problematic on its face, though it is a bit puzzling. The FTC already relies heavily on economics and empirical work, as well as input from technologists, advertising specialists, and other subject matter experts, as required. What other skillsets are being endorsed? (A more far-reaching application of economic thinking in certain consumer-protection cases would be helpful, but one suspects that is not the point of the paragraph.)

Fourth, the need to be especially attentive to next-generation technologies, innovations, and nascent industries is trumpeted. Fine, but the FTC already does that in its competition and consumer-protection investigations.

Finally, the need to “democratize” the agency is highlighted, to keep the FTC in tune with “the real problems that Americans are facing in their daily lives and using that understanding to inform our work.” This statement seems to imply that the FTC is not adequately dealing with “real problems.” The FTC, however, has not been designated by Congress to be a general-purpose problem solver. Rather, the agency has a specific statutory remit to combat anticompetitive activity and unfair acts or practices that harm consumers. Ironically, under Chair Khan, the FTC has abruptly implemented major changes in key areas (including rulemaking, the withdrawal of guidance, and merger-review practices) without prior public input or consultation among the commissioners (see, for example, here)—actions that could be deemed undemocratic.

Policy Priorities

The memo’s brief discussion of Khan’s policy priorities raises three significant concerns.

First, Khan stresses the “need to address rampant consolidation and the dominance that it has enabled across markets” in the areas of merger enforcement and dominant-firm scrutiny. The claim that competition has substantially diminished has been critiqued by leading economists, and is dubious at best (see, for example, here). This flat assertion is jarring, and in tension with the earlier call for more empirical analysis. Khan’s call for revision of the merger guidelines (presumably both horizontal and vertical), in tandem with the U.S. Justice Department (DOJ), will be headed for trouble if it departs from the economic reasoning that has informed prior revisions of those guidelines. (The memo’s critical and cryptic reference to the “narrow and outdated framework” of recent guidelines provides no clue as to the new guidelines format that Chair Khan might deem acceptable.) 

Second, the chair supports prioritizing “dominant intermediaries” and “extractive business models,” while raising concerns about “private equity and other investment vehicles” that “strip productive capacity” and “target marginalized communities.” No explanation is given as to why such prioritization will best utilize the FTC’s scarce resources to root out harmful anticompetitive behavior and consumer-protection harms. By assuming from the outset that certain “unsavory actors” merit prioritization, this discussion also is in tension with an empirical approach that dispassionately examines the facts in determining how resources should best be allocated to maximize the benefits of enforcement.

Third, the chair wants to direct special attention to “one-sided contract provisions” that place “[c]onsumers, workers, franchisees, and other market participants … at a significant disadvantage.” Non-competes, repair restrictions, and exclusionary clauses are mentioned as examples. What is missing is a realistic acknowledgement of the legal complications that would be involved in challenging such provisions, and a recognition of the welfare benefits that such restraints could generate under many circumstances. In that vein, the perceived inequalities in bargaining power alluded to in the discussion do not, in and of themselves, constitute antitrust or consumer-protection violations.

Operational Objectives

The closing section, on “operational objectives,” is not particularly troublesome. It supports an “integrated approach” to enforcement and policy tools, and endorses “breaking down silos” between competition (BC) and consumer-protection (BCP) staff. (Of course, while greater coordination between BC and BCP occasionally may be desirable, competition and consumer-protection cases will continue to feature significant subject matter and legal differences.) It also calls for greater diversity in recruitment and a greater staffing emphasis on regional offices. Finally, it endorses bringing in more experts from “outside disciplines” and more rigorous analysis of conduct, remedies, and market studies. These points, although not controversial, do not directly come to grips with questions of optimal resource allocation within the agency, which the FTC will have to address.

Evaluating the VP Memo: 4 Key Takeaways

The VP Memo is a highly aggressive call-to-arms that embodies Chair Khan’s full-blown progressive vision for the FTC. There are four key takeaways:

  1. Promoting the consumer interest, which for decades has been the overarching principle in both FTC antitrust and consumer-protection cases (which address different sources of consumer harm), is passé. Protecting consumers is only referred to in passing. Rather, the concerns of workers, “honest businesses,” and “marginalized communities” are emphasized. Courts will, however, continue to focus on established consumer-welfare and consumer-harm principles in ruling on antitrust and consumer-protection cases. If the FTC hopes to have any success in winning future cases based on novel forms of harm, it will have to ensure that its new case-selection criteria also emphasize behavior that harms consumers.
  2. Despite multiple references to empiricism and analytical rigor, the VP Memo ignores the potential economic-welfare benefits of the categories of behavior it singles out for condemnation. The memo’s critiques of “middlemen,” “gatekeepers,” “extractive business models,” “private equity,” and various types of vertical contracts, reference conduct that frequently promotes efficiency, generating welfare benefits for producers and consumers. Even if FTC lawsuits or regulations directed at these practices fail, the business uncertainty generated by the critiques could well disincentivize efficient forms of conduct that spark innovation and economic growth.
  3. The VP Memo in effect calls for new enforcement initiatives that challenge conduct different in nature from FTC cases brought in recent decades. This implicit support for lawsuits that would go well beyond existing judicial interpretations of the FTC’s competition and consumer-protection authority reflects unwarranted hubris. This April, in the AMG case, the U.S. Supreme Court unanimously rejected the FTC’s argument that it had implicit authority to obtain monetary relief under Section 13(b) of the FTC Act, which authorizes permanent injunctions, despite the fact that several appellate courts had found such authority existed. The Court stated that the FTC could go to Congress if it wanted broader authority. This decision bodes ill for any future FTC efforts to expand its authority into new realms of “unfair” activity through “creative” lawyering.
  4. Chair Khan’s unilateral statement of her policy priorities in the VP Memo bespeaks a lack of humility. It ignores a long history of consensus FTC statements on agency priorities, reflected in numerous commission submissions to congressional committees in connection with oversight hearings. Although commissioners have disagreed on specific policy statements or enforcement complaints, general “big picture” policy statements to congressional overseers typically have been adopted by unanimous vote. By ignoring this longstanding bipartisan tradition, the VP Memo will tend to undermine the FTC’s image as a serious deliberative body that seeks to reconcile varying viewpoints (while recognizing that, at times, different positions will be expressed on particular matters). If the FTC acts more and more like a one-person executive agency, why does it need to be “independent,” and, indeed, what special purpose does it serve as a second voice on federal antitrust matters? Under seeming unilateral rule, the prestige of the FTC before federal courts may suffer, undermining its effectiveness in defending enforcement actions and promulgating rules. This will particularly be the case if more and more FTC decisions are taken by 3-2 votes that appear to reflect little or no consultation with minority commissioners.

Conclusion

The VP Memo reflects a lack of humility and strategic insight. It sets forth priorities that are disconnected from the traditional core of the FTC’s consumer-welfare-centric mission. It emphasizes new sorts of initiatives that are likely to “crash and burn” in the courts, unless they are better anchored to established case law and FTC enforcement principles. As a unilateral missive announcing an unprecedented change in policy direction, the memo also undermines the tradition of collegiality and reasoned debate that generally has characterized the commission’s activities in recent decades.

As such, the memo will undercut, not advance, the effectiveness of FTC advocacy before the courts. It will also undermine the FTC’s reputation as a truly independent deliberative body. Accordingly, one may hope that Chair Khan will rethink her approach, withdraw the VP Memo, and work with all of her fellow commissioners to recraft a new consensus policy document.   

[This post adapts elements of “Technology Mergers and the Market for Corporate Control,” forthcoming in the Missouri Law Review.]

In recent years, a growing chorus of voices has argued that existing merger rules fail to apprehend competitively significant mergers, either because they fall below existing merger-filing thresholds or because they affect innovation in ways that are purportedly ignored.

These fears are particularly acute in the pharmaceutical and tech industries, where several high-profile academic articles and reports claim to have identified important gaps in current merger-enforcement rules, particularly with respect to acquisitions involving nascent and potential competitors (here, here, and here, among many others).

Such fears have led activists, lawmakers, and enforcers to call for tougher rules, including the introduction of more stringent merger-filing thresholds and other substantive changes, such as the inversion of the burden of proof when authorities review mergers and acquisitions involving digital platforms.

However, as we discuss in a recent working paper—forthcoming in the Missouri Law Review and available on SSRN—these proposals tend to overlook the important tradeoffs that would ensue from attempts to decrease the number of false positives under existing merger rules and thresholds.

The paper draws from two key strands of economic literature that are routinely overlooked (or summarily dismissed) by critics of the status quo.

For a start, antitrust enforcement is not costless. In the case of merger enforcement, not only is it expensive for agencies to detect anticompetitive deals but, more importantly, overbearing rules may deter beneficial merger activity that creates value for consumers.

Second, critics tend to overlook the possibility that incumbents’ superior managerial or other capabilities (i.e., what made them successful in the first place) make them the ideal acquisition partners for entrepreneurs and startup investors looking to sell.

The result is a body of economic literature that focuses almost entirely on hypothetical social costs, while ignoring the redeeming benefits of corporate acquisitions, as well as the social cost of enforcement.

Kill Zones

One of the most significant allegations leveled against large tech firms is that their very presence in a market may hinder investments, entry, and innovation, creating what some have called a “kill zone.” The strongest expression in the economic literature of this idea of a kill zone stems from a working paper by Sai Krishna Kamepalli, Raghuram Rajan, and Luigi Zingales.

The paper makes two important claims, one theoretical and one empirical. From a theoretical standpoint, the authors argue that the prospect of an acquisition by a dominant platform deters consumers from joining rival platforms, and that this, in turn, hampers the growth of these rivals. The authors then test a similar hypothesis empirically. They find that acquisitions by a dominant platform—such as Google or Facebook—decrease investment levels and venture capital deals in markets that are “similar” to that of the target firm.

But both findings are problematic. For a start, Zingales and his co-authors’ theoretical model is premised on questionable assumptions about the way in which competition develops in the digital space. The first is that early adopters of new platforms—called “techies” in the authors’ parlance—face high switching costs because of their desire to learn these platforms in detail. As an initial matter, it appears facially contradictory that “techies” are both the group with the highest switching costs and the group that switches the most. The authors further assume that “techies” would incur lower adoption costs if they remained on the incumbent platform and waited for the rival platform to be acquired.

Unfortunately, while these key behavioral assumptions drive the results of the theoretical model, the paper presents no evidence to support their presence in real-world settings. In that sense, the authors commit the same error as previous theoretical work concerning externalities, which has tended to overestimate their frequency.

Second, the empirical analysis put forward in the paper is unreliable for policymaking purposes. The authors notably find that:

[N]ormalized VC investments in start-ups in the same space as the company acquired by Google and Facebook drop by over 40% and the number of deals falls by over 20% in the three years following an acquisition.

However, the results of this study are derived from the analysis of only nine transactions. The study also fails to clearly show that firms in the treatment and control groups are qualitatively similar. In a nutshell, the study compares industry acquisitions exceeding $500 million to Facebook and Google’s acquisitions that exceed that amount. This does not tell us whether the mergers in both groups involved target companies with similar valuations or similar levels of maturity. This does not necessarily invalidate the results, but it does suggest that policymakers should be circumspect in interpreting those results.

Finally, the paper fails to demonstrate evidence that existing antitrust regimes fail to achieve an optimal error-cost balance. The central problem is that the paper has indeterminate welfare implications. For instance, as the authors note, the declines in investment in spaces adjacent to the incumbent platforms occurred during a time of rapidly rising venture capital investment, both in terms of the number of deals and dollars invested. It is entirely plausible that venture capital merely shifted to other sectors.

Put differently, on its own terms, the evidence merely suggests that acquisitions by Google and Facebook affected the direction of innovation, not its overall rate. And there is little to suggest that this shift was suboptimal, from a welfare standpoint.

In short, as the authors themselves conclude: “[i]t would be premature to draw any policy conclusion on antitrust enforcement based solely on our model and our limited evidence.”

Mergers and Potential Competition

Scholars have also posited more direct effects from acquisitions of startups or nascent companies by incumbent technology market firms.

Some scholars argue that incumbents might acquire rivals that do not yet compete with them directly, in order to reduce the competitive pressure they will face in the future. In his paper “Potential Competition and Antitrust Analysis: Monopoly Profits Exceed Duopoly Profits,” Steven Salop argues:

Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.

However, these antitrust theories of harm suffer from several important flaws. They rest upon several restrictive assumptions that may not hold in real-world settings. Most are premised on the notion that, in a given market, monopoly profits generally exceed joint duopoly profits. This allegedly makes it profitable, and mutually advantageous, for an incumbent to protect its monopoly position by preemptively acquiring potential rivals.

Accordingly, under these theories, anticompetitive mergers are only possible when the acquired rival could effectively challenge the incumbent. But these are, of course, only potential challengers; there is no guarantee that any one of them could or would mount a viable competitive threat.

Less obviously, it must also be the case that the rival can hope to share only duopoly profits, as opposed to completely overthrowing the incumbent or surpassing it with a significantly larger share of the market. Where competition is “for the market” itself, monopoly maintenance would fail to explain a rival’s decision to sell. Because there would be no asymmetry between the expected profits of the incumbent and the rival, monopoly maintenance alone would not give rise to mutually advantageous deals.

Second, potential competition does not always increase consumer welfare.  Indeed, while the presence of potential competitors might increase price competition, it can also have supply-side effects that cut in the opposite direction.

For example, as Nobel laureate Joseph Stiglitz observed, a monopolist threatened by potential competition may invest in socially wasteful R&D efforts or entry-deterrence mechanisms, and it may operate at below-optimal scale in anticipation of future competitive entry.

There are also pragmatic objections. Analyzing a merger’s effect on potential competition would compel antitrust authorities and courts to make increasingly speculative assessments concerning the counterfactual setting of proposed acquisitions.

In simple terms, it is far easier to determine whether a merger between McDonald’s and Burger King would lead to increased hamburger prices in the short run than it is to determine whether a gaming platform like Steam or the Epic Games Store might someday compete with video-streaming or music-subscription platforms like Netflix or Spotify. It is not that the above models are necessarily wrong, but rather that applying them to practical cases would require antitrust enforcers to estimate mostly unknowable factors.

Finally, the real test for regulators is not just whether they can identify possibly anticompetitive mergers, but whether they can do so in a cost-effective manner. Whether it is desirable to implement a given legal test is not simply a function of its accuracy, the cost to administer it, and the respective costs of false positives and false negatives. It also critically depends on how prevalent the conduct is that adjudicators would be seeking to foreclose.

Consider two hypothetical settings. Imagine there are 10,000 tech mergers in a given year, of which either 1,000 or 2,500 are anticompetitive (the remainder are procompetitive or competitively neutral). Suppose that authorities can either attempt to identify anticompetitive mergers with 75% accuracy, or perform no test at all—i.e., letting all mergers go through unchallenged.

If there are 1,000 anticompetitive mergers, applying the test would result in 7,500 correct decisions and 2,500 incorrect ones (2,250 false positives and 250 false negatives). Doing nothing would lead to 9,000 correct decisions and 1,000 false negatives. If the number of anticompetitive deals were 2,500, applying the test would lead to the same number of incorrect decisions as not applying it (1,875 false positives and 625 false negatives, versus 2,500 false negatives). The advantage would tilt toward applying the test if anticompetitive mergers were even more widespread.
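The arithmetic behind this hypothetical is easy to verify with a few lines of code. The sketch below simply restates the example's own assumptions (10,000 mergers, a test with symmetric 75% accuracy, and two assumed counts of anticompetitive deals); none of these figures are empirical estimates.

```python
def error_counts(total, bad, accuracy):
    """Errors from applying a screening test with symmetric accuracy.

    A merger is classified correctly with probability `accuracy`,
    whether it is anticompetitive ("bad") or benign.
    """
    good = total - bad
    false_negatives = bad * (1 - accuracy)    # bad deals waved through
    false_positives = good * (1 - accuracy)   # good deals blocked
    return false_positives, false_negatives

total = 10_000
for bad in (1_000, 2_500):
    fp, fn = error_counts(total, bad, accuracy=0.75)
    no_test_errors = bad  # clearing everything misses every bad deal
    print(f"bad={bad}: test errors={fp + fn:.0f} "
          f"(fp={fp:.0f}, fn={fn:.0f}) vs. no-test errors={no_test_errors}")
```

Running this reproduces the figures in the text: with 1,000 anticompetitive deals the test makes 2,500 errors versus 1,000 from doing nothing, and at 2,500 anticompetitive deals the two approaches tie at 2,500 errors each.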

This hypothetical example holds a simple lesson for policymakers: the rarer the conduct that they are attempting to identify, the more accurate their identification method must be, and the more costly false negatives must be relative to false positives.

As discussed below, current empirical evidence does not suggest that anticompetitive mergers of this sort are particularly widespread, nor does it offer accurate heuristics to detect the ones that are. Finally, there is little sense that the cost of false negatives significantly outweighs that of false positives. In short, there is currently little evidence to suggest that tougher enforcement would benefit consumers.

Killer Acquisitions

Killer acquisitions are, effectively, a subset of the “potential competitor” mergers discussed in the previous section. As defined by Colleen Cunningham, Florian Ederer, and Song Ma, they are those deals where “an incumbent firm may acquire an innovative target and terminate the development of the target’s innovations to preempt future competition.”

Cunningham, Ederer, and Ma’s highly influential paper on killer acquisitions has been responsible for much of the recent renewed interest in the effect that mergers exert on innovation. The authors studied thousands of pharmaceutical mergers and concluded that between 5.3% and 7.4% of them were killer acquisitions. As they write:

[W]e empirically compare development probabilities of overlapping acquisitions, which are, in our theory, motivated by a mix of killer and development intentions, and non-overlapping acquisitions, which are motivated only by development intentions. We find an increase in acquisition probability and a decrease in post-acquisition development for overlapping acquisitions and interpret that as evidence for killer acquisitions. […]

[W]e find that projects acquired by an incumbent with an overlapping drug are 23.4% less likely to have continued development activity compared to drugs acquired by non-overlapping incumbents.

From a policy standpoint, the question is what weight antitrust authorities, courts, and legislators should give to these findings. Stated differently, does the paper provide sufficient evidence to warrant reform of existing merger-filing thresholds and review standards? There are several factors counseling that policymakers should proceed with caution.

To start, the study’s industry-specific methodology means that it may not be a useful guide to understanding acquisitions in other industries, such as the tech sector.

Second, even if one assumes that the findings of Cunningham, et al., are correct and apply with equal force in the tech sector (as some official reports have), it remains unclear whether the 5.3–7.4% of mergers they describe warrant a departure from the status quo.

Antitrust enforcers operate under uncertainty. The critical policy question is thus whether this subset of anticompetitive deals can be identified ex-ante. If not, is there a heuristic that would enable enforcers to identify more of these anticompetitive deals without producing excessive false positives?

The authors focus on the effect that overlapping R&D pipelines have on project discontinuations. In the case of non-overlapping mergers, acquired projects continue 17.5% of the time, while this number is 13.4% when there are overlapping pipelines. The authors argue that this gap is evidence of killer acquisitions. But this argument misses the bigger picture: under the authors’ own numbers and definition of a “killer acquisition,” the vast majority of overlapping acquisitions are perfectly benign; prohibiting them would thus have important social costs.
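As a rough illustration of that point, the figures cited above can be combined in a few lines of arithmetic (illustrative only; every percentage is taken from the Cunningham, Ederer, and Ma discussion in the text):

```python
# Continuation rates for acquired projects, as reported in the text
non_overlap_continue = 0.175   # non-overlapping mergers
overlap_continue = 0.134       # overlapping mergers

# The paper's "23.4% less likely" figure is the relative drop
relative_drop = (non_overlap_continue - overlap_continue) / non_overlap_continue
print(f"relative drop in continued development: {relative_drop:.1%}")  # 23.4%

# Even taking the 5.3-7.4% killer-acquisition estimate at face value,
# the complementary share of studied deals is benign
for killer_share in (0.053, 0.074):
    print(f"implied benign share: {1 - killer_share:.1%}")  # ~94.7% and ~92.6%
```

The point of the exercise is simply that the headline relative decline coexists with an overwhelmingly benign base rate, which is why a blanket prohibition on overlapping deals would sweep in far more procompetitive transactions than killer acquisitions.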

Third, there are several problems with describing this kind of behavior as harmful. Indeed, Cunningham, et al., themselves acknowledge that such acquisitions could increase innovation by boosting the returns to innovation.

And even if one ignores incentives to innovate, product discontinuations can improve consumer welfare. This question ultimately boils down to identifying the counterfactual to a merger. As John Yun writes:

For instance, an acquisition that results in a discontinued product is not per se evidence of either consumer harm or benefit. The answer involves comparing the counterfactual world without the acquisition with the world with the acquisition. The comparison includes potential efficiencies that were gained from the acquisition, including integration of intellectual property, the reduction of transaction costs, economies of scope, and better allocation of skilled labor.

One of the reasons R&D project discontinuation may be beneficial is simply cost savings. R&D is expensive. Pharmaceutical firms spend up to 27.8% of their annual revenue on R&D. Developing a new drug has an estimated median cost of $985.3 million. Cost-cutting—notably as it concerns R&D—is thus a critical part of pharmaceutical (as well as tech) companies’ businesses. As a report by McKinsey concludes:

The recent boom in M&A in the pharma industry is partly the result of attempts to address short-term productivity challenges. An acquiring or merging company typically designs organization-wide integration programs to capture synergies, especially in costs. Such programs usually take up to three years to complete and deliver results.

Another report finds that:

Maximizing the efficiency of production labor and equipment is one important way top-quartile drugmakers break out of the pack. Their rates of operational-equipment effectiveness are more than twice those of bottom-quartile companies (Exhibit 1), and when we looked closely we found that processes account for two-thirds of the difference.

In short, pharmaceutical companies do not compete just along innovation-related parameters, important as these obviously are, but also on more traditional grounds, such as cost rationalization. Accordingly, as the above reports suggest, pharmaceutical mergers are often about applying an incumbent’s superior managerial efficiency to the acquired firm’s assets through operation of the market for corporate control.

This cost-cutting (and superior project selection) ultimately enables companies to offer lower prices, thereby benefiting consumers, and it increases firms’ incentives to invest in R&D in the first place by making successfully developed drugs more profitable.

In that sense, Henry Manne’s seminal work relating to mergers and the market for corporate control sheds at least as much light on pharmaceutical (and tech) mergers as the killer acquisitions literature. And yet, it is hardly ever mentioned in modern economic literature on this topic.

While Colleen Cunningham and her co-authors do not entirely ignore these considerations, as we discuss in our paper, their arguments for dismissing them are far from watertight.

A natural extension of the killer acquisitions work is to question whether mergers of this sort also take place in the tech industry. Interest in this question is notably driven by the central role that digital markets currently occupy in competition-policy discussion, but also by the significant number of startup acquisitions that take place in the tech industry. However, existing studies provide scant evidence that killer acquisitions are a common occurrence in these markets.

This is not surprising. Unlike in the pharmaceutical industry—where drugs need to go through a lengthy and visible regulatory pipeline before they can be sold—incumbents in digital industries will likely struggle to identify their closest rivals and prevent firms from rapidly pivoting to seize new commercial opportunities. As a result, the basic conditions for killer acquisitions to take place (i.e., firms knowing they are in a position to share monopoly profits) are less likely to be present; it also would be harder to design research methods to detect these mergers.

The empirical literature on killer acquisitions in the tech sector is still in its infancy. But, as things stand, no study directly examines whether killer acquisitions actually take place in digital industries (i.e., whether post-merger project discontinuations are more common in overlapping than non-overlapping tech mergers). This is true, notably, of the studies by Axel Gautier & Joe Lamesch and by Elena Argentesi and her co-authors. Instead, these studies merely show that product discontinuations are common after an acquisition by a big tech company.

To summarize, while studies of this sort might suggest that the clearance of certain mergers may not have been optimal, they hardly provide a sufficient basis on which to argue that enforcement should be tightened.

The reason for this is simple. The fact that some anticompetitive mergers may have escaped scrutiny and/or condemnation is never a sufficient basis to tighten rules. For that, it is also necessary to factor in the administrative costs of increased enforcement, as well as the false positives to which it might give rise. As things stand, economic research on killer acquisitions in the tech sector does not warrant tougher antitrust enforcement, though it does show the need for further empirical research on the topic.

Conclusion

Many proposed merger-enforcement reforms risk throwing the baby out with the bathwater. Mergers are largely beneficial to society (here, here and here); anticompetitive ones are rare; and there is little way, at the margin, to tell good from bad. To put it mildly, there is a precious baby that needs to be preserved and relatively little bathwater to throw out.

Take the fulcrum of policy debates that is the pharmaceutical industry. It is not hard to point to pharmaceutical mergers (or long-term agreements) that have revolutionized patient outcomes. Most recently, the successful effort by Pfizer and BioNTech to market an mRNA vaccine against COVID-19 offers a case in point.

The deal struck by both firms could naïvely be construed as bearing hallmarks of a killer acquisition or an anticompetitive agreement (long-term agreements can easily fall into either of these categories). Pfizer was a powerful incumbent in the vaccine industry; BioNTech threatened to disrupt the industry with new technology; and the deal likely caused Pfizer to forgo some independent R&D efforts. And yet, it also led to the first approved COVID-19 vaccine and groundbreaking advances in vaccine technology.

Of course, the counterfactual is unclear, and the market might be more competitive absent the deal, just as there might be only one approved mRNA vaccine today instead of two—we simply do not know. More importantly, this counterfactual was even less knowable at the time of the deal. And much the same could be said about countless other pharmaceutical mergers.

The key policy question is how authorities should handle this uncertainty. Critics of the status quo argue that current rules and thresholds leave certain anticompetitive deals unchallenged. But these calls for tougher enforcement fail to satisfy the requirements of the error-cost framework. Critics have so far failed to show that, on balance, mergers harm social welfare—even overlapping ones or mergers between potential competitors—just as they are yet to suggest alternative institutional arrangements that would improve social welfare.

In other words, they mistakenly analyze purported false negatives of merger-enforcement regimes in isolation. In doing so, they ignore how measures that aim to reduce such judicial errors may lead to other errors, as well as higher enforcement costs. In short, they paint a world where policy decisions involve facile tradeoffs, and this undermines their policy recommendations.

Given these significant limitations, this body of academic research should be met with an appropriate degree of caution. For all the criticism it has faced, the current merger-review system is mostly a resounding success. It is administrable, predictable, and timely. It also eliminates the vast majority of judicial errors: even its critics concede that false negatives make up only a tiny fraction of decisions. Policymakers must decide whether the benefits from catching the very few arguably anticompetitive mergers that currently escape prosecution outweigh the significant costs required to achieve this goal. There is currently little evidence to suggest that this is, indeed, the case.

The language of the federal antitrust laws is extremely general. Over more than a century, the federal courts have applied common-law techniques to construe this general language to provide guidance to the private sector as to what does or does not run afoul of the law. The interpretive process has been fraught with some uncertainty, as judicial approaches to antitrust analysis have changed several times over the past century. Nevertheless, until very recently, judges and enforcers had converged toward relying on a consumer welfare standard as the touchstone for antitrust evaluations (see my antitrust primer here, for an overview).

While imperfect and subject to potential error in application—a problem of legal interpretation generally—the consumer welfare principle has worked rather well as the focus both for antitrust-enforcement guidance and judicial decision-making. The general stability and predictability of antitrust under a consumer welfare framework has advanced the rule of law. It has given businesses sufficient information to plan transactions in a manner likely to avoid antitrust liability. It thereby has cabined uncertainty and increased the probability that private parties would enter welfare-enhancing commercial arrangements, to the benefit of society.

In a very thoughtful 2017 speech, then-Acting Assistant Attorney General for Antitrust Andrew Finch commented on the importance of the rule of law to principled antitrust enforcement. He noted:

[H]ow do we administer the antitrust laws more rationally, accurately, expeditiously, and efficiently? … Law enforcement requires stability and continuity both in rules and in their application to specific cases.

Indeed, stability and continuity in enforcement are fundamental to the rule of law. The rule of law is about notice and reliance. When it is impossible to make reasonable predictions about how a law will be applied, or what the legal consequences of conduct will be, these important values are diminished. To call our antitrust regime a “rule of law” regime, we must enforce the law as written and as interpreted by the courts and advance change with careful thought.

The reliance fostered by stability and continuity has obvious economic benefits. Businesses invest, not only in innovation but in facilities, marketing, and personnel, and they do so based on the economic and legal environment they expect to face.

Of course, we want businesses to make those investments—and shape their overall conduct—in accordance with the antitrust laws. But to do so, they need to be able to rely on future application of those laws being largely consistent with their expectations. An antitrust enforcement regime with frequent changes is one that businesses cannot plan for, or one that they will plan for by avoiding certain kinds of investments.

That is certainly not to say there has not been positive change in the antitrust laws in the past, or that we would have been better off without those changes. U.S. antitrust law has been refined, and occasionally recalibrated, with the courts playing their appropriate interpretive role. And enforcers must always be on the watch for new or evolving threats to competition.  As markets evolve and products develop over time, our analysis adapts. But as those changes occur, we pursue reliability and consistency in application in the antitrust laws as much as possible.

Indeed, we have enjoyed remarkable continuity and consensus for many years. Antitrust law in the U.S. has not been a “paradox” for quite some time, but rather a stable and valuable law enforcement regime with appropriately widespread support.

Unfortunately, policy decisions taken by the new Federal Trade Commission (FTC) leadership in recent weeks have rejected antitrust continuity and consensus. They have injected substantial uncertainty into the application of competition-law enforcement by the FTC. This abrupt change in emphasis undermines the rule of law and threatens to reduce economic welfare.

As of now, the FTC’s departure from the rule of law has been notable in two areas:

  1. Its rejection of previous guidance on the agency’s “unfair methods of competition” authority, the FTC’s primary non-merger-related enforcement tool; and
  2. Its new advice rejecting time limits for the review of generally routine proposed mergers.

In addition, potential FTC rulemakings directed at “unfair methods of competition” would, if pursued, prove highly problematic.

Rescission of the Unfair Methods of Competition Policy Statement

The FTC on July 1 voted 3-2 to rescind the 2015 FTC Policy Statement Regarding Unfair Methods of Competition under Section 5 of the FTC Act (UMC Policy Statement).

The bipartisan UMC Policy Statement was originally supported by all three Democratic commissioners, including then-Chairwoman Edith Ramirez. The policy statement generally respected and promoted the rule of law by emphasizing that, in applying the facially broad “unfair methods of competition” (UMC) language, the FTC would be guided by the well-established principles of the antitrust rule of reason (including considering any associated cognizable efficiencies and business justifications) and the consumer welfare standard. The FTC also explained that it would not apply “standalone” Section 5 theories to conduct that would violate the Sherman or Clayton Acts.

In short, the UMC Policy Statement sent a strong signal that the commission would apply UMC in a manner fully consistent with accepted and well-understood antitrust policy principles. As in the past, the vast bulk of FTC Section 5 prosecutions would be brought against conduct that violated the core antitrust laws. Standalone Section 5 cases would be directed solely at those few practices that harmed consumer welfare and competition, but somehow fell into a narrow crack in the basic antitrust statutes (such as, perhaps, “invitations to collude” that lack plausible efficiency justifications). Although the UMC Statement did not answer all questions regarding what specific practices would justify standalone UMC challenges, it substantially limited business uncertainty by bringing Section 5 within the boundaries of settled antitrust doctrine.

The FTC’s announcement of the UMC Policy Statement rescission unhelpfully proclaimed that “the time is right for the Commission to rethink its approach and to recommit to its mandate to police unfair methods of competition even if they are outside the ambit of the Sherman or Clayton Acts.” As a dissenting statement by Commissioner Christine S. Wilson warned, consumers would be harmed by the commission’s decision to prioritize other unnamed interests. And as Commissioner Noah Joshua Phillips stressed in his dissent, the end result would be reduced guidance and greater uncertainty.

In sum, by suddenly leaving private parties in the dark as to how to conform themselves to Section 5’s UMC requirements, the FTC’s rescission offends the rule of law.

New Guidance to Parties Considering Mergers

For decades, parties proposing mergers that are subject to statutory Hart-Scott-Rodino (HSR) Act pre-merger notification requirements have operated under the understanding that:

  1. The FTC and U.S. Justice Department (DOJ) will routinely grant “early termination” of review (before the end of the initial 30-day statutory review period) to those transactions posing no plausible competitive threat; and
  2. An enforcement agency’s decision not to request more detailed documents (“second requests”) after an initial 30-day pre-merger review effectively serves as an antitrust “green light” for the proposed acquisition to proceed.

Those understandings, though not statutorily mandated, have significantly reduced antitrust uncertainty and related costs in the planning of routine merger transactions. The rule of law has been advanced through an effective assurance that business combinations that appear presumptively lawful will not be the target of future government legal harassment. This has advanced efficiency in government, as well; it is an optimal use of resources for DOJ and the FTC to focus exclusively on those proposed mergers that present a substantial potential threat to consumer welfare.

Two recent FTC pronouncements (one in tandem with DOJ), however, have generated great uncertainty by disavowing (at least temporarily) those two welfare-promoting review policies. Joined by DOJ, the FTC on Feb. 4 announced that the agencies would temporarily suspend early terminations, citing an “unprecedented volume of filings” and a transition to new leadership. More than six months later, this “temporary” suspension remains in effect.

Citing “capacity constraints” and a “tidal wave of merger filings,” the FTC subsequently published an Aug. 3 blog post that effectively abrogated the 30-day “green lighting” of mergers not subject to a second request. It announced that it was sending “warning letters” to firms reminding them that FTC investigations remain open after the initial 30-day period, and that “[c]ompanies that choose to proceed with transactions that have not been fully investigated are doing so at their own risk.”

The FTC’s actions inject unwarranted uncertainty into merger planning and undermine the rule of law. Denying early termination to transactions of the sort that routinely has been approved not only imposes additional costs on business; it also hints that some transactions might be subject to novel theories of liability that fall outside the antitrust consensus.

Perhaps more significantly, as three prominent antitrust practitioners point out, the FTC’s warning letters state that:

[T]he FTC may challenge deals that “threaten to reduce competition and harm consumers, workers, and honest businesses.” Adding in harm to both “workers and honest businesses” implies that the FTC may be considering more ways that transactions can have an adverse impact other than just harm to competition and consumers [citation omitted].

Because consensus antitrust merger analysis centers on consumer welfare, not the protection of labor or business interests, any suggestion that the FTC may be extending its reach to these new areas is inconsistent with established legal principles and generates new business-planning risks.

More generally, the Aug. 6 FTC “blog post could be viewed as an attempt to modify the temporal framework of the HSR Act”—in effect, an effort to displace an implicit statutory understanding in favor of an agency diktat, contrary to the rule of law. Commissioner Wilson sees the blog post as a means to keep investigations open indefinitely and, thus, an attack on the decades-old HSR framework for handling most merger reviews in an expeditious fashion (see here). Commissioner Phillips is concerned about an attempt to chill legal M&A transactions across the board, particularly unfortunate when there is no reason to conclude that particular transactions are illegal (see here).

Finally, the historical record raises serious questions about the “resource constraint” justification for the FTC’s new merger review policies:

Through the end of July 2021, more than 2,900 transactions were reported to the FTC. It is not clear, however, whether these record-breaking HSR filing numbers have led (or will lead) to more deals being investigated. Historically, only about 13 percent of all deals reported are investigated in some fashion, and roughly 3 percent of all deals reported receive a more thorough, substantive review through the issuance of a Second Request. Even if more deals are being reported, for the majority of transactions, the HSR process is purely administrative, raising no antitrust concerns, and, theoretically, uses few, if any, agency resources. [Citations omitted.]

Proposed FTC Competition Rulemakings

The new FTC leadership is strongly considering competition rulemakings. As I explained in a recent Truth on the Market post, such rulemakings would fail a cost-benefit test. They raise serious legal risks for the commission and could impose wasted resource costs on the FTC and on private parties. More significantly, they would raise two very serious economic policy concerns:

First, competition rules would generate higher error costs than adjudications. Adjudications cabin error costs by allowing for case-specific analysis of likely competitive harms and procompetitive benefits. In contrast, competition rules inherently would be overbroad and would suffer from a very high rate of false positives. By characterizing certain practices as inherently anticompetitive without allowing for consideration of case-specific facts bearing on actual competitive effects, findings of rule violations inevitably would condemn some (perhaps many) efficient arrangements.

Second, competition rules would undermine the rule of law and thereby reduce economic welfare. FTC-only competition rules could lead to disparate legal treatment of a firm’s business practices, depending upon whether the FTC or the U.S. Justice Department was the investigating agency. Also, economic efficiency gains could be lost due to the chilling of aggressive efficiency-seeking business arrangements in those sectors subject to rules. [Emphasis added.]

In short, common law antitrust adjudication, focused on the consumer welfare standard, has done a good job of promoting a vibrant competitive economy in an efficient fashion. FTC competition rulemaking would not.

Conclusion

Recent FTC actions have undermined consensus antitrust-enforcement standards and have departed from established merger-review procedures with respect to seemingly uncontroversial consolidations. Those decisions have imposed costly uncertainty on the business sector and are thereby likely to disincentivize efficiency-seeking arrangements. What’s more, by implicitly rejecting consensus antitrust principles, they denigrate the primacy of the rule of law in antitrust enforcement. The FTC’s pursuit of competition rulemaking would further damage the rule of law by imposing arbitrary strictures that ignore matter-specific considerations bearing on the justifications for particular business decisions.

Fortunately, these are early days in the Biden administration. The problematic initial policy decisions delineated in this comment could be reversed based on further reflection and deliberation within the commission. Chairwoman Lina Khan and her fellow Democratic commissioners would benefit by consulting more closely with Commissioners Wilson and Phillips to reach agreement on substantive and procedural enforcement policies that are better tailored to promote consumer welfare and enhance vibrant competition. Such policies would benefit the U.S. economy in a manner consistent with the rule of law.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Nicolas Petit himself, the Joint Chair in Competition Law at the Department of Law at European University Institute in Fiesole, Italy, and at EUI’s Robert Schuman Centre for Advanced Studies. He is also invited professor at the College of Europe in Bruges.]

A lot of water has gone under the bridge since my book was published last year. To close this symposium, I thought I would discuss the new phase of antitrust statutorification taking place before our eyes. In the United States, Congress is working on five antitrust bills that propose to subject platforms to stringent obligations, including a ban on mergers and acquisitions, required data portability and interoperability, and line-of-business restrictions. In the European Union (EU), lawmakers are examining the proposed Digital Markets Act (“DMA”) that sets out a complicated regulatory system for digital “gatekeepers,” with per se behavioral limitations of their freedom over contractual terms, technological design, monetization, and ecosystem leadership.

Proponents of legislative reform on both sides of the Atlantic appear to share the common view that ongoing antitrust adjudication efforts are both instrumental and irrelevant. They are instrumental because government (or plaintiff) losses build the evidence needed to support the view that antitrust doctrine is exceedingly conservative, and that legal reform is needed. Two weeks ago, antitrust reform activists ran to Twitter to point out that the U.S. District Court dismissal of the Federal Trade Commission’s (FTC) complaint against Facebook was one more piece of evidence supporting the view that the antitrust pendulum needed to swing. They are instrumental because, again, government (or plaintiff) wins will support scaling antitrust enforcement in the marginal case by adoption of governmental regulation. In the EU, antitrust cases follow one another almost as night follows day, lending credence to the view that regulation will bring much-needed coordination and economies of scale.

But both instrumentalities are, at the end of the line, irrelevant, because they lead to the same conclusion: legislative reform is long overdue. With this in mind, the logic of lawmakers is that they need not await the courts, and they can advance with haste and confidence toward the promulgation of new antitrust statutes.

The antitrust reform process that is unfolding raises serious questions. The issue is not legal reform in itself. There is no suggestion here that statutory reform is necessarily inferior, and no correlative reification of the judge-made-law method. Legislative intervention can occur for good reason, like when it breaks judicial inertia caused by ideological logjam.

The issue is rather one of haste. There is a lot of learning in the cases. The point, simply put, is that a supplementary court-legislative dialogue would yield additional information—or what Guido Calabresi has called “starting points” for regulation—that premature legislative intervention is sweeping under the rug. This issue is important because specification errors (see Doug Melamed’s symposium piece on this) in statutory legislation are not uncommon. Feedback from court cases creates a factual record that will often be missing when lawmakers act too precipitously.

Moreover, a court-legislative iteration is useful when the issues in discussion are cross-cutting. The digital economy brings an abundance of them. As tech analyst Ben Evans has observed, data-sharing obligations raise tradeoffs between contestability and privacy. Chapter VI of my book shows that breakups of social networks or search engines might promote rivalry and, at the same time, increase the leverage of advertisers to extract more user data and conduct more targeted advertising. In such cases, Calabresi said, judges who know the legal topography are well-placed to elicit the preferences of society. He added that they are better placed than government agencies’ officials or delegated experts, who often attend to the immediate problem without the big picture in mind (all the more when officials are denied opportunities to engage with civil society and the press, as per the policy announced by the new FTC leadership).

Of course, there are three objections to this. The first consists of arguing that statutes are needed now because courts are too slow to deal with problems. The argument is not dissimilar to Frank Easterbrook’s concerns about irreversible harms to the economy, though with a tweak. Where Easterbrook’s concern was one of ossification of Type I errors due to stare decisis, the concern here is one of entrenchment of durable monopoly power in the digital sector due to Type II errors. The concern, however, fails the test of evidence. The available data in both the United States and Europe shows unprecedented vitality in the digital sector. Venture capital funding cruises at historical heights, fueling new firm entry, business creation, and economic dynamism in the U.S. and EU digital sectors, topping all other industries. Unless we require higher levels of entry from digital markets than from other industries—or discount the social value of entry in the digital sector—this should give us reason to push pause on lawmaking efforts.

The second objection is that following an incremental process of updating the law through the courts creates intolerable uncertainty. But this objection, too, is unconvincing, at best. One may ask which of an abrupt legislative change of the law after decades of legal stability or of an experimental process of judicial renovation brings more uncertainty.

Besides, ad hoc statutes, such as the ones in discussion, are likely to pose, quickly and dramatically, the problem of their own legal obsolescence. Detailed and technical statutes specify rights, requirements, and procedures that often do not stand the test of time. For example, the DMA likely captures Windows as a core platform service subject to gatekeeping. But is the market power of Microsoft over Windows still relevant today, and isn’t it constrained in effect by existing antitrust rules? In antitrust, vagueness in critical statutory terms allows room for change.[1] The best way to give meaning to buzzwords like “smart” or “future-proof” regulation consists of building in first principles, not in creating discretionary opportunities for permanent adaptation of the law. In reality, it is hard to see how the methods of future-proof regulation currently discussed in the EU create less uncertainty than a court process.

The third objection is that we do not need more information, because we now benefit from economic knowledge showing that existing antitrust laws are too permissive of anticompetitive business conduct. But is the economic literature actually supportive of stricter rules against defendants than the rule-of-reason framework that applies in many unilateral conduct cases and in merger law? The answer is surely no. The theoretical economic literature has come a long way in the past 50 years. Of particular interest are works on network externalities, switching costs, and multi-sided markets. But the progress achieved in the economic understanding of markets is more descriptive than normative.

Take the celebrated multi-sided market theory. The main contribution of the theory is its advice to decision-makers to take the periscope out, so as to consider all possible welfare tradeoffs, not to be more or less defendant friendly. Payment cards provide a good example. Economic research suggests that any antitrust or regulatory intervention on prices affects tradeoffs between, and payoffs to, cardholders and merchants, cardholders and cash users, cardholders and banks, and banks and card systems. Equally numerous tradeoffs arise in many sectors of the digital economy, like ridesharing, targeted advertisement, or social networks. Multi-sided market theory renders these tradeoffs visible. But it does not come with a clear recipe for how to solve them. For that, one needs to follow first principles. A system of measurement that is flexible and welfare-based helps, as Kelly Fayne observed in her critical symposium piece on the book.

Another example might be worth considering. The theory of increasing returns suggests that markets subject to network effects tend to converge around the selection of a single technology standard, and it is not a given that the selected technology is the best one. One policy implication is that social planners might be justified in keeping a second option on the table. As I discuss in Chapter V of my book, the theory may support an M&A ban against platforms in tipped markets, on the conjecture that the assets of fringe firms might be efficiently repositioned to offer product differentiation to consumers. But the theory of increasing returns does not say under what conditions we can know that the selected technology is suboptimal. Moreover, if the selected technology is the optimal one, or if the suboptimal technology quickly obsolesces, are policy efforts at all needed?

Last, as Bo Heiden’s thought-provoking symposium piece argues, it is not a given that antitrust enforcement of rivalry in markets is the best way to keep an alternative technology alive, let alone to supply the innovation needed to deliver economic prosperity. Government procurement, science and technology policy, and intellectual-property policy might be equally effective (note that the fathers of the theory, like Brian Arthur or Paul David, have been very silent on antitrust reform).

There are, of course, exceptions to the limited normative content of modern economic theory. In some areas, economic theory is more predictive of consumer harms, like in relation to algorithmic collusion, interlocking directorates, or “killer” acquisitions. But the applications are discrete and industry-specific. All are insufficient to declare that the antitrust apparatus is dated and that it requires a full overhaul. When modern economic research turns normative, it is often way more subtle in its implications than some wild policy claims derived from it. For example, the emerging studies that claim to identify broad patterns of rising market power in the economy in no way lead to an implication that there are no pro-competitive mergers.

Similarly, the empirical picture of digital markets is incomplete. The past few years have seen a proliferation of qualitative research reports on industry structure in the digital sectors. Most suggest that industry concentration has risen, particularly in the digital sector. As with any research exercise, these reports’ findings deserve to be subject to critical examination before they can be deemed supportive of a claim of “sufficient experience.” Moreover, there is no reason to subject these reports to a lower standard of accountability on grounds that they have often been drafted by experts upon demand from antitrust agencies. After all, we academics are ethically obliged to be at least equally exacting with policy-based research as we are with science-based research.

Now, with healthy skepticism at the back of one’s mind, one can see immediately that the findings of expert reports to date have tended to downplay behavioral observations that counterbalance findings of monopoly power—such as intense business anxiety, technological innovation, and demand-expansion investments in digital markets. This was, I believe, the main takeaway from Chapter IV of my book. And less than six months ago, The Economist ran its leading story on the new marketplace reality of “Tech’s Big Dust-Up.”

More importantly, the findings of the various expert reports never seriously contemplate the possibility of competition by differentiation in business models among the platforms. Take privacy, for example. As Peter Klein reasonably writes in his symposium article, we should not be quick to assume market failure. After all, we might have more choice than meets the eye, with Google free but ad-based, and Apple pricy but less-targeted. More generally, Richard Langlois makes a very convincing point that diversification is at the heart of competition between the large digital gatekeepers. We might just be too short-termist—here, digital communications technology might help create a false sense of urgency—to wait for the end state of the Big Tech moligopoly.

Similarly, the expert reports did not really question the real possibility of competition for the purchase of regulation. As in the classic George Stigler paper, where the railroad industry fought motor-trucking competition with state regulation, the businesses that stand to lose most from the digital transformation might be rationally jockeying to convince lawmakers that not all business models are equal, and to steer regulation toward specific business models. Again, though we do not yet know how to weigh this issue, there are signs that a coalition of large news corporations and the publishing oligopoly are behind many antitrust initiatives against digital firms.

As is now clear from these few lines, my cautionary note against antitrust statutorification might be more relevant to the U.S. market. In the EU, sunk investments have been made, expectations have been created, and regulation has now become inevitable. The United States, however, has a chance to get this right. Court cases are the way to go. And unlike what the popular coverage suggests, the recent District Court dismissal of the FTC case by no means ruled out the applicability of U.S. antitrust laws to Facebook’s alleged killer acquisitions. On the contrary, the ruling actually contains an invitation to rework a rushed complaint. Perhaps, as Shane Greenstein observed in his retrospective analysis of the U.S. Microsoft case, we would all benefit if we studied more carefully the learning that lies in the cases, rather than rush to produce instant antitrust analysis on Twitter that fits within 280 characters.


[1] But some threshold conditions like agreement or dominance might also become dated. 

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Geoffrey A. Manne is the president and founder of the International Center for Law and Economics.]

I’m delighted to add my comments to the chorus of voices honoring Ajit Pai’s remarkable tenure at the Federal Communications Commission. I’ve known Ajit longer than most. We were classmates in law school … let’s just say “many” years ago. Among the other symposium contributors I know of only one—fellow classmate, Tom Nachbar—who can make a similar claim. I wish I could say this gives me special insight into his motivations, his actions, and the significance of his accomplishments, but really it means only that I have endured his dad jokes and interminable pop-culture references longer than most. 

But I can say this: Ajit has always stood out as a genuinely humble, unfailingly gregarious, relentlessly curious, and remarkably intelligent human being, and he deployed these characteristics to great success at the FCC.   

Ajit’s tenure at the FCC was marked by an abiding appreciation for the importance of competition, both as a guiding principle for new regulations and as a touchstone to determine when to challenge existing ones. As others have noted (and as we have written elsewhere), that approach was reflected significantly in the commission’s Restoring Internet Freedom Order, which made competition—and competition enforcement by the antitrust agencies—the centerpiece of the agency’s approach to net neutrality. But I would argue that perhaps Chairman Pai’s greatest contribution to bringing competition to the forefront of the FCC’s mandate came in his work on media modernization.

Fairly early in his tenure at the commission, Ajit raised concerns with the FCC’s failure to modernize its media-ownership rules. In response to the FCC’s belated effort to initiate the required 2010 and 2014 Quadrennial Reviews of those rules, then-Commissioner Pai noted that the commission had abdicated its responsibility under the statute to promote competition. Not only was the FCC proposing to maintain a host of outdated existing rules, but it was also moving to impose further constraints (through new limitations on the use of Joint Sales Agreements (JSAs)). As Ajit noted, such an approach was antithetical to competition:

In smaller markets, the choice is not between two stations entering into a JSA and those same two stations flourishing while operating completely independently. Rather, the choice is between two stations entering into a JSA and at least one of those stations’ viability being threatened. If stations in these smaller markets are to survive and provide many of the same services as television stations in larger markets, they must cut costs. And JSAs are a vital mechanism for doing that.

The efficiencies created by JSAs are not a luxury in today’s digital age. They are necessary, as local broadcasters face fierce competition for viewers and advertisers.

Under then-Chairman Tom Wheeler, the commission voted to adopt the Quadrennial Review in 2016, issuing rules that largely maintained the status quo and, at best, paid tepid lip service to the massive changes in the competitive landscape. As Ajit wrote in dissent:

The changes to the media marketplace since the FCC adopted the Newspaper-Broadcast Cross-Ownership Rule in 1975 have been revolutionary…. Yet, instead of repealing the Newspaper-Broadcast Cross-Ownership Rule to account for the massive changes in how Americans receive news and information, we cling to it.

And over the near-decade since the FCC last finished a “quadrennial” review, the video marketplace has transformed dramatically…. Yet, instead of loosening the Local Television Ownership Rule to account for the increasing competition to broadcast television stations, we actually tighten that regulation.

And instead of updating the Local Radio Ownership Rule, the Radio-Television Cross-Ownership Rule, and the Dual Network Rule, we merely rubber-stamp them.

The more the media marketplace changes, the more the FCC’s media regulations stay the same.

As Ajit also accurately noted at the time:

Soon, I expect outside parties to deliver us to the denouement: a decisive round of judicial review. I hope that the court that reviews this sad and total abdication of the administrative function finds, once and for all, that our media ownership rules can no longer stay stuck in the 1970s consistent with the Administrative Procedure Act, the Communications Act, and common sense. The regulations discussed above are as timely as “rabbit ears,” and it’s about time they go the way of those relics of the broadcast world. I am hopeful that the intervention of the judicial branch will bring us into the digital age.

And, indeed, just this week the case was argued before the Supreme Court.

In the interim, however, Ajit became Chairman of the FCC. And in his first year in that capacity, he took up a reconsideration of the 2016 Order. This 2017 Order on Reconsideration is the one that finally came before the Supreme Court. 

Consistent with his unwavering commitment to promote media competition—and no longer a minority commissioner shouting into the wind—Chairman Pai put forward a proposal substantially updating the media-ownership rules to reflect the dramatically changed market realities facing traditional broadcasters and newspapers:

Today we end the 2010/2014 Quadrennial Review proceeding. In doing so, the Commission not only acknowledges the dynamic nature of the media marketplace, but takes concrete steps to update its broadcast ownership rules to reflect reality…. In this Order on Reconsideration, we refuse to ignore the changed landscape and the mandates of Section 202(h), and we deliver on the Commission’s promise to adopt broadcast ownership rules that reflect the present, not the past. Because of our actions today to relax and eliminate outdated rules, broadcasters and local newspapers will at last be given a greater opportunity to compete and thrive in the vibrant and fast-changing media marketplace. And in the end, it is consumers that will benefit, as broadcast stations and newspapers—those media outlets most committed to serving their local communities—will be better able to invest in local news and public interest programming and improve their overall service to those communities.

Ajit’s approach was certainly deregulatory. But more importantly, it was realistic, well-reasoned, and responsive to changing economic circumstances. Unlike most of his predecessors, Ajit was unwilling to accede to the torpor of repeated judicial remands (on dubious legal grounds, as we noted in our amicus brief urging the Court to grant certiorari in the case), permitting facially and wildly outdated rules to persist in the face of massive and obvious economic change. 

Like Ajit, I am not one to advocate regulatory action lightly, especially in the face of (all-too-rare) judicial review that suggests an agency has exceeded its discretion. But in this case, the need for dramatic rule change—here, to deregulate—was undeniable. The only abuse of discretion was on the part of the court, not the agency. As we put it in our amicus brief:

[T]he panel vacated these vital reforms based on mere speculation that they would hinder minority and female ownership, rather than grounding its action on any record evidence of such an effect. In fact, the 2017 Reconsideration Order makes clear that the FCC found no evidence in the record supporting the court’s speculative concern.

…In rejecting the FCC’s stated reasons for repealing or modifying the rules, absent any evidence in the record to the contrary, the panel substituted its own speculative concerns for the judgment of the FCC, notwithstanding the FCC’s decades of experience regulating the broadcast and newspaper industries. By so doing, the panel exceeded the bounds of its judicial review powers under the APA.

Key to Ajit’s conclusion that competition in local media markets could be furthered by permitting more concentration was his awareness that the relevant market for analysis couldn’t be limited to traditional media outlets like broadcasters and newspapers; it must include the likes of cable networks, streaming video providers, and social-media platforms, as well. As Ajit put it in a recent speech:

The problem is a fundamental refusal to grapple with today’s marketplace: what the service market is, who the competitors are, and the like. When assessing competition, some in Washington are so obsessed with the numerator, so to speak—the size of a particular company, for instance—that they’ve completely ignored the explosion of the denominator—the full range of alternatives in media today, many of which didn’t exist a few years ago.

When determining a particular company’s market share, a candid assessment of the denominator should include far more than just broadcast networks or cable channels. From any perspective (economic, legal, or policy), it should include any kinds of media consumption that consumers consider to be substitutes. That could be TV. It could be radio. It could be cable. It could be streaming. It could be social media. It could be gaming. It could be still something else. The touchstone of that denominator should be “what content do people choose today?”, not “what content did people choose in 1975 or 1992, and how can we artificially constrict our inquiry today to match that?”

For some reason, this simple and seemingly undeniable conception of the market escapes virtually all critics of Ajit’s media-modernization agenda. Indeed, even Justice Stephen Breyer in this week’s oral argument seemed baffled by the notion that more concentration could entail more competition:

JUSTICE BREYER: I’m thinking of it solely as a — the anti-merger part, in — in anti-merger law, merger law generally, I think, has a theory, and the theory is, beyond a certain point and other things being equal, you have fewer companies in a market, the harder it is to enter, and it’s particularly harder for smaller firms. And, here, smaller firms are heavily correlated or more likely to be correlated with women and minorities. All right?

The opposite view, which is what the FCC has now chosen, is — is they want to move or allow to be moved towards more concentration. So what’s the theory that that wouldn’t hurt the minorities and women or smaller businesses? What’s the theory the opposite way, in other words? I’m not asking for data. I’m asking for a theory.

Of course, as Justice Breyer should surely know—and as I know Ajit Pai knows—counting the number of firms in a market is a horrible way to determine its competitiveness. In this case, the competition from internet media platforms, particularly for advertising dollars, is immense. A regulatory regime that prohibits traditional local-media outlets from forging efficient joint ventures or from obtaining the scale necessary to compete with those platforms does not further competition. Even if such a rule might temporarily result in more media outlets, eventually it would result in no media outlets, other than the large online platforms. The basic theory behind the Reconsideration Order—to answer Justice Breyer—is that outdated government regulation imposes artificial constraints on the ability of local media to adopt the organizational structures necessary to compete. Removing those constraints may not prove a magic bullet that saves local broadcasters and newspapers, but allowing the rules to remain absolutely ensures their demise. 

Ajit’s commitment to furthering competition in telecommunications markets remained steadfast throughout his tenure at the FCC. From opposing restrictive revisions to the agency’s spectrum screen to dissenting from the effort to impose a poorly conceived and retrograde regulatory regime on set-top boxes, to challenging the agency’s abuse of its merger review authority to impose ultra vires regulations, to, of course, rolling back his predecessor’s unsupportable Title II approach to net neutrality—and on virtually every issue in between—Ajit sought at every turn to create a regulatory backdrop conducive to competition.

Tom Wheeler, Pai’s predecessor at the FCC, claimed that his personal mantra was “competition, competition, competition.” His greatest legacy, in that regard, was in turning over the agency to Ajit.

In an age of antitrust populism on both ends of the political spectrum, federal and state regulators face considerable pressure to deploy the antitrust laws against firms that have dominant market shares. Yet federal case law makes clear that merely winning the race for a market is an insufficient basis for antitrust liability. Rather, any plaintiff must show that the winner either secured or is maintaining its dominant position through practices that go beyond vigorous competition. Any other principle would inhibit the competitive process that the antitrust laws are designed to promote. Federal judges, who enjoy life tenure, are far more insulated from outside pressures and therefore more likely to demand evidence of anticompetitive practices as a predicate condition for any determination of antitrust liability.

This separation of powers between the executive branch, which prosecutes alleged infractions of the law, and the judicial branch, which polices the prosecutor, is the simple genius behind the divided system of government generally attributed to the eighteenth-century French thinker, Montesquieu. The practical wisdom of this fundamental principle of political design, which runs throughout the U.S. Constitution, can be observed in full force in the current antitrust landscape, in which the federal courts have acted as a bulwark against several contestable enforcement actions by antitrust regulators.

In three headline cases brought by the Department of Justice or the Federal Trade Commission since 2017, the prosecutorial bench has struck out in court. Under the exacting scrutiny of the judiciary, government litigators failed to present sufficient evidence that a dominant firm had engaged in practices that caused, or were likely to cause, significant anticompetitive effects. In each case, these enforcement actions, applauded by policymakers and commentators who tend to follow “big is bad” intuitions, foundered when assessed in light of judicial precedent, the factual record, and the economic principles embedded in modern antitrust law. An ongoing suit, filed by the FTC this year after more than 18 months since the closing of the targeted acquisition, exhibits similar factual and legal infirmities.

Strike 1: The AT&T/Time-Warner Transaction

In response to the announcement of AT&T’s $85.4 billion acquisition of Time Warner, the DOJ filed suit in 2017 to prevent the formation of a dominant provider in home-video distribution that would purportedly deny competitors access to “must-have” content. As I have observed previously, this theory of the case suffered from two fundamental difficulties. 

First, content is an abundant and renewable resource, so it is hard to see how AT&T+TW could meaningfully foreclose competitors’ access to this necessary input. Even in the hypothetical case of potentially “must-have” content, it was unclear whether it would be economically rational for post-acquisition AT&T regularly to deny access to other distributors, given that doing so would imply an immediate and significant loss in licensing revenues without any clearly offsetting future gain in revenues from new subscribers.

Second, home-video distribution is a market lapsing rapidly into obsolescence as content monetization shifts from home-based viewing to a streaming environment in which consumers expect “anywhere, everywhere” access. The blockbuster acquisition was probably best understood as a necessary effort to adapt to this new environment (already populated by several major streaming platforms), rather than an otherwise puzzling strategy to spend billions to capture a market on the verge of commercial irrelevance. 

Strike 2: The Sabre/Farelogix Acquisition

In 2019, the DOJ filed suit to block the $360 million acquisition of Farelogix by Sabre, one of three leading airline booking platforms, on the ground that it would substantially lessen competition. The factual basis for this legal diagnosis was unclear. In 2018, Sabre earned approximately $3.9 billion in worldwide revenues, compared to $40 million for Farelogix. Given this drastic difference in scale, and the almost trivial share attributable to Farelogix, it is difficult to fathom how the DOJ could credibly assert that the acquisition “would extinguish a crucial constraint on Sabre’s market power.” 

To use a now much-discussed theory of antitrust liability, it might nonetheless be argued that Farelogix posed a “nascent” competitive threat to the Sabre platform. That is: while Farelogix is small today, it may become big enough tomorrow to pose a threat to Sabre’s market leadership. 

But that theory runs straight into a highly inconvenient fact. Farelogix was founded in 1998 and, during the ensuing two decades, had neither achieved broad adoption of its customized booking technology nor succeeded in offering airlines a viable pathway to bypass the three major intermediary platforms. The proposed acquisition therefore seems best understood as a mutually beneficial transaction in which a smaller (and not very nascent) firm elects to monetize its technology by embedding it in a leading platform that seeks to innovate by acquisition. Robust technology ecosystems do this all the time, efficiently exploiting the natural complementarities between a smaller firm’s “out of the box” innovation and the capital-intensive infrastructure of an incumbent. (Postscript: While the DOJ lost this case in federal court, Sabre elected in May 2020 not to close, following similarly puzzling opposition by British competition regulators.) 

Strike 3: FTC v. Qualcomm

The divergence of theories of anticompetitive risk from market realities is vividly illustrated by the landmark suit filed by the FTC in 2017 against Qualcomm. 

The litigation pursued nothing less than a wholesale reengineering of the IP licensing relationships between innovators and implementers that underlie the global smartphone market. Those relationships principally consist of device-level licenses between IP innovators such as Qualcomm and device manufacturers and distributors such as Apple. This structure efficiently collects remuneration from the downstream segment of the supply chain for upstream firms that invest in pushing forward the technology frontier. The FTC thought otherwise and pursued a remedy that would have required Qualcomm to offer licenses to its direct competitors in the chip market and to rewrite its existing licenses with device producers and other intermediate users on a component, rather than device, level. 

Remarkably, these drastic forms of intervention into private-ordering arrangements rested on nothing more than what former FTC Commissioner Maureen Ohlhausen once appropriately called a “possibility theorem.” The FTC deployed a mostly theoretical argument that Qualcomm had extracted an “unreasonably high” royalty that had potentially discouraged innovation, impeded entry into the chip market, and inflated retail prices for consumers. Yet these claims run contrary to all available empirical evidence, which indicates that, since its inception, the mobile wireless device market has exhibited declining quality-adjusted prices, increasing output, robust entry into the production market, and continuous innovation. The mismatch between the government’s theory of market failure and the actual record of market success over more than two decades challenges the policy wisdom of disrupting hundreds of existing contractual arrangements between IP licensors and licensees in a thriving market. 

The FTC nonetheless secured from the district court a sweeping order that would have had precisely this disruptive effect, including imposing a “duty to deal” that would have required Qualcomm to license directly its competitors in the chip market. The Ninth Circuit stayed the order and, on August 11, 2020, issued an unqualified reversal, stating that the lower court had erroneously conflated “hypercompetitive” (good) with anticompetitive (bad) conduct and observing that “[t]hroughout its analysis, the district court conflated the desire to maximize profits with an intent to ‘destroy competition itself.’” In unusually direct language, the appellate court also observed (as even the FTC had acknowledged on appeal) that the district court’s ruling was incompatible with the Supreme Court’s ruling in Aspen Skiing Co. v. Aspen Highlands Skiing Corp., which strictly limits the circumstances in which a duty to deal can be imposed. In some cases, it appears that additional levels of judicial review are necessary to protect antitrust law against not only administrative but judicial overreach.

Axon v. FTC

For the most explicit illustration of the interface between Montesquieu’s principle of divided government and the risk posed to antitrust law by cases of prosecutorial excess, we can turn to an unusual and ongoing litigation, Axon v. FTC.

The HSR Act and Post-Consummation Merger Challenges

The HSR Act provides regulators with the opportunity to preemptively challenge acquisitions and related transactions on antitrust grounds before those transactions have been consummated. Since its enactment in 1976, this statutory innovation has laudably increased dealmakers’ ability to close transactions with a high level of certainty that regulators will not belatedly seek to “unscramble the egg.” While the HSR Act does not foreclose this contingency, since a regulator’s failure to challenge a transaction indicates only its current enforcement intentions, it is probably fair to say that M&A dealmakers generally assume that regulators would reverse course only in exceptional circumstances. In turn, the low prospect of after-the-fact regulatory intervention encourages the efficient use of M&A transactions for the purpose of shifting corporate assets to the users that value those assets most highly.

The FTC’s Belated Attack on the Axon/Vievu Acquisition

Dealmakers may be revisiting that understanding in the wake of the FTC’s decision in January 2020 to challenge the acquisition of Vievu by Axon, each being a manufacturer of body-worn camera equipment and related data-management software for law enforcement agencies. The acquisition had closed in May 2018 but had not been reported through HSR since it fell well below the reportable deal threshold. Given a total transaction value of $7 million, the passage of more than 18 months since closing, and the insolvency or near-insolvency of the target company, it is far from obvious that the Axon acquisition posed a material competitive risk that merits unsettling expectations that regulators will typically not challenge a consummated transaction, especially in the case of what is a micro-sized nebula in the M&A universe. 

These concerns are heightened by the fact that the FTC suit relies on a debatably narrow definition of the relevant market (body-camera equipment and related “cloud-based” data management software for police departments in large metropolitan areas, rather than a market that encompassed more generally defined categories of body-worn camera equipment, law enforcement agencies, and data management services). Even within this circumscribed market, there are apparently several companies that offer related technologies and an even larger group that could plausibly enter in response to perceived profit opportunities. Despite this contestable legal position, Axon’s court filing states that the FTC offered to settle the suit on stiff terms: Axon must agree to divest itself of the Vievu assets and to license all of Axon’s pre-transaction intellectual property to the buyer of the Vievu assets. This effectively amounts to an opportunistic use of the antitrust merger laws to engage in post-transaction market reengineering, rather than merely blocking an acquisition to maintain the pre-transaction status quo.

Does the FTC Violate the Separation of Powers?

In a provocative strategy, Axon has gone on the offensive and filed suit in federal district court to challenge on constitutional grounds the long-standing internal administrative proceeding through which the FTC’s antitrust claims are initially adjudicated. Unlike the DOJ, the FTC’s first stop in the litigation process (absent settlement) is not a federal district court but an internal proceeding before an administrative law judge (“ALJ”), whose ruling can then be appealed to the Commission. Axon is effectively arguing that this administrative internalization of the judicial function violates the separation of powers principle as implemented in the U.S. Constitution. 

On a clean slate, Axon’s claim is eminently reasonable. The fact that FTC-paid personnel sit on both sides of the internal adjudicative process, as prosecutor (the FTC litigation team) and as judge (the ALJ and the Commissioners), locates the executive and judicial functions in the hands of a single administrative entity. (To be clear, the Commission’s rulings are appealable to federal court, albeit at significant cost and delay.) In any event, a court presented with Axon’s claim—as of this writing, the Ninth Circuit (taking the case on appeal by Axon)—is not writing on a clean slate and is most likely reluctant to accept a claim that would trigger challenges to the legality of other, similarly structured adjudicative processes at other agencies. Nonetheless, Axon’s argument does raise important concerns as to whether certain elements of the FTC’s adjudicative mechanism (as distinguished from the very existence of that mechanism) could be refined to mitigate the conflicts of interest that arise in its current form.

Conclusion

Antitrust vigilance certainly has its place, but it also has its limits. Given the aspirational language of the antitrust statutes and the largely unlimited structural remedies to which an antitrust litigation can lead, there is an inevitable risk of prosecutorial overreach that can betray the fundamental objective to protect consumer welfare. Applied to the antitrust context, the separation of powers principle mitigates this risk by subjecting enforcement actions to judicial examination, which is in turn disciplined by the constraints of appellate review and stare decisis. A rich body of federal case law implements this review function by anchoring antitrust in a decisionmaking framework that promotes the public’s interest in deterring business practices that endanger the competitive process behind a market-based economy. As illustrated by the recent string of failed antitrust suits, and the ongoing FTC litigation against Axon, that same decisionmaking framework can also protect the competitive process against regulatory practices that pose this same type of risk.

Recently published emails from 2012 between Mark Zuckerberg and Facebook’s then-Chief Financial Officer David Ebersman, in which Zuckerberg lays out his rationale for buying Instagram, have prompted many to speculate that the deal might not have been cleared had antitrust agencies had access to Facebook’s internal documents at the time.

The issue is Zuckerberg’s description of Instagram as a nascent competitor and potential threat to Facebook:

These businesses are nascent but the networks established, the brands are already meaningful, and if they grow to a large scale they could be very disruptive to us. Given that we think our own valuation is fairly aggressive and that we’re vulnerable in mobile, I’m curious if we should consider going after one or two of them. 

Ebersman objects that a new rival would just enter the market if Facebook bought Instagram. In response, Zuckerberg wrote:

There are network effects around social products and a finite number of different social mechanics to invent. Once someone wins at a specific mechanic, it’s difficult for others to supplant them without doing something different.

These email exchanges may not paint a particularly positive picture of Zuckerberg’s intent in doing the merger, and it is possible that, at the time, they would have prompted antitrust agencies to scrutinize the merger more carefully. But they do not tell us that the acquisition was ultimately harmful to consumers, or anything about the counterfactual in which the merger was blocked. While we know that Instagram became enormously popular in the years following the merger, it is not clear that it would have been just as successful without the deal, or that Facebook and its other products would be less popular today. 

Moreover, this line of speculation fails to account for the fact that Facebook had the resources to quickly scale Instagram up to a level that provided immediate benefits to an enormous number of users, instead of waiting for the app to potentially reach such scale organically. 

The rationale

Writing for Pro Market, Randy Picker argued that these emails hint that the acquisition was essentially about taking out a nascent competitor:

Buying Instagram really was about controlling the window in which the Instagram social mechanic invention posed a risk to Facebook … Facebook well understood the competitive risk posed by Instagram and how purchasing it would control that risk.

This is a plausible interpretation of the internal emails, although there are others. For instance, Zuckerberg also seems to say that the purpose is to use Instagram to improve Facebook to make it good enough to fend off other entrants:

If we incorporate the social mechanics they were using, those new products won’t get much traction since we’ll already have their mechanics deployed at scale. 

If this was the rationale, rather than simply trying to kill a nascent competitor, it would be pro-competitive. It is good for consumers if a product makes itself better to beat its rivals by acquiring undervalued assets to deploy them at greater scale and with superior managerial efficiency, even if the acquirer hopes that in doing so it will prevent rivals from ever gaining significant market share. 

Further, despite popular characterization, on its face the acquisition was not about trying to destroy a consumer option, but only to ensure that Facebook was competitively viable in providing that option. Another reasonable interpretation of the emails is that Facebook was wrestling with the age-old make-or-buy dilemma faced by every firm at some point or another. 

Was the merger anticompetitive?

But let us assume that eliminating competition from Instagram was indeed the merger’s sole rationale. Would that necessarily make it anticompetitive?  

Chief among the objections is that both Facebook and Instagram are networked goods. Their value to each user depends, to a significant extent, on the number (and quality) of other people using the same platform. Many scholars have argued that this can create self-reinforcing dynamics where the strong grow stronger – though such an outcome is certainly not a given, since other factors about the service matter too, and networks can suffer from diseconomies of scale as well, where new users reduce the quality of the network.
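
The tipping logic described above, and its limits, can be sketched with a toy numerical model (not from the post; all parameters are hypothetical and purely illustrative). The idea is that each additional user adds value to every other user, while a congestion term stands in for diseconomies of scale:

```python
# Toy model of platform value under network effects (illustrative only).
# Per-user value: each of the other (n - 1) users contributes `benefit`;
# a quadratic `congestion` term stands in for diseconomies of scale.

def platform_value(n, benefit=1.0, congestion=0.0):
    """Hypothetical per-user value of a platform with n users."""
    return benefit * (n - 1) - congestion * n * n

# With pure network effects, the bigger platform is always more attractive,
# so user choices are self-reinforcing and the market tends to tip:
assert platform_value(1000) > platform_value(100)

# With even a small congestion term, a smaller rival can offer higher
# per-user value, so "the strong grow stronger" is not a given:
assert platform_value(1000, congestion=0.001) < platform_value(100, congestion=0.001)
```

This is only a sketch of one mechanism; real platforms compete on quality, features, and differentiated business models as well, which is precisely why tipping is not inevitable.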

This network effects point is central to the reasoning of those who oppose the merger: Facebook purportedly acquired Instagram because Instagram’s network had grown large enough to be a threat. With Instagram out of the picture, Facebook could thus take on the remaining smaller rivals with the advantage of its own much larger installed base of users. 

However, this network tipping argument could cut both ways. It is plausible that the proper counterfactual was not duopoly competition between Facebook and Instagram, but a world in which either Facebook or Instagram came to offer both firms’ features, only later. In other words, a possible framing of the merger is that it merely accelerated the cross-pollination of social mechanics between Facebook and Instagram, something that would likely prove beneficial to consumers.

This finds some support in Mark Zuckerberg’s reply to David Ebersman:

Buying them would give us the people and time to integrate their innovations into our core products.

The exchange between Zuckerberg and Ebersman also suggests another pro-competitive justification: bringing Instagram’s “social mechanics” to Facebook’s much larger network of users. We can only speculate about what ‘social mechanics’ Zuckerberg actually had in mind, but at the time Facebook’s photo sharing functionality was largely based around albums of unedited photos, whereas Instagram’s core product was a stream of filtered, cropped single images. 

Zuckerberg’s plan to gradually bring these features to Facebook’s users – as opposed to them having to familiarize themselves with an entirely different platform – would likely cut in favor of the deal being cleared by enforcers.

Another possibility is that the truly valuable asset was Instagram’s network of creators – the people who had begun to use Instagram as a new medium, distinct from the generic photo albums Facebook offered, and who would eventually come to be known as ‘influencers’. Bringing them onto the Facebook platform would undoubtedly increase its value to regular users. For example, Kim Kardashian, one of Instagram’s most popular users, joined the service in February 2012, two months before the deal went through, and she was not the first such person to adopt Instagram in this way. The importance of a service’s most creative users remains visible today, as Facebook tries to pay TikTok creators to move to Reels, its TikTok clone.

But if this was indeed the rationale, not only is this a sign of a company in the midst of fierce competition – rather than one on the cusp of acquiring a monopoly position – but, more fundamentally, it suggests that Facebook was always going to come out on top. Or at least it thought so.

The benefit of hindsight

Today’s commentators have the benefit of hindsight. This inherently biases contemporary takes on the Facebook/Instagram merger. For instance, it seems almost self-evident with hindsight that Facebook would succeed and that entry in the social media space would only occur at the fringes of existing platforms (the combined Facebook/Instagram platform) – think of the emergence of TikTok. However, at the time of the merger, such an outcome was anything but a foregone conclusion.

For instance, critics argue that Instagram no longer competes with Facebook because of the merger. However, it is equally plausible that Instagram only became so successful because of its combination with Facebook (notably thanks to the addition of Facebook’s advertising platform, and the rapid rollout of a stories feature in response to Snapchat’s rise). Indeed, Instagram grew from roughly 24 million users at the time of the acquisition to over 1 billion users in 2018, and it earned zero revenue at the time of the merger. This might explain why the acquisition was widely derided at the time.

This is critical from an antitrust perspective. Antitrust enforcers adjudicate merger proceedings in the face of extreme uncertainty. Enforcers and courts must make educated guesses about the probability of each possible outcome – including the counterfactual – assigning likelihoods to potential anticompetitive harms, merger efficiencies, and so on.
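A toy sketch in Python can make this probabilistic framing concrete. Every probability and welfare figure below is hypothetical, invented solely to show how an enforcer’s educated guesses might be aggregated into an expected-value comparison:

```python
# Purely illustrative: expected-welfare arithmetic for a merger decision
# under uncertainty. All probabilities and payoffs are hypothetical.

def expected_welfare(outcomes):
    """Probability-weighted sum of consumer-welfare outcomes."""
    return sum(p * w for p, w in outcomes)

# Hypothetical 2012-era assessment: (probability, welfare effect) pairs.
clear_merger = [
    (0.70, +10),   # efficiencies realized, features cross-pollinate
    (0.25,   0),   # merger fizzles, no effect either way
    (0.05, -40),   # acquired firm was the one future disruptor
]

block_merger = [
    (0.10, +40),   # Instagram grows into a genuine rival of Facebook
    (0.90,   0),   # Instagram stalls without Facebook's resources
]

print(round(expected_welfare(clear_merger), 2))  # 5.0
print(round(expected_welfare(block_merger), 2))  # 4.0
```

On these made-up numbers the comparison is close either way – which is the point: the decision turns entirely on probability estimates that were unknowable in 2012.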

Authorities at the time of the merger could not ignore these uncertainties. What was the likelihood that a company with a fraction of Facebook’s users (24 million to Facebook’s 1 billion), and worth $1 billion, could grow to threaten Facebook’s market position? At the time, the answer seemed to be “very unlikely”. Moreover, how could authorities know that Google+ (Facebook’s strongest competitor at the time) would fail? These outcomes were not just hard to ascertain, they were simply unknowable.

Of course, this is precisely what neo-Brandeisian antitrust scholars object to today: among the many seemingly innocuous big tech acquisitions that are permitted each year, there is bound to be at least one acquired firm that might have been a future disruptor. True as this may be, identifying that one successful company among all the others is the antitrust equivalent of finding a needle in a haystack. Instagram simply did not fit that description at the time of the merger. Such a stance also ignores the very real benefits that may arise from such acquisitions.

Closing remarks

While it is tempting to reassess the Facebook/Instagram merger in light of new revelations, such an undertaking is not without pitfalls. Hindsight bias is perhaps the most obvious, but the difficulties run deeper.

If we think that the Facebook/Instagram merger has been and will continue to be good for consumers, it would be strange to think that we should nevertheless break them up because we discovered that Zuckerberg had intended to do things that would harm consumers. Conversely, if you think a breakup would be good for consumers today, would it change your mind if you discovered that Mark Zuckerberg had the intentions of an angel when he went ahead with the merger in 2012, or that he had angelic intent today?

Ultimately, merger review involves making predictions about the future. While it may be reasonable to take the intentions of the merging parties into consideration when making those predictions (although it’s not obvious that we should), these are not the only or best ways to determine what the future will hold. As Ebersman himself points out in the emails, history is filled with over-optimistic mergers that failed to deliver benefits to the merging parties. That this one succeeded beyond the wildest dreams of everyone involved – except maybe Mark Zuckerberg – does not tell us that competition agencies should have ruled on it differently.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Noah Phillips[1] (Commissioner of the U.S. Federal Trade Commission).]   

Never let a crisis go to waste, or so they say. In the past two weeks, some of the same people who sought to stop mergers and acquisitions during the bull market have taken the opportunity of the COVID-19 pandemic and the new bear market to call for a ban on M&A. On Friday, April 24th, Rep. David Cicilline proposed that a merger ban be included in the next COVID-19-related congressional legislative package.[2] By Monday, Senator Elizabeth Warren and Rep. Alexandria Ocasio-Cortez, warning of “predatory” M&A and private equity “vultures,” had teamed up with a similar proposal.[3] 

I’m all for stopping anticompetitive M&A that we cannot resolve. In the past few months alone, the Federal Trade Commission has been quite busy, suing to stop transactions in the hospital, e-cigarette, coal, body-worn camera, razor, and gene sequencing industries, and forcing deals to stop in the pharmaceutical, medical staffing, and consumer products spaces. But is a blanket ban, unprecedented in our nation’s history, warranted, now? 

The theory that the pandemic requires the government to shut down M&A goes something like this: the antitrust agencies are overwhelmed and cannot do the job of reviewing mergers under the Hart-Scott-Rodino (HSR) Act, which gives the U.S. antitrust agencies advance notice of certain transactions and 30 days to decide whether to seek more information about them.[4] That state of affairs will, in turn, invite a rush of companies looking to merge with minimal oversight, exacerbating the problem by flooding the premerger notification office (PNO) with new filings. Another version holds, along similar lines, that the precipitous decline in the market will precipitate a merger “wave” in which “dominant corporations” and “private equity vultures” will gobble up defenseless small businesses. Net result: anticompetitive transactions go unnoticed and unchallenged. That’s the theory, at least as it has been explained to me. The facts are different.

First, while the restrictions related to COVID-19 require serious adjustments at the antitrust agencies just as they do at workplaces across the country (we’re working from home, dealing with remote technology, and handling kids just like the rest), merger review continues. Since we started teleworking, the FTC has, among other things, challenged Altria’s $12.8 billion investment in JUUL’s e-cigarette business and resolved competitive concerns with GE’s sale of its biopharmaceutical business to Danaher and Ossur’s acquisition of a competing prosthetic limbs manufacturer, College Park. With our colleagues at the Antitrust Division of the Department of Justice, we announced a new e-filing system for HSR filings and temporarily suspended granting early termination. We sought voluntary extensions from companies. But, in less than two weeks, we were able to resume early termination—back to “new normal”, at least. I anticipate there may be additional challenges; and the FTC will assess constraints in real-time to deal with further disruptions. But we have not sacrificed the thoroughness of our investigations; and we will not.

Second, there is no evidence of a merger “wave,” or that the PNO is overwhelmed with HSR filings. To the contrary, according to Bloomberg, monthly M&A volume hit rock bottom in April – the lowest since 2004. As of last week, the PNO estimates a nearly 60% reduction in HSR-reported transactions during the past month, compared to the historical average. Press reports indicate that M&A activity is down dramatically because of the crisis. Xerox recently announced it was suspending its hostile bid for Hewlett-Packard ($30 billion); private equity firm Sycamore Partners announced it is walking away from its takeover of Victoria’s Secret ($525 million); and Boeing announced it is backing out of its merger with Embraer ($4.2 billion) — just a few examples of companies, large corporations and private equity firms alike, stopping M&A on their own. (The market is funny like that.)

Slowed M&A during a global pandemic and economic crisis is exactly what you would expect. The financial uncertainty facing companies lowers shareholder and board confidence to dive into a new acquisition or sale. Financing is harder to secure. Due diligence is postponed. Management meetings are cancelled. Agreeing on price is another big challenge. The volatility in stock prices makes valuation difficult and lessens the value of equity used as acquisition currency. Cash is needed elsewhere, such as to pay workers and keep operations running. Lack of access to factories and other assets as a result of travel restrictions and stay-at-home orders similarly makes valuation harder. Management can’t even get in a room to negotiate and hammer out the deal because of social distancing (driving a hard bargain on Zoom may not be the same).

Experience bears out those expectations. Consider our last bear market, the financial crisis that took place over a decade ago. Publicly available FTC data show the number of HSR reported transactions dropped off a cliff. During fiscal year 2009, the height of the crisis, HSR reported transactions were down nearly 70% compared to just two years earlier, in fiscal year 2007. Not surprising.

Source: https://www.ftc.gov/site-information/open-government/data-sets

Nor should it be surprising that the current crisis, with all its uncertainty and novelty, appears itself to be slowing down M&A.

So, the antitrust agencies are continuing merger review, and adjusting quickly to the new normal. M&A activity is down, dramatically, on its own. That makes the pandemic an odd excuse to stop M&A. Maybe the concern wasn’t really about the pandemic in the first place? The difference in perspective may depend on one’s general view of the value of M&A. If you think mergers are mostly (or all) bad, and you discount the importance of the market for corporate control, the cost to stopping them all is low. If you don’t, the cost is high.[5]

As a general matter, decades of research and experience tell us that the vast majority of mergers are either pro-competitive or competitively-neutral.[6] But M&A, even dramatically-reduced, also has an important role to play in a moment of economic adjustment. It helps allocate assets in an efficient manner, for example giving those with the wherewithal to operate resources (think companies, or plants) an opportunity that others may be unable to utilize. Consumers benefit if a merger leads to the delivery of products or services that one company could not efficiently provide on its own, and from the innovation and lower prices that better management and integration can provide. Workers benefit, too, as they remain employed by going concerns.[7] It serves no good, including for competition, to let companies that might live, die.[8]

M&A is not the only way in which market forces can help. The antitrust agencies have always recognized pro-competitive benefits to collaboration between competitors during times of crisis.  In 2005, after hurricanes Katrina and Rita, we implemented an expedited five-day review of joint projects between competitors aimed at relief and construction. In 2017, after hurricanes Harvey and Irma, we advised that hospitals could combine resources to meet the health care needs of affected communities and companies could combine distribution networks to ensure goods and services were available. Most recently, in response to the current COVID-19 emergency, we announced an expedited review process for joint ventures. Collaboration can be concerning, so we’re reviewing; but it can also help.

Our nation is going through an unprecedented national crisis, with a horrible economic component that is putting tens of millions out of work and causing a great deal of suffering. Now is a time of great uncertainty, tragedy, and loss; but also of continued hope and solidarity. While merger review is not the top-of-mind issue for many—and it shouldn’t be—American consumers stand to gain from pro-competitive mergers, during and after the current crisis. Those benefits would be wiped out with a draconian ‘no mergers’ policy during the COVID-19 emergency. Might there be anticompetitive merger activity? Of course, which is why FTC staff are working hard to vet potentially anticompetitive mergers and prevent harm to consumers. Let’s let them keep doing their jobs.


[1] The views expressed in this blog post are my own and do not necessarily reflect the views of the Federal Trade Commission or any other commissioner. An abbreviated version of this essay was previously published in the New York Times’ DealBook newsletter. Noah Phillips, The case against banning mergers, N.Y. Times, Apr. 27, 2020, available at https://www.nytimes.com/2020/04/27/business/dealbook/small-business-ppp-loans.html.

[2] The proposal would allow transactions only if a company is already in bankruptcy or is otherwise about to fail.

[3] The “Pandemic Anti-Monopoly Act” proposes a merger moratorium on (1) firms with over $100 million in revenue or market capitalization of over $100 million; (2) PE firms and hedge funds (or entities that are majority-owned by them); (3) businesses that have an exclusive patent on products related to the crisis, such as personal protective equipment; and (4) all HSR reportable transactions.

[4] Hart-Scott-Rodino Antitrust Improvements Act of 1976, 15 U.S.C. § 18a. The antitrust agencies can challenge transactions after they happen, but they are easier to stop beforehand; and Congress designed HSR to give us an opportunity to do so.

[5] Whatever your view, the point is that the COVID-19 crisis doesn’t make sense as a justification for banning M&A. If ban proponents oppose M&A generally, they should come out and say that. And they should level with the public about just how much they propose to ban. The specifics of the proposals are beyond the scope of this essay, but it’s worth noting that the “large companies [gobbling] up . . . small businesses” of which Sen. Warren warns include any firm with $100 million in annual revenue and anyone making a transaction reportable under HSR. $100 million seems like a lot of money to many of us, but the Ohio State University National Center for the Middle Market defines a mid-sized company as having annual revenues between $10 million and $1 billion. Many if not most of the transactions that would be banned look nothing like the kind of acquisitions ban proponents are describing.

[6] As far back as the 1980s, the Horizontal Merger Guidelines reflected this idea, stating: “While challenging competitively harmful mergers, the Department [of Justice Antitrust Division] seeks to avoid unnecessary interference with the larger universe of mergers that are either competitively beneficial or neutral.” Horizontal Merger Guidelines (1982); see also Hovenkamp, Appraising Merger Efficiencies, 24 Geo. Mason L. Rev. 703, 704 (2017) (“we tolerate most mergers because of a background, highly generalized belief that most—or at least many—do produce cost savings or improvements in products, services, or distribution”); Andrade, Mitchell & Stafford, New Evidence and Perspectives on Mergers, 15 J. ECON. PERSPECTIVES 103, 117 (2001) (“We are inclined to defend the traditional view that mergers improve efficiency and that the gains to shareholders at merger announcement accurately reflect improved expectations of future cash flow performance.”).

[7] Jointly with our colleagues at the Antitrust Division of the Department of Justice, we issued a statement last week affirming our commitment to enforcing the antitrust laws against those who seek to exploit the pandemic to engage in anticompetitive conduct in labor markets.

[8] The legal test to make such a showing for an anti-competitive transaction is high. Known as the “failing firm defense”, it is available only to firms that can demonstrate their fundamental inability to compete effectively in the future. The Horizontal Merger Guidelines set forth three elements to establish the defense: (1) the allegedly failing firm would be unable to meet its financial obligations in the near future; (2) it would not be able to reorganize successfully under Chapter 11; and (3) it has made unsuccessful good-faith efforts to elicit reasonable alternative offers that would keep its tangible and intangible assets in the relevant market and pose a less severe danger to competition than the actual merger. Horizontal Merger Guidelines § 11; see also Citizen Publ’g v. United States, 394 U.S. 131, 137-38 (1969). The proponent of the failing firm defense bears the burden to prove each element, and failure to prove a single element is fatal. In re Otto Bock, FTC No. 171-0231, Docket No. 9378 Commission Opinion (Nov. 2019) at 43; see also Citizen Publ’g, 394 U.S. at 138-39.

This guest post is by Jonathan M. Barnett, Torrey H. Webb Professor of Law, University of Southern California Gould School of Law.

It has become virtual received wisdom that antitrust law has been subdued by economic analysis into a state of chronic underenforcement. Following this line of thinking, many commentators applauded the Antitrust Division’s unsuccessful campaign to oppose the acquisition of Time-Warner by AT&T and some (unsuccessfully) urged the Division to take stronger action against the acquisition of most of Fox by Disney. The arguments in both cases followed a similar “big is bad” logic. Consolidating control of a large portfolio of creative properties (Fox plus Disney) or integrating content production and distribution capacities (Time-Warner plus AT&T) would exacerbate market concentration, leading to reduced competition and some combination of higher prices and reduced product for consumers. 

Less than 18 months after the closing of both transactions, those concerns seem to have been largely unwarranted. 

Far from precipitating any decline in product output or variety, both transactions have been followed by a vigorous burst of competition in the digital streaming market. In place of the Amazon-plus-Netflix bottleneck (with Hulu trailing behind), consumers now have, or in 2020 will have, a choice of at least four new streaming services with original content: Disney+, AT&T’s “HBO Max,” Apple’s “Apple TV+,” and Comcast’s NBCUniversal “Peacock.” Critically, each service relies on a formidable combination of creative, financing, and technological capacities that can only be delivered by a firm of sufficiently large size and scale. As modern antitrust law has long recognized, it turns out that “big” is sometimes not bad.

Where’s the Harm?

At present, it is hard to see any net consumer harm arising from the concurrence of increased size and increased competition. 

On the supply side, this is just the next episode in the ongoing “Golden Age of Television” in which content producers have enjoyed access to exceptional funding to support high-value productions.  It has been reported that Apple TV+’s new “Morning Show” series will cost $15 million per episode while similar estimates are reported for hit shows such as HBO’s “Game of Thrones” and Netflix’s “The Crown.”  Each of those services is locked in a fierce competition to gain and retain sufficient subscribers to earn a return on those investments, which leads directly to the next happy development.

On the demand side, consumers enjoy a proliferating array of streaming services, ranging from free ad-supported services to subscription ad-free services. Consumers can now easily “cut the cord” and assemble a customized bundle of preferred content from multiple services, each of which is less costly than a traditional cable package and can generally be cancelled at any time.  Current market performance does not plausibly conform to the declining output, limited variety or increasing prices that are the telltale symptoms of a less than competitive market.

Real-World v. Theoretical Markets

The market’s favorable trajectory following these two controversial transactions should not be surprising. When scrutinized against the actual characteristics of real-world digital content markets, rather than stylized theoretical models or antiquated pre-digital content markets, the arguments leveled against these transactions never made much sense. There were two fundamental and related errors. 

Error #1: Content is Scarce

Advocates for antitrust intervention assumed that entry barriers into the content market were high, in which case it followed that the owner of an especially valuable creative portfolio could exert pricing power to consumers’ detriment. Yet, in reality, funding for content production is plentiful and even a service that has an especially popular show is unlikely to have sustained pricing power in the face of a continuous flow of high-value productions being released by formidable competitors. The amounts being spent on content in 2019 by leading streaming services are unprecedented, ranging from a reported $15 billion for Netflix to an estimated $6 billion for Amazon and Apple TV+ to an estimated $3.9 billion for AT&T’s HBO Max. It is also important to note that a hit show is often a mobile asset that a streaming or other video distribution service has licensed from independent production companies and other rights holders. Once the existing deal expires, those rights are available for purchase by the highest bidder. For example, in 2019, Netflix purchased the streaming rights to “Seinfeld”, Viacom purchased the cable rights to “Seinfeld”, and HBO Max purchased the streaming rights to “South Park.” Similarly, the producers behind a hit show are always free to take their talents to competitors once any existing agreement terminates.

Error #2: Home Pay-TV is a “Monopoly”

Advocates of antitrust action were looking at the wrong market – or, more precisely, the market as it existed about a decade ago. The theory that AT&T’s acquisition of Time-Warner’s creative portfolio would translate into pricing power in the home pay-TV market might have been plausible when consumers had no reasonable alternative to the local cable provider. But this argument makes little sense today, when consumers are fleeing bulky home pay-TV bundles for cheaper cord-cutting options that deliver more targeted content packages to a mobile device. In 2019, the “home” pay-TV market is fast becoming an anachronism, and hence a home pay-TV “monopoly” largely reduces to a formalism that, with the possible exception of certain live programming, is unlikely to translate into meaningful pricing power. 

Wait a Second! What About the HBO Blackout?

A skeptical reader might reasonably object that this mostly rosy account of the post-merger home video market is unpersuasive since it does not address the ongoing blackout of HBO (now an AT&T property) on the Dish satellite TV service. Post-merger commentary that remains skeptical of the AT&T/Time-Warner merger has focused on this dispute, arguing that it “proves” that the government was right since AT&T is purportedly leveraging its new ownership of HBO to disadvantage one of its competitors in the pay-TV market. This interpretation tends to miss the forest for the trees (or more precisely, a tree).  

The AT&T/Dish dispute over HBO is only one of over 200 “carriage” disputes resulting in blackouts that have occurred this year, which continues an upward trend since approximately 2011. Some of those include Dish’s dispute with Univision (settled in March 2019 after a nine-month blackout) and AT&T’s dispute (as pay-TV provider) with Nexstar (settled in August 2019 after a nearly two-month blackout). These disputes reflect the fact that the flood of subscriber defections from traditional pay-TV to mobile streaming has made it difficult for pay-TV providers to pass on the fees sought by content owners. As a result, some pay-TV providers adopt the negotiating tactic of choosing to drop certain content until the terms improve, just as AT&T, in its capacity as a pay-TV provider, dropped CBS for three weeks in July and August 2019 pending renegotiation of licensing terms. It is the outward shift in the boundaries of the economically relevant market (from home to home-plus-mobile video delivery), rather than market power concerns, that best accounts for periodic breakdowns in licensing negotiations.  This might even be viewed positively from an antitrust perspective since it suggests that the “over the top” market is putting pressure on the fees that content owners can extract from providers in the traditional pay-TV market.

Concluding Thoughts

It is common to argue today that antitrust law has become excessively concerned about “false positives”– that is, the possibility of blocking a transaction or enjoining a practice that would have benefited consumers. Pending future developments, this early post-mortem on the regulatory and judicial treatment of these two landmark media transactions suggests that there are sometimes good reasons to stay the hand of the court or regulator. This is especially the case when a generational market shift is in progress and any regulator’s or judge’s foresight is likely to be guesswork. Antitrust law’s “failure” to stop these transactions may turn out to have been a ringing success.

Will the merger between T-Mobile and Sprint make consumers better or worse off? A central question in the review of this merger—as it is in all merger reviews—is the likely effects that the transaction will have on consumers. In this post, we look at one study that opponents of the merger have been using to support their claim that the merger will harm consumers.

Along with my earlier posts on data problems and public policy (1, 2, 3, 4, 5), this provides an opportunity to explore why seemingly compelling studies can be used to muddy the discussion and fool observers into seeing something that isn’t there.

This merger—between the third and fourth largest mobile wireless providers in the United States—has been characterized as a “4-to-3” merger, on the grounds that it will reduce the number of large, ostensibly national carriers from four to three. This, in turn, has led to concerns that further concentration in the wireless telecommunications industry will harm consumers. Specifically, some opponents of the merger claim that “it’s going to be hard for someone to make a persuasive case that reducing four firms to three is actually going to improve competition for the benefit of American consumers.”

A number of previous mergers around the world can or have also been characterized as 4-to-3 mergers in the wireless telecommunications industry. Several econometric studies have attempted to evaluate the welfare effects of 4-to-3 mergers in other countries, as well as the effects of market concentration in the wireless industry more generally. These studies have been used by both proponents and opponents of the proposed merger of T-Mobile and Sprint to support their respective contentions that the merger will benefit or harm consumer welfare.

One particular study has risen to prominence among opponents of 4-to-3 mergers in telecom generally, and of the T-Mobile/Sprint merger specifically. This is worrying, because the study has several fundamental flaws. 

This study, by Finnish consultancy Rewheel, has been cited by, among others, Phillip Berenbroick of Public Knowledge, who, in Senate testimony, asserted that “Rewheel found that consumers in markets with three facilities-based providers paid twice as much per gigabyte as consumers in four firm markets.”

The Rewheel report upon which Mr. Berenbroick relied is, however, marred by a number of significant flaws that undermine its usefulness.

The Rewheel report

Rewheel’s report purports to analyze the state of 4G pricing across 41 countries that are either members of the EU or the OECD or both. The report’s conclusions are based mainly on two measures:

  1. Estimates of the maximum number of gigabytes available under each plan for a specific hypothetical monthly price, ranging from €5 to €80 a month. In other words, for each plan, Rewheel asks, “How many 4G gigabytes would X euros buy?” Rewheel then ranks countries by the median amount of gigabytes available at each hypothetical price for all the plans surveyed in each country.
  2. Estimates of what Rewheel describes as “fully allocated gigabyte prices.” This is the monthly retail price (including VAT) divided by the number of gigabytes included in each plan. Rewheel then ranks countries by the median price per gigabyte across all the plans surveyed in each country.
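A minimal Python sketch of how these two measures work, using made-up plan data (Rewheel’s actual dataset and survey methodology are considerably more involved):

```python
import statistics

# Hypothetical plans for one country: (monthly_price_eur, included_gb).
plans = [(10, 2), (20, 10), (35, 30), (60, 100)]

def max_gb_at_price(plans, budget_eur):
    """Measure 1: most included gigabytes any plan offers at or under a price."""
    eligible = [gb for price, gb in plans if price <= budget_eur]
    return max(eligible, default=0)

def median_price_per_gb(plans):
    """Measure 2: median "fully allocated" price per gigabyte across all plans."""
    return statistics.median(price / gb for price, gb in plans)

print(max_gb_at_price(plans, 30))             # 10
print(round(median_price_per_gb(plans), 2))   # 1.58
```

Rewheel then ranks countries by these per-country medians, computed across every surveyed plan.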

Rewheel’s convoluted calculations

Rewheel’s use of the country median across all plans is problematic. In particular, it gives all plans equal weight, regardless of how many consumers actually use each plan. For example, a plan targeted at a consumer with a “high” level of usage is counted alongside a plan targeted at a consumer with a “low” level of usage. Even though a “high” user would not purchase a “low” plan (which would be relatively expensive for a “high” user), all plans are included, thereby skewing the median estimates upward.
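A toy numerical example (hypothetical plans and subscriber shares) shows how counting each plan once can skew the median upward when the cheap-per-gigabyte plans are the popular ones:

```python
import statistics

# (price_per_gb, share_of_subscribers) - hypothetical plans in one country.
plans = [
    (5.00, 0.05),  # small "low usage" plan few people buy
    (4.00, 0.10),
    (1.00, 0.40),  # popular mid plan
    (0.50, 0.45),  # popular high-usage plan
]

# Rewheel-style unweighted median: every plan counts equally.
unweighted = statistics.median(p for p, _ in plans)
print(unweighted)  # 2.5 - midpoint of 1.00 and 4.00

# A subscriber-weighted median - the price at which half of
# subscribers pay that much or less - tells a different story.
cumulative, weighted = 0.0, None
for price, share in sorted(plans):
    cumulative += share
    if cumulative >= 0.5:
        weighted = price
        break
print(weighted)  # 1.0
```

On these invented numbers, the unweighted median is two and a half times the price the typical subscriber actually faces.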

But even if that approach made sense as a way of measuring consumers’ willingness to pay, in execution Rewheel’s analysis contains the following key defects:

  • The Rewheel report is essentially limited to quantity effects alone (i.e., how many gigabytes are available under each plan for a given hypothetical price) or price effects alone (i.e., the price per included gigabyte for each plan). These measures can mislead the analysis by missing, among other things, innovation and quality effects.
  • Rewheel’s analysis is not based on an impartial assessment of relevant price data. Rather, it is based on hypothetical measures. Such comparisons say nothing about the plans actually chosen by consumers or the actual prices paid by consumers in those countries, rendering Rewheel’s comparisons virtually meaningless. As Affeldt & Nitsche (2014) note in their assessment of the effects of concentration in mobile telecom markets:

Such approaches are taken by Rewheel (2013) and also the Austrian regulator rtr (when tracking prices over time, see rtr (2014)). Such studies face the following problems: They may pick tariffs that are relatively meaningless in the country. They will have to assume one or more consumption baskets (voice minutes, data volume etc.) in order to compare tariffs. This may drive results. Apart from these difficulties such comparisons require very careful tracking of tariffs and their changes. Even if one assumes studying a sample of tariffs is potentially meaningful, a comparison across countries (or over time) would still require taking into account key differences across countries (or over time) like differences in demand, costs, network quality etc.

  • The Rewheel report bases its comparison on dissimilar service levels by not taking into account, for instance, relevant features like comparable network capacity, service security, and, perhaps most important, overall quality of service.

Rewheel’s unsupported conclusions

Rewheel uses its analysis to reach some strong conclusions, such as the claim, on the first page of its report, that the median gigabyte price in countries with three carriers is twice as high as in countries with four carriers.

The figure below is a revised version of the figure on the first page of Rewheel’s report. The yellow blocks (gray dots) show the range of prices in countries with three carriers; the blue blocks (pink dots) show the range of prices in countries with four carriers. The darker blocks show the overlap of the two. The figure makes clear that there is substantial overlap in pricing between three- and four-carrier countries. Thus, it is not obvious that three-carrier countries have significantly higher prices (as measured by Rewheel) than four-carrier countries.

[Figure: Overlapping ranges of median gigabyte prices in three- vs. four-carrier countries, based on Rewheel’s data]

A simple “eyeballing” of the data can lead to incorrect conclusions; statistical analysis can provide more certainty (or, at least, some measure of the uncertainty). Yet Rewheel provides no statistical analysis of its calculations, such as measures of statistical significance. However, the information on page 5 of the Rewheel report can be used to perform some rudimentary statistical analysis.

I took the information from the columns for hypothetical monthly prices of €30 a month and €50 a month and converted the data into a price per gigabyte to generate the dependent variable. Following Rewheel’s assumption, “unlimited” is converted to 250 gigabytes per month. Greece was dropped from the analysis because Rewheel indicates that no data are available at either hypothetical price level.

My rudimentary statistical analysis includes the following independent variables:

  • Number of carriers (or mobile network operators, MNOs) reported by Rewheel in each country, ranging from three to five. Israel is the only country with five MNOs.
  • A dummy variable for EU28 countries. Rewheel performs a separate analysis for EU28 countries, suggesting it considers this an important distinction.
  • GDP per capita for each country, adjusted for purchasing power parity. Several articles in the literature suggest higher GDP countries would be expected to have higher wireless prices.
  • Population density, measured by persons per square kilometer. Several articles in the literature argue that countries with lower population density would have higher costs of providing wireless service which would, in turn, be reflected in higher prices.
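Since Rewheel’s underlying page-5 figures are not reproduced here, the sketch below runs the same style of regression on synthetic, randomly generated country data — every value is invented, and a real replication would plug in Rewheel’s numbers (a package like statsmodels would also report standard errors directly). It shows the mechanics: OLS of price per gigabyte on the four independent variables, a t-statistic for the MNO coefficient, and an R² for goodness-of-fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the country-level data (illustrative only):
n = 40
mnos = rng.integers(3, 6, size=n)          # number of carriers (MNOs), 3-5
eu28 = rng.integers(0, 2, size=n)          # EU28 dummy
gdp = rng.normal(40_000, 10_000, size=n)   # GDP per capita, PPP-adjusted
density = rng.normal(150, 80, size=n)      # persons per square kilometer

# Price per GB generated with no true MNO effect, mimicking the null result.
price_per_gb = 2.0 + 0.00001 * gdp + rng.normal(0, 1.5, size=n)

# OLS: price_per_gb ~ const + mnos + eu28 + gdp + density
X = np.column_stack([np.ones(n), mnos, eu28, gdp, density])
beta, *_ = np.linalg.lstsq(X, price_per_gb, rcond=None)

resid = price_per_gb - X @ beta
r2 = 1 - resid.var() / price_per_gb.var()          # goodness-of-fit

k = X.shape[1]
sigma2 = resid @ resid / (n - k)                    # residual variance
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_mnos = beta[1] / se[1]                            # t-stat on the MNO coefficient

print(beta[1], t_mnos, r2)
```

With Rewheel’s actual data, the question is simply whether that t-statistic clears any conventional significance threshold — which, as shown below, it does not.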

The tables below confirm what an eyeballing of the figure suggests: Rewheel’s data show that the number of MNOs in a country has no statistically significant relationship with the price per gigabyte, at either the €30-a-month level or the €50-a-month level.

[Table: Regression results for price per gigabyte at the €30/month and €50/month hypothetical price levels]

While the signs on the MNO coefficient are negative (i.e., more carriers in a country is associated with lower prices), they are not statistically significantly different from zero at any of the traditional levels of statistical significance.

Also, the regressions suffer from relatively low goodness-of-fit: the independent variables explain only about five percent of the variation in the price per gigabyte. This is likely because of the cockamamie way Rewheel measures price, but it is also due to the known problems with performing cross-sectional analysis of wireless pricing, as noted by Csorba & Pápai (2015):

Many regulatory policies are based on a comparison of prices between European countries, but these simple cross-sectional analyses can lead to misleading conclusions because of at least two reasons. First, the price difference between countries of n and (n + 1) active mobile operators can be due to other factors, and the analyst can never be sure of having solved the omitted variable bias problem. Second and more importantly, the effect of an additional operator estimated from a cross-sectional comparison cannot be equated with the effect of an actual entry that might have a long-lasting effect on a single market.

The Rewheel report cannot be relied upon in assessing consumer benefits or harm associated with the T-Mobile/Sprint merger, or any other merger

Rewheel apparently has a rich dataset of wireless pricing plans. Nevertheless, the analyses presented in its report are fundamentally flawed. Moreover, Rewheel’s conclusions regarding three- vs. four-carrier countries are not only baseless, but clearly unsupported by closer inspection of the information presented in its report. The Rewheel report cannot be relied upon to inform regulatory oversight of the T-Mobile/Sprint merger or any other. This study isn’t unique, and it should serve as a caution to be wary of studies that merely eyeball information.

Yesterday Learfield and IMG College inked their recently announced merger. Since the negotiations were made public several weeks ago, the deal has garnered some wild speculation and potentially negative attention. Now that the merger has been announced, it’s bound to attract even more attention and conjecture.

On the field of competition, however, the market realities that support the merger’s approval are compelling. And, more importantly, the features of this merger provide critical lessons on market definition, barriers to entry, and other aspects of antitrust law related to two-sided and advertising markets that can be applied to numerous matters vexing competition commentators.

First, some background

Learfield and IMG specialize in managing multimedia rights (MMRs) for intercollegiate sports. They are, in effect, classic advertising intermediaries, facilitating the monetization by colleges of radio broadcast advertising and billboard, program, and scoreboard space during games (among other things), and the purchase by advertisers of access to these valuable outlets.

Although these transactions can certainly be (and very often are) entered into by colleges and advertisers directly, firms like Learfield and IMG allow colleges to outsource the process — as one firm’s tag line puts it, “We Work | You Play.” Most important, by bringing multiple schools’ MMRs under one roof, these firms can reduce the transaction costs borne by advertisers in accessing multiple outlets as part of a broad-based marketing plan.

Media rights and branding are a notable source of revenue for collegiate athletic departments: on average, they account for about 3% of these revenues. While they tend to pale in comparison to TV rights, ticket sales, and fundraising, for major programs, MMRs may be the next most important revenue source after these.

Many collegiate programs retain some or all of their multimedia rights and use in-house resources to market them. In some cases schools license MMRs through their athletic conference. In other cases, schools ink deals to outsource their MMRs to third parties, such as Learfield, IMG, JMI Sports, Outfront Media, and Fox Sports, among several others. A few schools even use professional sports teams to manage their MMRs (the owner of the Red Sox manages Boston College’s MMRs, for example).

Schools switch among MMR managers with some regularity, and, in most cases apparently, not among the merging parties. Michigan State, for example, was well known for handling its MMRs in-house. But in 2016 the school entered into a 15-year deal with Fox Sports, estimated at a guaranteed minimum of $150 million. In 2014 Arizona State terminated its MMR deal with IMG and took its MMRs in-house. Then, in 2016, the Sun Devils entered into a first-of-its-kind arrangement with the Pac 12 in which the school manages and sells its own marketing and media rights while the conference handles core business functions for the sales and marketing team (like payroll, accounting, human resources, and employee benefits). The most successful new entrant on the block, JMI Sports, won Kentucky, Clemson, and the University of Pennsylvania from Learfield or IMG. Outfront Media was spun off from CBS in 2014 and has become one of the strongest MMR intermediary competitors, handling some of the biggest names in college sports, including LSU, Maryland, and Virginia. All told, eight recent national Division I champions are served by MMR managers other than IMG and Learfield.

The supposed problem

As noted above, the most obvious pro-competitive benefit of the merger is in the reduction in transaction costs for firms looking to advertise in multiple markets. But, in order to confer that benefit (which, of course, also benefits the schools, whose marketing properties become easier to access), that also means a dreaded increase in size, measured by number of schools’ MMRs managed. So is this cause for concern?

Jason Belzer, a professor at Rutgers University and founder of sports consulting firm, GAME, Inc., has said that the merger will create a juggernaut — yes, “a massive inexorable force… that crushes whatever is in its path” — that is likely to invite antitrust scrutiny. The New York Times opines that the deal will allow Learfield to “tighten its grip — for nearly total control — on this niche but robust market,” “surely” attracting antitrust scrutiny. But these assessments seem dramatically overblown, and insufficiently grounded in the dynamics of the market.

Belzer’s concerns seem to be merely the size of the merging parties — again, measured by the number of schools’ rights they manage — and speculation that the merger would bring to an end “any” opportunity for entry by a “major” competitor. These are misguided concerns.

To begin, the focus on the potential entry of a “major” competitor is an odd standard that ignores the actual and potential entry of many smaller competitors that are able to win some of the most prestigious and biggest schools. In fact, many in the industry argue — rightly — that there are few economies of scale for colleges. Most of these firms’ employees are dedicated to a particular school and those costs must be incurred for each school, no matter the number, and borne by new entrants and incumbents alike. That means a small firm can profitably compete in the same market as larger firms — even “juggernauts.” Indeed, every college that brings MMR management in-house is, in fact, an entrant — and there are some big schools in big conferences that manage their MMRs in-house.

The demonstrated entry of new competitors and the transitions of schools from one provider to another or to in-house MMR management indicate that no competitor has any measurable market power that can disadvantage schools or advertisers.

Indeed, from the perspective of the school, the true relevant market is no broader than each school’s own rights. Even after the merger there will be at least five significant firms competing for those rights, not to mention each school’s conference, new entrants, and the school itself.

The two-sided market that isn’t really two-sided

Standard antitrust analysis, of course, focuses on consumer benefits: Will the merger make consumers better off (or no worse off)? But too often casual antitrust analysis of two-sided markets trips up on identifying just who the consumer is — and what the relevant market is. For a shopping mall, is the consumer the retailer or the shopper? For newspapers and search engines, is the customer the advertiser or the reader? For intercollegiate sports multimedia rights licensing, is the consumer the college or the advertiser?

Media coverage of the anticipated IMG/Learfield merger largely ignores advertisers as consumers and focuses almost exclusively on the schools’ relationship with intermediaries — as purchasers of marketing services, rather than sellers of advertising space.

Although it’s difficult to identify the source of this odd bias, it seems to be based on the notion that, while corporations like Coca-Cola and General Motors have some sort of countervailing market power against marketing intermediaries, universities don’t. With advertisers out of the picture, media coverage suggests that, somehow, schools may be worse off if the merger were to proceed. But missing from this assessment are two crucial facts that undermine the story: First, schools actually have enormous market power; and, second, schools compete in the business of MMR management.

This second factor suggests, in fact, that sometimes there may be nothing special about two-sided markets sufficient to give rise to a unique style of antitrust analysis.

Much of the antitrust concern seems to be based on confusion over the behavior of two-sided markets. A two-sided market is one in which two sets of actors interact through an intermediary or platform, which, in turn, facilitates the transactions, often enabling transactions to take place that otherwise would be too expensive absent the platform. A shopping mall is a two-sided market where shoppers can find their preferred stores. Stores would operate without the platform, but perhaps not as many, and not as efficiently. Newspapers, search engines, and other online platforms are two-sided markets that bring together advertisers and eyeballs that might not otherwise find each other absent the platform. And a collegiate multimedia rights management firm is a two-sided market where colleges that want to sell advertising space get together with firms that want to advertise their goods and services.

Yet there is nothing particularly “transformative” about the outsourcing of MMR management. Credit cards, for example, are qualitatively different from in-store credit operations: they are two-sided platforms that substitute for in-house operations — but they also create an entirely new product and product market. MMR marketing firms do lower some transaction costs and reduce risk for collegiate sports marketing, but the product is not substantially changed — in fact, schools must have the knowledge and personnel to assess and enter into the initial sale of MMRs to an intermediary and, because of ongoing revenue-sharing and coordination with the intermediary, must devote ongoing resources even after the initial sale.

But will a merged entity have “too much” power? Imagine if a single firm owned the MMRs for nearly all intercollegiate competitors. How would it be able to exercise its supposed market power? Because each deal is negotiated separately, and, other than some mundane, fixed back-office expenses, the costs of rights management must be incurred whether a firm negotiates one deal or 100, there are no substantial economies of scale in the purchasing of MMRs. As a result, the existence of deals with other schools won’t automatically translate into better deals with subsequent schools.

Now, imagine if one school retained its own MMRs, but decided it might want to license them to an intermediary. Does it face anticompetitive market conditions if there is only a single provider of such services? To begin with, there is never only a single provider, as each school can provide the services in-house. This is not even the traditional monopoly constraint of simply “not buying,” which makes up the textbook “deadweight loss” from monopoly: In this case “not buying” does not mean going without; it simply means providing for oneself.

More importantly, because the school has a monopoly on access to its own marketing rights (to say nothing of access to its own physical facilities) unless and until it licenses them, its own bargaining power is largely independent of an intermediary’s access to other schools’ rights. If it were otherwise, each school would face anticompetitive market conditions simply by virtue of other schools’ owning their own rights!

It is possible that a larger, older firm will have more expertise and will be better able to negotiate deals with other schools — i.e., it will reap the benefits of learning by doing. But the returns to learning by doing derive from the ability to offer higher-quality/lower-cost services over time — which are a source of economic benefit, not cost. At the same time, the bulk of the benefits of experience may be gained over time with even a single set of MMRs, given the ever-varying range of circumstances even a single school will create: There may be little additional benefit (and, to be sure, there is additional cost) from managing multiple schools’ MMRs. And whatever benefits specialized firms offer, they also come with agency costs, and an intermediary’s specialized knowledge about marketing MMRs may or may not outweigh a school’s own specialized knowledge about the nuances of its particular circumstances. Moreover, because of knowledge spillovers and employee turnover, this marketing expertise is actually widely distributed; not surprisingly, JMI Sports’ MMR unit, one of the most recent and successful entrants into the business, was started by a former employee of IMG. Several other firms started out the same way.

The right way to begin thinking about the issue is this: Imagine if MMR intermediaries didn’t exist — what would happen? In this case, the answer is readily apparent because, for a significant number of schools (about 37% of Division I schools, in fact) MMR licensing is handled in-house, without the use of intermediaries. These schools do, in fact, attract advertisers, and there is little indication that they earn less net profit for going it alone. Schools with larger audiences, better targeted to certain advertisers’ products, command higher prices. Each school enjoys an effective monopoly over advertising channels around its own games, and each has bargaining power derived from its particular attractiveness to particular advertisers.

In effect, each school faces a number of possible options for MMR monetization — most notably a) up-front contracting to an intermediary, which then absorbs the risk, expense, and possible up-side of ongoing licensing to advertisers, or b) direct, ongoing licensing to advertisers. The presence of the intermediary doesn’t appreciably change the market, nor the relative bargaining power of sellers (schools) and buyers (advertisers) of advertising space any more than the presence of temp firms transforms the fundamental relationship between employers and potential part-time employees.

In making their decisions, schools always have the option of taking their MMR management in-house. In facing competing bids from firms such as IMG or Learfield, from their own conferences, or from professional sports teams, the opening bid, in a sense, comes from the school itself. Even the biggest intermediary in the industry must offer the school a deal that is at least as good as managing the MMRs in-house.

The true relevant market: Advertising

According to economist Andy Schwarz, if the relevant market is “college-based marketing services to Power 5 schools, the antitrust authorities may have more concerns than if it’s marketing services in sports.” But this entirely misses the real market exchange here. Sure, marketing services are purchased by schools, but their value to the schools is independent of the number of other schools an intermediary also markets.

Advertisers always have the option of deploying their ad dollars elsewhere. If Coca-Cola wants to advertise on Auburn’s stadium video board, it’s because Auburn’s video board is a profitable outlet for advertising, not because the Auburn ads are bundled with advertising at dozens of other schools (although that bundling may reduce the total cost of advertising on Auburn’s scoreboard as well as other outlets). Similarly, Auburn is seeking the highest bidder for space on its video board. It does not matter to Auburn that the University of Georgia is using the same intermediary to sell ads on its stadium video board.

The willingness of purchasers — say, Coca-Cola or Toyota — to pay for collegiate multimedia advertising is a function of the school that licenses it (net transaction costs) — and MMR agents like IMG and Learfield commit substantial guaranteed sums and a share of any additional profits for the rights to sell that advertising: For example, IMG recently agreed to pay $150 million over 10 years to renew its MMR contract at UCLA. But this is the value of a particular, niche form of advertising, determined within the context of the broader advertising market. How much pricing power over scoreboard advertising does any university, or even any group of universities under the umbrella of an intermediary have, in a world in which Coke and Toyota can advertise virtually anywhere — including during commercial breaks in televised intercollegiate games, which are licensed separately from the MMRs licensed by companies like IMG and Learfield?

There is, in other words, a hard ceiling on what intermediaries can charge schools for MMR marketing services: The schools’ own cost of operating a comparable program in-house.

To be sure, for advertisers, large MMR marketing firms lower the transaction costs of buying advertising space across a range of schools, presumably increasing demand for intercollegiate sports advertising and sponsorship. But sponsors and advertisers have a wide range of options for spending their marketing dollars. Intercollegiate sports MMRs are a small slice of the sports advertising market, which, in turn, is a small slice of the total advertising market. Even if one were to incorrectly describe the combined entity as a “juggernaut” in intercollegiate sports, the MMR rights it sells would still be a flyspeck in the broader market of multimedia advertising.

According to one calculation (by MoffettNathanson), total ad spending in the U.S. was about $191 billion in 2016 (Pew Research Center estimates total ad revenue at $240 billion) and the global advertising market was estimated to be worth about $493 billion. The intercollegiate MMR segment represents a minuscule fraction of that. According to Jason Belzer, “[a]t the time of its sale to WME in 2013, IMG College’s yearly revenue was nearly $500 million….” Another source puts it at $375 million. Either way, it’s a fraction of one percent of the total market, and even combined with Learfield it will remain a minuscule fraction. Even if one were to define a far narrower sports sponsorship market, which a Price Waterhouse estimate puts at around $16 billion, the combined companies would still have a tiny market share.
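The rough arithmetic behind these shares, using the figures quoted above (and taking Belzer’s ~$500 million revenue figure as the numerator), can be sketched as:

```python
# Back-of-the-envelope shares implied by the figures above (USD billions).
img_college_revenue = 0.5   # ~USD 500 million/year (Belzer's figure for IMG College)
us_ad_market = 191          # MoffettNathanson estimate of US ad spend, 2016
sports_sponsorship = 16     # Price Waterhouse sports-sponsorship estimate

print(f"{img_college_revenue / us_ad_market:.2%}")        # share of US ad spend
print(f"{img_college_revenue / sports_sponsorship:.2%}")  # share of sponsorship market
```

Even doubling the numerator to approximate the combined Learfield/IMG entity leaves the share of total US advertising well under one percent.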

As sellers of MMRs, colleges are competing with each other, professional sports such as the NFL and NBA, and with non-sports marketing opportunities. And it’s a huge and competitive market.

Barriers to entry

While capital requirements and the presence of long-term contracts may present challenges to potential entrants into the business of marketing MMRs, these potential entrants face virtually no barriers that are not, or have not been, faced by incumbent providers. In this context, one should keep in mind two factors. First, barriers to entry are properly defined as costs incurred by new entrants that are not incurred by incumbents (no matter what Joe Bain says; Stigler always wins this dispute…). Every firm must bear the cost of negotiating and managing each school’s MMRs, and, as noted, these costs don’t vary significantly with the number of schools being managed. And every entrant needs approximately the same capital and human resources per similarly sized school as every incumbent. Thus, in this context, neither the need for capital nor dedicated employees is properly construed as a barrier to entry.

Second, as the DOJ and FTC acknowledge in the Horizontal Merger Guidelines, any merger can be lawful under the antitrust laws, no matter its market share, where there are no significant barriers to entry:

The prospect of entry into the relevant market will alleviate concerns about adverse competitive effects… if entry into the market is so easy that the merged firm and its remaining rivals in the market, either unilaterally or collectively, could not profitably raise price or otherwise reduce competition compared to the level that would prevail in the absence of the merger.

As noted, there are low economies of scale in the business, with most of the economies occurring in the relatively small “back office” work of payroll, accounting, human resources, and employee benefits. Since the 2000s, the entry of several significant competitors — many entering with only one or two schools or specializing in smaller or niche markets — strongly suggests that there are no economically important barriers to entry. And these firms have entered and succeeded with a wide range of business models and firm sizes:

  • JMI Sports — a “rising boutique firm” — hired Tom Stultz, the former senior vice president and managing director of IMG’s MMR business, in 2012. JMI won its first (and thus, at the time, only) MMR bid in 2014 at the University of Kentucky, besting IMG to win the deal.
  • Peak Sports MGMT, founded in 2012, is a small-scale MMR firm that focuses on lesser Division I and II schools in Texas and the Midwest. It manages just seven small properties, including Southland Conference schools like the University of Central Arkansas and Southeastern Louisiana University.
  • Fox Sports entered the business in 2008 with a deal with the University of Florida. It now handles MMRs for schools like Georgetown, Auburn, and Villanova. Fox’s entry suggests that other media companies — like ESPN — that may already own TV broadcast rights are also potential entrants.
  • In 2014 the sports advertising firm, Van Wagner, hired three former Nelligan employees to make a play for the college sports space. In 2015 the company won its first MMR bid at Florida International University, reportedly against seven other participants. It now handles more than a dozen schools including Georgia State (which it won from IMG), Loyola Marymount, Pepperdine, Stony Brook, and Santa Clara.
  • In 2001 Fenway Sports Group, parent company of the Boston Red Sox and Liverpool Football Club, entered into an MMR agreement with Boston College. And earlier this year the Tampa Bay Lightning hockey team began handling multimedia marketing for the University of South Florida.

Potential new entrants abound. Most obviously, sports networks like ESPN could readily follow Fox Sports’ lead and advertising firms could follow Van Wagner’s. These companies have existing relationships and expertise that position them for easy entry into the MMR business. Moreover, there are already several companies that handle the trademark licensing for schools, any of which could move into the MMR management business, as well; both IMG and Learfield already handle licensing for a number of schools. Most notably, Fermata Partners, founded in 2012 by former IMG employees and acquired in 2015 by CAA Sports (a division of Creative Artists Agency), has trademark licensing agreements with Georgia, Kentucky, Miami, Notre Dame, Oregon, Virginia, and Wisconsin. It could easily expand into selling MMR rights for these and other schools. Other licensing firms like Exemplar (which handles licensing at Columbia) and 289c (which handles licensing at Texas and Ohio State) could also easily expand into MMR.

Given the relatively trivial economies of scale, the minimum viable scale for a new entrant appears to be approximately one school — a size that each school’s in-house operations, of course, automatically meets. Moreover, the Peak Sports, Fenway, and Tampa Bay Lightning examples suggest that there may be particular benefits to local, regional, or category specialization, suggesting that innovative, new entry is not only possible, but even likely, as the business continues to evolve.

Conclusion

A merger between IMG and Learfield should not raise any antitrust issues. College sports is a small slice of the total advertising market. Even a so-called “juggernaut” in college sports multimedia rights is a bit player in the broader market of multimedia marketing.

The demonstrated entry of new competitors and the transitions of schools from one provider to another or to bringing MMR management in-house, indicates that no competitor has any measurable market power that can disadvantage schools or advertisers.

The term “juggernaut” entered the English language because of misinterpretation and exaggeration of actual events. Fears of the IMG/Learfield merger crushing competition are similarly based on a misinterpretation of two-sided markets and a misunderstanding of the reality of the market for college multimedia rights management. Importantly, the case is also a cautionary tale for those who would identify narrow, contract-, channel-, or platform-specific relevant markets in circumstances where a range of intermediaries and direct relationships can compete to offer the same service as those being scrutinized. Antitrust advocates have a long and inglorious history of defining markets by channels of distribution or other convenient, yet often economically inappropriate, combinations of firms or products. Yet the presence of marketing or other intermediaries does not automatically transform a basic, commercial relationship into a novel, two-sided market necessitating narrow market definitions and creative economics.

Last week concluded round 3 of Congressional hearings on mergers in the healthcare provider and health insurance markets. Much like the previous rounds, the hearing saw predictable representatives, of predictable constituencies, saying predictable things.

The pattern is pretty clear: The American Hospital Association (AHA) makes the case that mergers in the provider market are good for consumers, while mergers in the health insurance market are bad. A scholar or two decries all consolidation in both markets. Another interested group, like maybe the American Medical Association (AMA), also criticizes the mergers. And it’s usually left to a representative of the insurance industry, typically one or more of the merging parties themselves, or perhaps a scholar from a free market think tank, to defend the merger.

Lurking behind the public and politicized airings of these mergers, and especially the pending Anthem/Cigna and Aetna/Humana health insurance mergers, is the Affordable Care Act (ACA). Unfortunately, the partisan politics surrounding the ACA, particularly during this election season, may be trumping the sensible economic analysis of the competitive effects of these mergers.

In particular, the partisan assessments of the ACA’s effect on the marketplace have greatly colored the Congressional (mis-)understandings of the competitive consequences of the mergers.  

Witness testimony and questions from members of Congress at the hearings suggest that there is widespread agreement that the ACA is encouraging increased consolidation in healthcare provider markets, for example, but there is nothing approaching unanimity of opinion in Congress or among interested parties regarding what, if anything, to do about it. Congressional Democrats, for their part, have insisted that stepped up vigilance, particularly of health insurance mergers, is required to ensure that continued competition in health insurance markets isn’t undermined, and that the realization of the ACA’s objectives in the provider market aren’t undermined by insurance companies engaging in anticompetitive conduct. Meanwhile, Congressional Republicans have generally been inclined to imply (or outright state) that increased concentration is bad, so that they can blame increasing concentration and any lack of competition on the increased regulatory costs or other effects of the ACA. Both sides appear to be missing the greater complexities of the story, however.

While the ACA may be creating certain impediments in the health insurance market, it’s also creating some opportunities for increased health insurance competition, and implementing provisions that should serve to hold down prices. Furthermore, even if the ACA is encouraging more concentration, those increases in concentration can’t be assumed to be anticompetitive. Mergers may very well be the best way for insurers to provide benefits to consumers in a post-ACA world — that is, the world we live in. The ACA may have plenty of negative outcomes, and there may be reasons to attack the ACA itself, but there is no reason to assume that any increased concentration it may bring about is a bad thing.

Asking the right questions about the ACA

We don’t need more self-serving and/or politicized testimony. Instead, we need to apply an economic framework to the competition issues arising from these mergers in order to understand their actual, likely effects on the health insurance marketplace we have. This framework has to answer questions like:

  • How do we understand the effects of the ACA on the marketplace?
    • In what ways does the ACA require us to alter our understanding of the competitive environment in which health insurance and healthcare are offered?
    • Does the ACA promote concentration in health insurance markets?
    • If so, is that a bad thing?
  • Do efficiencies arise from increased integration in the healthcare provider market?
  • Do efficiencies arise from increased integration in the health insurance market?
  • How do state regulatory regimes affect the understanding of what markets are at issue, and what competitive effects are likely, for antitrust analysis?
  • What are the potential competitive effects of increased concentration in healthcare provider markets?
  • Does increased health insurance market concentration exacerbate or counteract those effects?

Beginning with this post, at least a few of us here at TOTM will take on some of these issues, as part of a blog series aimed at better understanding the antitrust law and economics of the pending health insurance mergers.

Today, we will focus on the ambiguous competitive implications of the ACA. Although not a comprehensive analysis, in this post we will discuss some key insights into how the ACA’s regulations and subsidies should inform our assessment of the competitiveness of the healthcare industry as a whole, and the antitrust review of health insurance mergers in particular.

The ambiguous effects of the ACA

It’s an understatement to say that the ACA is an issue of great political controversy. While many Democrats argue that it has been nothing but a boon to consumers, Republicans usually have nothing good to say about the law’s effects. But both sides miss the law’s important, and decidedly ambiguous, effects on the healthcare industry. And because they miss (or disregard) this ambiguity for political reasons, they risk seriously misunderstanding the legal and economic implications of the ACA for healthcare industry mergers.

To begin with, there are, of course, substantial negative effects. Requiring insurance companies to accept patients with pre-existing conditions reduces their ability to manage risk. This has led to upward pricing pressure on premiums. While the mandate to buy insurance was supposed to help bring more young, healthy people into the risk pool, so far enrollment has fallen short of projections.

The ACA’s redefinition of what counts as an acceptable insurance policy has also caused many consumers to lose the policy of their choice. And the ACA’s many regulations, such as the medical loss ratio (MLR) rules requiring insurers to spend at least 80% of premiums on healthcare claims and quality improvement, have squeezed the profit margins of many insurance companies, leading, in some cases, to exit from the marketplace altogether and, in others, to a reduction of new entry or of competition in other submarkets.
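To see concretely how a minimum loss ratio squeezes margins, here is a minimal sketch of the rebate mechanism. The 80% threshold is the ACA's individual- and small-group figure; the dollar amounts and the function itself are hypothetical illustrations, and the actual regulatory calculation includes adjustments (quality-improvement spending, taxes and fees, multi-year averaging) that this sketch omits:

```python
def mlr_rebate(premiums, claims, threshold=0.80):
    """Rebate owed when an insurer's medical loss ratio
    (claims / premiums) falls below the required threshold.

    Simplified sketch: the real rule also counts quality-improvement
    spending, adjusts for taxes and fees, and averages over years.
    """
    mlr = claims / premiums
    if mlr >= threshold:
        return 0.0
    # The shortfall, as a share of premiums, goes back to policyholders.
    return (threshold - mlr) * premiums

# Hypothetical insurer: $100M in premiums, $75M paid out in claims.
# MLR = 75%, so about 5% of premiums (roughly $5M) must be rebated.
rebate = mlr_rebate(100e6, 75e6)
```

The point is simply that any premium dollar retained beyond the 20% allowance is clawed back, capping the margin an insurer can earn regardless of how it prices.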

On the other hand, there may be benefits from the ACA. While many insurers participated in private exchanges even before the ACA-mandated health insurance exchanges, the increased consumer education from the government’s efforts may have helped enrollment even in private exchanges, and may also have helped to keep premiums from increasing as much as they would have otherwise. At the same time, the increased subsidies for individuals have helped lower-income people afford those premiums. Some have even argued that increased participation in the on-demand economy can be linked to the ability of individuals to buy health insurance directly. On top of that, there has been some entry into certain health insurance submarkets due to lower barriers to entry (because the online exchanges reduce the need for agents to sell policies in a new market). And the changes in how Medicare pays, with a greater focus on outcomes rather than services provided, have led to the adoption of value-based pricing by both healthcare providers and health insurance companies.

Further, some of the ACA’s effects have decidedly ambiguous consequences for healthcare and health insurance markets. On the one hand, for example, the ACA’s compensation rules have encouraged consolidation among healthcare providers, as noted. One reason for this is that Medicare pays more for the same service when it is delivered by a hospital than by an independent doctor. Similarly, increased regulatory burdens have led to higher compliance costs and more consolidation as providers attempt to economize on those costs. All of this has happened perhaps to the detriment of doctors (and/or patients) who wanted to remain independent of hospitals and larger health network systems, and, as a result, has generally raised costs for payors like insurers and governments.

But much of this consolidation has also arguably led to increased efficiency and greater benefits for consumers. For instance, the integration of healthcare networks leads to increased sharing of health information and better analytics, better care for patients, reduced overhead costs, and other efficiencies. Ultimately these should translate into higher quality care for patients. And to the extent that they do, they should also translate into lower costs for insurers and lower premiums — provided health insurers are not prevented from obtaining sufficient bargaining power to impose pricing discipline on healthcare providers.

In other words, both the AHA and AMA could be right as to different aspects of the ACA’s effects.

Understanding mergers within the regulatory environment

But what neither the AHA nor the AMA can say is that increased consolidation is per se problematic, nor that, even if consolidation is correlated with suboptimal outcomes, it is the consolidation causing those outcomes, rather than something else (like the ACA) causing both the suboptimal outcomes and the consolidation.

In fact, it may well be the case that increased consolidation improves overall outcomes in healthcare provider and health insurance markets relative to what would happen under the ACA absent consolidation. For Congressional Democrats and others interested in bolstering the ACA and securing the best possible outcomes for consumers, reflexively challenging health insurance mergers because consolidation is “bad” may undermine both of those objectives.

Meanwhile, and for the same reasons, Congressional Republicans who decry Obamacare should be careful not to condemn mergers under what amounts to a “big is bad” theory that is inconsistent with the rigorous law-and-economics approach they otherwise generally support. To the extent that the true target is not health insurance industry consolidation, but rather the underlying regulatory changes that have encouraged that consolidation, scoring political points by impugning mergers threatens both health insurance consumers in the short run and consumers throughout the economy in the long run (by undermining the well-established economic critiques of a reflexive “big is bad” response).

It is simply not clear that ACA-induced health insurance mergers are likely to be anticompetitive. In fact, because the ACA builds on state regulation of insurance providers, requiring greater transparency and regulatory review of pricing and coverage terms, it seems unlikely that health insurers would be free to engage in anticompetitive price increases or reduced coverage that could harm consumers.

On the contrary, the managerial and transactional efficiencies from the proposed mergers, combined with greater bargaining power against now-larger providers, are likely to lead to both better-quality care and cost savings passed on to consumers. Increased entry into most of the markets in which the merging companies will compete (due, at least in part, to the ACA), along with integrated health networks themselves entering or threatening entry into insurance markets, will almost certainly produce further consumer cost savings. In the current regulatory environment created by the ACA, in other words, insurance mergers have considerable upside potential, with little downside risk.

Conclusion

In sum, regardless of what one thinks about the ACA and its likely effects on consumers, it is not clear that health insurance mergers, especially in a post-ACA world, will be harmful.

Rather, assessing the likely competitive effects of health insurance mergers entails consideration of many complicated (and, unfortunately, politicized) issues. In future blog posts we will discuss (among other things): the proper treatment of efficiencies arising from health insurance mergers, the appropriate geographic and product markets for health insurance merger reviews, the role of state regulations in assessing likely competitive effects, and the strengths and weaknesses of arguments for potential competitive harms arising from the mergers.