[This post adapts elements of “Technology Mergers and the Market for Corporate Control,” forthcoming in the Missouri Law Review.]

In recent years, a growing chorus of voices has argued that existing merger rules fail to apprehend competitively significant mergers, either because they fall below existing merger-filing thresholds or because they affect innovation in ways that are purportedly ignored.

These fears are particularly acute in the pharmaceutical and tech industries, where several high-profile academic articles and reports claim to have identified important gaps in current merger-enforcement rules, particularly with respect to acquisitions involving nascent and potential competitors (here, here, and here, among many others).

Such fears have led activists, lawmakers, and enforcers to call for tougher rules, including the introduction of more stringent merger-filing thresholds and other substantive changes, such as the inversion of the burden of proof when authorities review mergers and acquisitions involving digital platforms.

However, as we discuss in a recent working paper—forthcoming in the Missouri Law Review and available on SSRN—these proposals tend to overlook the important tradeoffs that would ensue from attempts to decrease the number of false positives under existing merger rules and thresholds.

The paper draws from two key strands of economic literature that are routinely overlooked (or summarily dismissed) by critics of the status quo.

For a start, antitrust enforcement is not costless. In the case of merger enforcement, not only is it expensive for agencies to detect anticompetitive deals but, more importantly, overbearing rules may deter beneficial merger activity that creates value for consumers.

Second, critics tend to overlook the possibility that incumbents’ superior managerial or other capabilities (i.e., what made them successful in the first place) make them the ideal acquisition partners for entrepreneurs and startup investors looking to sell.

The result is a body of economic literature that focuses almost entirely on hypothetical social costs, while ignoring the redeeming benefits of corporate acquisitions, as well as the social cost of enforcement.

Kill Zones

One of the most significant allegations leveled against large tech firms is that their very presence in a market may hinder investments, entry, and innovation, creating what some have called a “kill zone.” The strongest expression in the economic literature of this idea of a kill zone stems from a working paper by Sai Krishna Kamepalli, Raghuram Rajan, and Luigi Zingales.

The paper makes two important claims, one theoretical and one empirical. From a theoretical standpoint, the authors argue that the prospect of an acquisition by a dominant platform deters consumers from joining rival platforms, and that this, in turn, hampers the growth of these rivals. The authors then test a similar hypothesis empirically. They find that acquisitions by a dominant platform—such as Google or Facebook—decrease investment levels and venture capital deals in markets that are “similar” to that of the target firm.

But both findings are problematic. For a start, Zingales and his co-authors’ theoretical model is premised on questionable assumptions about the way in which competition develops in the digital space. The first is that early adopters of new platforms—called “techies” in the authors’ parlance—face high switching costs because of their desire to learn these platforms in detail. As an initial matter, it appears facially contradictory that “techies” are both the group with the highest switching costs and the group that switches the most. The authors further assume that “techies” would incur lower adoption costs if they remained on the incumbent platform and waited for the rival platform to be acquired.

Unfortunately, while these key behavioral assumptions drive the results of the theoretical model, the paper presents no evidence to support their presence in real-world settings. In that sense, the authors commit the same error as previous theoretical work concerning externalities, which has tended to overestimate their frequency.

Second, the empirical analysis put forward in the paper is unreliable for policymaking purposes. The authors notably find that:

[N]ormalized VC investments in start-ups in the same space as the company acquired by Google and Facebook drop by over 40% and the number of deals falls by over 20% in the three years following an acquisition.

However, the results of this study are derived from the analysis of only nine transactions. The study also fails to clearly show that firms in the treatment and control groups are qualitatively similar. In a nutshell, the study compares industry acquisitions exceeding $500 million to Facebook and Google’s acquisitions that exceed that amount. This does not tell us whether the mergers in both groups involved target companies with similar valuations or similar levels of maturity. This does not necessarily invalidate the results, but it does suggest that policymakers should be circumspect in interpreting those results.

Finally, the paper fails to provide evidence that existing antitrust regimes fail to achieve an optimal error-cost balance. The central problem is that the paper has indeterminate welfare implications. For instance, as the authors note, the declines in investment in spaces adjacent to the incumbent platforms occurred during a time of rapidly rising venture capital investment, both in terms of the number of deals and dollars invested. It is entirely plausible that venture capital merely shifted to other sectors.

Put differently, on its own terms, the evidence merely suggests that acquisitions by Google and Facebook affected the direction of innovation, not its overall rate. And there is little to suggest that this shift was suboptimal, from a welfare standpoint.

In short, as the authors themselves conclude: “[i]t would be premature to draw any policy conclusion on antitrust enforcement based solely on our model and our limited evidence.”

Mergers and Potential Competition

Scholars have also posited more direct effects from acquisitions of startups or nascent companies by incumbent technology firms.

Some scholars argue that incumbents might acquire rivals that do not yet compete with them directly, in order to reduce the competitive pressure they will face in the future. In his paper “Potential Competition and Antitrust Analysis: Monopoly Profits Exceed Duopoly Profits,” Steven Salop argues:

Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.

However, these antitrust theories of harm suffer from several important flaws. First, they rest upon restrictive assumptions that may not hold in real-world settings. Most are premised on the notion that, in a given market, monopoly profits generally exceed joint duopoly profits. This allegedly makes it profitable, and mutually advantageous, for an incumbent to protect its monopoly position by preemptively acquiring potential rivals.

Accordingly, under these theories, anticompetitive mergers are only possible when the acquired rival could effectively challenge the incumbent. But these are, of course, only potential challengers; there is no guarantee that any one of them could or would mount a viable competitive threat.

Less obviously, it must be the case that the rival can hope to share only duopoly profits, as opposed to completely overthrowing the incumbent or surpassing it with a significantly larger share of the market. Where competition is “for the market” itself, monopoly maintenance would fail to explain a rival’s decision to sell. Because there would be no asymmetry between the expected profits of the incumbent and the rival, monopoly maintenance alone would not give rise to mutually advantageous deals.

Second, potential competition does not always increase consumer welfare.  Indeed, while the presence of potential competitors might increase price competition, it can also have supply-side effects that cut in the opposite direction.

For example, as Nobel laureate Joseph Stiglitz observed, a monopolist threatened by potential competition may invest in socially wasteful R&D efforts or entry-deterrence mechanisms, and it may operate at below-optimal scale in anticipation of future competitive entry.

There are also pragmatic objections. Analyzing a merger’s effect on potential competition would compel antitrust authorities and courts to make increasingly speculative assessments concerning the counterfactual setting of proposed acquisitions.

In simple terms, it is far easier to determine whether a merger between McDonald’s and Burger King would lead to increased hamburger prices in the short run than it is to determine whether a gaming platform like Steam or the Epic Games Store might someday compete with video-streaming or music-subscription platforms like Netflix or Spotify. It is not that the above models are necessarily wrong, but rather that applying them to practical cases would require antitrust enforcers to estimate mostly unknowable factors.

Finally, the real test for regulators is not just whether they can identify possibly anticompetitive mergers, but whether they can do so in a cost-effective manner. Whether it is desirable to implement a given legal test is not simply a function of its accuracy, the cost to administer it, and the respective costs of false positives and false negatives. It also critically depends on how prevalent the conduct is that adjudicators would be seeking to foreclose.

Consider two hypothetical settings. Imagine there are 10,000 tech mergers in a given year, of which either 1,000 or 2,500 are anticompetitive (the remainder are procompetitive or competitively neutral). Suppose that authorities can either attempt to identify anticompetitive mergers with 75% accuracy, or perform no test at all—i.e., letting all mergers go through unchallenged.

If there are 1,000 anticompetitive mergers, applying the test would result in 7,500 correct decisions and 2,500 incorrect ones (2,250 false positives and 250 false negatives). Doing nothing would lead to 9,000 correct decisions and 1,000 false negatives. If the number of anticompetitive deals were 2,500, applying the test would lead to the same number of incorrect decisions as not applying it (1,875 false positives and 625 false negatives, versus 2,500 false negatives). The advantage would tilt toward applying the test if anticompetitive mergers were even more widespread.
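For readers who want to check the arithmetic, the following is a minimal back-of-the-envelope sketch (in Python, purely for illustration) that reproduces the hypothetical’s numbers. It assumes, as the example does implicitly, that the 75% accuracy rate applies symmetrically to anticompetitive and benign deals:

```python
# Back-of-the-envelope check of the error-cost hypothetical above.
# Assumes the 75% accuracy rate applies symmetrically: the test correctly
# flags 75% of anticompetitive deals and correctly clears 75% of benign ones.

def error_counts(total, anticompetitive, accuracy):
    benign = total - anticompetitive
    with_test = {
        "false_positives": round(benign * (1 - accuracy)),          # benign deals wrongly blocked
        "false_negatives": round(anticompetitive * (1 - accuracy)), # bad deals wrongly cleared
    }
    with_test["correct"] = total - with_test["false_positives"] - with_test["false_negatives"]
    no_test = {
        "false_positives": 0,               # nothing is ever blocked
        "false_negatives": anticompetitive, # every bad deal goes through
        "correct": benign,
    }
    return with_test, no_test

for bad_deals in (1_000, 2_500):
    with_test, no_test = error_counts(10_000, bad_deals, 0.75)
    print(bad_deals, with_test, no_test)

# With 1,000 bad deals: the test produces 2,250 false positives and 250 false
# negatives (2,500 errors), versus 1,000 errors from doing nothing.
# With 2,500 bad deals: both approaches produce 2,500 errors.
```

A raw error count, of course, treats false positives and false negatives as equally costly; weighting the two differently would shift the break-even point, which is precisely the error-cost tradeoff discussed above.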

This hypothetical example holds a simple lesson for policymakers: the rarer the conduct that they are attempting to identify, the more accurate their identification method must be, and the more costly false negatives must be relative to false positives.

As discussed below, current empirical evidence does not suggest that anticompetitive mergers of this sort are particularly widespread, nor does it offer accurate heuristics to detect the ones that are. Finally, there is little sense that the cost of false negatives significantly outweighs that of false positives. In short, there is currently little evidence to suggest that tougher enforcement would benefit consumers.

Killer Acquisitions

Killer acquisitions are, effectively, a subset of the “potential competitor” mergers discussed in the previous section. As defined by Colleen Cunningham, Florian Ederer, and Song Ma, they are those deals where “an incumbent firm may acquire an innovative target and terminate the development of the target’s innovations to preempt future competition.”

Cunningham, Ederer, and Ma’s highly influential paper on killer acquisitions has been responsible for much of the recent renewed interest in the effect that mergers exert on innovation. The authors studied thousands of pharmaceutical mergers and concluded that between 5.3% and 7.4% of them were killer acquisitions. As they write:

[W]e empirically compare development probabilities of overlapping acquisitions, which are, in our theory, motivated by a mix of killer and development intentions, and non-overlapping acquisitions, which are motivated only by development intentions. We find an increase in acquisition probability and a decrease in post-acquisition development for overlapping acquisitions and interpret that as evidence for killer acquisitions. […]

[W]e find that projects acquired by an incumbent with an overlapping drug are 23.4% less likely to have continued development activity compared to drugs acquired by non-overlapping incumbents.

From a policy standpoint, the question is what weight antitrust authorities, courts, and legislators should give to these findings. Stated differently, does the paper provide sufficient evidence to warrant reform of existing merger-filing thresholds and review standards? There are several factors counseling that policymakers should proceed with caution.

To start, the study’s industry-specific methodology means that it may not be a useful guide to understanding acquisitions in other industries, such as the tech sector.

Second, even if one assumes that the findings of Cunningham, et al., are correct and apply with equal force in the tech sector (as some official reports have), it remains unclear whether the 5.3–7.4% of mergers they describe warrant a departure from the status quo.

Antitrust enforcers operate under uncertainty. The critical policy question is thus whether this subset of anticompetitive deals can be identified ex-ante. If not, is there a heuristic that would enable enforcers to identify more of these anticompetitive deals without producing excessive false positives?

The authors focus on the effect that overlapping R&D pipelines have on project discontinuations. In the case of non-overlapping mergers, acquired projects continue 17.5% of the time, while this number is 13.4% when there are overlapping pipelines. The authors argue that this gap is evidence of killer acquisitions. But this argument misses the bigger picture: under the authors’ own numbers and definition of a “killer acquisition,” the vast majority of overlapping acquisitions are perfectly benign; prohibiting them would thus have important social costs.
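As a rough reconciliation of these figures (a back-of-the-envelope illustration only, not the authors’ estimation procedure), the quoted 23.4% decline is simply the relative gap between the two continuation rates, which amounts to roughly four percentage points of acquired projects:

```python
# Illustrative reconciliation of the continuation rates cited above.
# This is not how Cunningham, Ederer, and Ma estimate the share of
# killer acquisitions; it is only a consistency check on the quoted figures.
continue_non_overlapping = 0.175  # projects continued after non-overlapping acquisitions
continue_overlapping = 0.134      # projects continued after overlapping acquisitions

relative_drop = (continue_non_overlapping - continue_overlapping) / continue_non_overlapping
absolute_gap = continue_non_overlapping - continue_overlapping

print(f"relative drop: {relative_drop:.1%}")  # ~23.4%, matching the quoted decline
print(f"absolute gap:  {absolute_gap:.1%}")   # ~4.1 percentage points of overlapping deals
```

Seen in those terms, the difference between the two groups concerns only a few percentage points of acquired projects—consistent with the point that, on the authors’ own numbers, the vast majority of overlapping acquisitions appear benign.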

Third, there are several problems with describing this kind of behavior as harmful. Indeed, Cunningham, Ederer, and Ma themselves acknowledge that the prospect of such acquisitions could increase innovation by boosting the returns to it.

And even if one ignores incentives to innovate, product discontinuations can improve consumer welfare. This question ultimately boils down to identifying the counterfactual to a merger. As John Yun writes:

For instance, an acquisition that results in a discontinued product is not per se evidence of either consumer harm or benefit. The answer involves comparing the counterfactual world without the acquisition with the world with the acquisition. The comparison includes potential efficiencies that were gained from the acquisition, including integration of intellectual property, the reduction of transaction costs, economies of scope, and better allocation of skilled labor.

One of the reasons R&D project discontinuation may be beneficial is simply cost savings. R&D is expensive. Pharmaceutical firms spend up to 27.8% of their annual revenue on R&D. Developing a new drug has an estimated median cost of $985.3 million. Cost-cutting—notably as it concerns R&D—is thus a critical part of pharmaceutical (as well as tech) companies’ businesses. As a report by McKinsey concludes:

The recent boom in M&A in the pharma industry is partly the result of attempts to address short-term productivity challenges. An acquiring or merging company typically designs organization-wide integration programs to capture synergies, especially in costs. Such programs usually take up to three years to complete and deliver results.

Another report finds that:

Maximizing the efficiency of production labor and equipment is one important way top-quartile drugmakers break out of the pack. Their rates of operational-equipment effectiveness are more than twice those of bottom-quartile companies (Exhibit 1), and when we looked closely we found that processes account for two-thirds of the difference.

In short, pharmaceutical companies do not just compete along innovation-related parameters, though these are obviously important, but also on more traditional grounds such as cost-rationalization. Accordingly, as the above reports suggest, pharmaceutical mergers are often about applying an incumbent’s superior managerial efficiency to the acquired firm’s assets through operation of the market for corporate control.

This cost-cutting (and superior project selection) ultimately enables companies to offer lower prices, thereby benefiting consumers and increasing their incentives to invest in R&D in the first place by making successfully developed drugs more profitable.

In that sense, Henry Manne’s seminal work relating to mergers and the market for corporate control sheds at least as much light on pharmaceutical (and tech) mergers as the killer acquisitions literature. And yet, it is hardly ever mentioned in modern economic literature on this topic.

While Colleen Cunningham and her co-authors do not entirely ignore these considerations, as we discuss in our paper, their arguments for dismissing them are far from watertight.

A natural extension of the killer acquisitions work is to question whether mergers of this sort also take place in the tech industry. Interest in this question is driven both by the central role that digital markets currently occupy in competition-policy discussions and by the significant number of startup acquisitions that take place in the tech industry. However, existing studies provide scant evidence that killer acquisitions are a common occurrence in these markets.

This is not surprising. Unlike in the pharmaceutical industry—where drugs need to go through a lengthy and visible regulatory pipeline before they can be sold—incumbents in digital industries will likely struggle to identify their closest rivals and prevent firms from rapidly pivoting to seize new commercial opportunities. As a result, the basic conditions for killer acquisitions to take place (i.e., firms knowing they are in a position to share monopoly profits) are less likely to be present; it also would be harder to design research methods to detect these mergers.

The empirical literature on killer acquisitions in the tech sector is still in its infancy. But, as things stand, no study directly examines whether killer acquisitions actually take place in digital industries (i.e., whether post-merger project discontinuations are more common in overlapping than non-overlapping tech mergers). This is notably the case for studies by Axel Gautier & Joe Lamesch, and Elena Argentesi and her co-authors. Instead, these studies merely show that product discontinuations are common after an acquisition by a big tech company.

To summarize, while studies of this sort might suggest that the clearance of certain mergers was not optimal, they hardly provide a sufficient basis on which to argue that enforcement should be tightened.

The reason for this is simple. The fact that some anticompetitive mergers may have escaped scrutiny and/or condemnation is never a sufficient basis to tighten rules. For that, it is also necessary to factor in the administrative costs of increased enforcement, as well as potential false convictions to which it might give rise. As things stand, economic research on killer acquisitions in the tech sector does not warrant tougher antitrust enforcement, though it does show the need for further empirical research on the topic.

Conclusion

Many proposed merger-enforcement reforms risk throwing the baby out with the bathwater. Mergers are largely beneficial to society (here, here, and here); anticompetitive ones are rare; and there is little way, at the margin, to tell the good from the bad. To put it mildly, there is a precious baby that needs to be preserved and relatively little bathwater to throw out.

Take the pharmaceutical industry, the fulcrum of these policy debates. It is not hard to point to pharmaceutical mergers (or long-term agreements) that have revolutionized patient outcomes. Most recently, Pfizer and BioNTech’s efforts to successfully market an mRNA vaccine against COVID-19 offer a case in point.

The deal struck by both firms could naïvely be construed as bearing hallmarks of a killer acquisition or an anticompetitive agreement (long-term agreements can easily fall into either of these categories). Pfizer was a powerful incumbent in the vaccine industry; BioNTech threatened to disrupt the industry with new technology; and the deal likely caused Pfizer to forgo some independent R&D efforts. And yet, it also led to the first approved COVID-19 vaccine and groundbreaking advances in vaccine technology.

Of course, the counterfactual is unclear, and the market might be more competitive absent the deal, just as there might be only one approved mRNA vaccine today instead of two—we simply do not know. More importantly, this counterfactual was even less knowable at the time of the deal. And much the same could be said about countless other pharmaceutical mergers.

The key policy question is how authorities should handle this uncertainty. Critics of the status quo argue that current rules and thresholds leave certain anticompetitive deals unchallenged. But these calls for tougher enforcement fail to satisfy the requirements of the error-cost framework. Critics have so far failed to show that, on balance, mergers harm social welfare—even overlapping ones or mergers between potential competitors—just as they have yet to suggest alternative institutional arrangements that would improve social welfare.

In other words, they mistakenly analyze purported false negatives of merger-enforcement regimes in isolation. In doing so, they ignore how measures that aim to reduce such judicial errors may lead to other errors, as well as higher enforcement costs. In short, they paint a world where policy decisions involve facile tradeoffs, and this undermines their policy recommendations.

Given these significant limitations, this body of academic research should be met with an appropriate degree of caution. For all the criticism it has faced, the current merger-review system is mostly a resounding success. It is administrable, predictable, and timely. Yet it also eliminates the vast majority of judicial errors: even its critics concede that false negatives make up only a tiny fraction of decisions. Policymakers must decide whether the benefits from catching the very few arguably anticompetitive mergers that currently escape prosecution outweigh the significant costs required to achieve this goal. There is currently little evidence to suggest that this is, indeed, the case.

The language of the federal antitrust laws is extremely general. Over more than a century, the federal courts have applied common-law techniques to construe this general language to provide guidance to the private sector as to what does or does not run afoul of the law. The interpretive process has been fraught with some uncertainty, as judicial approaches to antitrust analysis have changed several times over the past century. Nevertheless, until very recently, judges and enforcers had converged toward relying on a consumer welfare standard as the touchstone for antitrust evaluations (see my antitrust primer here, for an overview).

While imperfect and subject to potential error in application—a problem of legal interpretation generally—the consumer welfare principle has worked rather well as the focus both for antitrust-enforcement guidance and judicial decision-making. The general stability and predictability of antitrust under a consumer welfare framework has advanced the rule of law. It has given businesses sufficient information to plan transactions in a manner likely to avoid antitrust liability. It thereby has cabined uncertainty and increased the probability that private parties would enter welfare-enhancing commercial arrangements, to the benefit of society.

In a very thoughtful 2017 speech, then-Acting Assistant Attorney General for Antitrust Andrew Finch commented on the importance of the rule of law to principled antitrust enforcement. He noted:

[H]ow do we administer the antitrust laws more rationally, accurately, expeditiously, and efficiently? … Law enforcement requires stability and continuity both in rules and in their application to specific cases.

Indeed, stability and continuity in enforcement are fundamental to the rule of law. The rule of law is about notice and reliance. When it is impossible to make reasonable predictions about how a law will be applied, or what the legal consequences of conduct will be, these important values are diminished. To call our antitrust regime a “rule of law” regime, we must enforce the law as written and as interpreted by the courts and advance change with careful thought.

The reliance fostered by stability and continuity has obvious economic benefits. Businesses invest, not only in innovation but in facilities, marketing, and personnel, and they do so based on the economic and legal environment they expect to face.

Of course, we want businesses to make those investments—and shape their overall conduct—in accordance with the antitrust laws. But to do so, they need to be able to rely on future application of those laws being largely consistent with their expectations. An antitrust enforcement regime with frequent changes is one that businesses cannot plan for, or one that they will plan for by avoiding certain kinds of investments.

That is certainly not to say there has not been positive change in the antitrust laws in the past, or that we would have been better off without those changes. U.S. antitrust law has been refined, and occasionally recalibrated, with the courts playing their appropriate interpretive role. And enforcers must always be on the watch for new or evolving threats to competition.  As markets evolve and products develop over time, our analysis adapts. But as those changes occur, we pursue reliability and consistency in application in the antitrust laws as much as possible.

Indeed, we have enjoyed remarkable continuity and consensus for many years. Antitrust law in the U.S. has not been a “paradox” for quite some time, but rather a stable and valuable law enforcement regime with appropriately widespread support.

Unfortunately, policy decisions taken by the new Federal Trade Commission (FTC) leadership in recent weeks have rejected antitrust continuity and consensus. They have injected substantial uncertainty into the application of competition-law enforcement by the FTC. This abrupt change in emphasis undermines the rule of law and threatens to reduce economic welfare.

As of now, the FTC’s departure from the rule of law has been notable in two areas:

  1. Its rejection of previous guidance on the agency’s “unfair methods of competition” authority, the FTC’s primary non-merger-related enforcement tool; and
  2. Its new advice rejecting time limits for the review of generally routine proposed mergers.

In addition, potential FTC rulemakings directed at “unfair methods of competition” would, if pursued, prove highly problematic.

Rescission of the Unfair Methods of Competition Policy Statement

The FTC on July 1 voted 3-2 to rescind the 2015 FTC Policy Statement Regarding Unfair Methods of Competition under Section 5 of the FTC Act (UMC Policy Statement).

The bipartisan UMC Policy Statement was originally supported by all three Democratic commissioners, including then-Chairwoman Edith Ramirez. The policy statement generally respected and promoted the rule of law by emphasizing that, in applying the facially broad “unfair methods of competition” (UMC) language, the FTC would be guided by the well-established principles of the antitrust rule of reason (including consideration of any associated cognizable efficiencies and business justifications) and the consumer welfare standard. The FTC also explained that it would not apply “standalone” Section 5 theories to conduct that would violate the Sherman or Clayton Acts.

In short, the UMC Policy Statement sent a strong signal that the commission would apply UMC in a manner fully consistent with accepted and well-understood antitrust policy principles. As in the past, the vast bulk of FTC Section 5 prosecutions would be brought against conduct that violated the core antitrust laws. Standalone Section 5 cases would be directed solely at those few practices that harmed consumer welfare and competition, but somehow fell into a narrow crack in the basic antitrust statutes (such as, perhaps, “invitations to collude” that lack plausible efficiency justifications). Although the UMC Statement did not answer all questions regarding what specific practices would justify standalone UMC challenges, it substantially limited business uncertainty by bringing Section 5 within the boundaries of settled antitrust doctrine.

The FTC’s announcement of the UMC Policy Statement rescission unhelpfully proclaimed that “the time is right for the Commission to rethink its approach and to recommit to its mandate to police unfair methods of competition even if they are outside the ambit of the Sherman or Clayton Acts.” As a dissenting statement by Commissioner Christine S. Wilson warned, consumers would be harmed by the commission’s decision to prioritize other unnamed interests. And as Commissioner Noah Joshua Phillips stressed in his dissent, the end result would be reduced guidance and greater uncertainty.

In sum, by suddenly leaving private parties in the dark as to how to conform themselves to Section 5’s UMC requirements, the FTC’s rescission offends the rule of law.

New Guidance to Parties Considering Mergers

For decades, parties proposing mergers that are subject to statutory Hart-Scott-Rodino (HSR) Act pre-merger notification requirements have operated under the understanding that:

  1. The FTC and U.S. Justice Department (DOJ) will routinely grant “early termination” of review (before the end of the initial 30-day statutory review period) to those transactions posing no plausible competitive threat; and
  2. An enforcement agency’s decision not to request more detailed documents (“second requests”) after an initial 30-day pre-merger review effectively serves as an antitrust “green light” for the proposed acquisition to proceed.

Those understandings, though not statutorily mandated, have significantly reduced antitrust uncertainty and related costs in the planning of routine merger transactions. The rule of law has been advanced through an effective assurance that business combinations that appear presumptively lawful will not be the target of future government legal harassment. This has advanced efficiency in government as well; it is a cost-effective use of scarce agency resources for DOJ and the FTC to focus exclusively on those proposed mergers that present a substantial potential threat to consumer welfare.

Two recent FTC pronouncements (one in tandem with DOJ), however, have generated great uncertainty by disavowing (at least temporarily) those two welfare-promoting review policies. Joined by DOJ, the FTC on Feb. 4 announced that the agencies would temporarily suspend early terminations, citing an “unprecedented volume of filings” and a transition to new leadership. More than six months later, this “temporary” suspension remains in effect.

Citing “capacity constraints” and a “tidal wave of merger filings,” the FTC subsequently published an Aug. 3 blog post that effectively abrogated the 30-day “green lighting” of mergers not subject to a second request. It announced that it was sending “warning letters” to firms reminding them that FTC investigations remain open after the initial 30-day period, and that “[c]ompanies that choose to proceed with transactions that have not been fully investigated are doing so at their own risk.”

The FTC’s actions interject unwarranted uncertainty into merger planning and undermine the rule of law. Suspending early termination for the sorts of transactions that have routinely received it not only imposes additional costs on business; it hints that some transactions might be subject to novel theories of liability that fall outside the antitrust consensus.

Perhaps more significantly, as three prominent antitrust practitioners point out, the FTC’s warning letters state that:

[T]he FTC may challenge deals that “threaten to reduce competition and harm consumers, workers, and honest businesses.” Adding in harm to both “workers and honest businesses” implies that the FTC may be considering more ways that transactions can have an adverse impact other than just harm to competition and consumers [citation omitted].

Because consensus antitrust merger analysis centers on consumer welfare, not the protection of labor or business interests, any suggestion that the FTC may be extending its reach to these new areas is inconsistent with established legal principles and generates new business-planning risks.

More generally, the Aug. 3 FTC blog post “could be viewed as an attempt to modify the temporal framework of the HSR Act”—in effect, an effort to displace an implicit statutory understanding in favor of an agency diktat, contrary to the rule of law. Commissioner Wilson sees the blog post as a means to keep investigations open indefinitely and, thus, an attack on the decades-old HSR framework for handling most merger reviews in an expeditious fashion (see here). Commissioner Phillips is concerned about an attempt to chill legal M&A transactions across the board, which is particularly unfortunate when there is no reason to conclude that particular transactions are illegal (see here).

Finally, the historical record raises serious questions about the “resource constraint” justification for the FTC’s new merger review policies:

Through the end of July 2021, more than 2,900 transactions were reported to the FTC. It is not clear, however, whether these record-breaking HSR filing numbers have led (or will lead) to more deals being investigated. Historically, only about 13 percent of all deals reported are investigated in some fashion, and roughly 3 percent of all deals reported receive a more thorough, substantive review through the issuance of a Second Request. Even if more deals are being reported, for the majority of transactions, the HSR process is purely administrative, raising no antitrust concerns, and, theoretically, uses few, if any, agency resources. [Citations omitted.]

Proposed FTC Competition Rulemakings

The new FTC leadership is strongly considering competition rulemakings. As I explained in a recent Truth on the Market post, such rulemakings would fail a cost-benefit test. They raise serious legal risks for the commission and could waste the resources of both the FTC and private parties. More significantly, they would raise two very serious economic policy concerns:

First, competition rules would generate higher error costs than adjudications. Adjudications cabin error costs by allowing for case-specific analysis of likely competitive harms and procompetitive benefits. In contrast, competition rules inherently would be overbroad and would suffer from a very high rate of false positives. By characterizing certain practices as inherently anticompetitive without allowing for consideration of case-specific facts bearing on actual competitive effects, findings of rule violations inevitably would condemn some (perhaps many) efficient arrangements.

Second, competition rules would undermine the rule of law and thereby reduce economic welfare. FTC-only competition rules could lead to disparate legal treatment of a firm’s business practices, depending upon whether the FTC or the U.S. Justice Department was the investigating agency. Also, economic efficiency gains could be lost due to the chilling of aggressive efficiency-seeking business arrangements in those sectors subject to rules.

In short, common law antitrust adjudication, focused on the consumer welfare standard, has done a good job of promoting a vibrant competitive economy in an efficient fashion. FTC competition rulemaking would not.

Conclusion

Recent FTC actions have undermined consensus antitrust-enforcement standards and have departed from established merger-review procedures with respect to seemingly uncontroversial consolidations. Those decisions have imposed costly uncertainty on the business sector and are thereby likely to disincentivize efficiency-seeking arrangements. What’s more, by implicitly rejecting consensus antitrust principles, they denigrate the primacy of the rule of law in antitrust enforcement. The FTC’s pursuit of competition rulemaking would further damage the rule of law by imposing arbitrary strictures that ignore matter-specific considerations bearing on the justifications for particular business decisions.

Fortunately, these are early days in the Biden administration. The problematic initial policy decisions delineated in this comment could be reversed based on further reflection and deliberation within the commission. Chairwoman Lina Khan and her fellow Democratic commissioners would benefit by consulting more closely with Commissioners Wilson and Phillips to reach agreement on substantive and procedural enforcement policies that are better tailored to promote consumer welfare and enhance vibrant competition. Such policies would benefit the U.S. economy in a manner consistent with the rule of law.

[This post adapts elements of “Should ASEAN Antitrust Laws Emulate European Competition Policy?”, published in the Singapore Economic Review (2021). Open access working paper here.]

U.S. and European competition laws diverge in numerous ways that have important real-world effects. Understanding these differences is vital, particularly as lawmakers in the United States, and the rest of the world, consider adopting a more “European” approach to competition.

In broad terms, the European approach is more centralized and political. The European Commission’s Directorate General for Competition (DG Comp) has significant de facto discretion over how the law is enforced. This contrasts with the common law approach of the United States, in which courts elaborate upon open-ended statutes through an iterative process of case law. In other words, the European system was built from the top down, while U.S. antitrust relies on a bottom-up approach, derived from arguments made by litigants (including the government antitrust agencies) and defendants (usually businesses).

This procedural divergence has significant ramifications for substantive law. European competition law includes more provisions akin to de facto regulation. This is notably the case for the “abuse of dominance” standard, in which a “dominant” business can be prosecuted for “abusing” its position by charging high prices or refusing to deal with competitors. By contrast, the U.S. system places more emphasis on actual consumer outcomes, rather than the nature or “fairness” of an underlying practice.

The American system thus affords firms more leeway to exclude their rivals, so long as this entails superior benefits for consumers. This may make the U.S. system more hospitable to innovation, since there is no built-in regulation of conduct for innovators who acquire a successful market position fairly and through normal competition.

In this post, we discuss some key differences between the two systems—including in areas like predatory pricing and refusals to deal—as well as the discretionary power the European Commission enjoys under the European model.

Exploitative Abuses

U.S. antitrust is, by and large, unconcerned with companies charging what some might consider “excessive” prices. The late Associate Justice Antonin Scalia, writing for the Supreme Court majority in the 2004 case Verizon v. Trinko, observed that:

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices—at least for a short period—is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth.

This contrasts with European competition-law cases, where firms may be found to have infringed competition law because they charged excessive prices. As the European Court of Justice (ECJ) held in 1978’s United Brands case: “In this case charging a price which is excessive because it has no reasonable relation to the economic value of the product supplied would be such an abuse.”

United Brands was the EU’s foundational case on excessive pricing, and the European Commission reiterated that these allegedly exploitative abuses remained possible when it published its guidance paper on abuse-of-dominance cases in 2009. Even so, the commission had for some time shown little interest in bringing such cases. In recent years, however, both the European Commission and some national authorities have shown renewed interest in excessive-pricing cases, most notably in the pharmaceutical sector.

European competition law also penalizes so-called “margin squeeze” abuses, in which a dominant upstream supplier charges a price to distributors that is too high for them to compete effectively with that same dominant firm downstream:

[I]t is for the referring court to examine, in essence, whether the pricing practice introduced by TeliaSonera is unfair in so far as it squeezes the margins of its competitors on the retail market for broadband connection services to end users. (Konkurrensverket v TeliaSonera Sverige, 2011)

As Scalia observed in Trinko, forcing firms to charge prices that are below a market’s natural equilibrium affects firms’ incentives to enter markets, notably with innovative products and more efficient means of production. But the problem is not just one of market entry and innovation.  Also relevant is the degree to which competition authorities are competent to determine the “right” prices or margins.

As Friedrich Hayek demonstrated in his influential 1945 essay The Use of Knowledge in Society, economic agents use information gleaned from prices to guide their business decisions. It is this distributed activity of thousands or millions of economic actors that enables markets to put resources to their most valuable uses, thereby leading to more efficient societies. By comparison, the efforts of central regulators to set prices and margins are necessarily inferior; there is simply no reasonable way for competition regulators to make such judgments in a consistent and reliable manner.

Given the substantial risk that investigations into purportedly excessive prices will deter market entry, such investigations should be circumscribed. But the court’s precedents, with their myopic focus on ex post prices, do not impose such constraints on the commission. The temptation to “correct” high prices—especially in the politically contentious pharmaceutical industry—may thus induce economically unjustified and ultimately deleterious intervention.

Predatory Pricing

A second important area of divergence concerns predatory-pricing cases. U.S. antitrust law subjects allegations of predatory pricing to two strict conditions:

  1. Monopolists must charge prices that are below some measure of their incremental costs; and
  2. There must be a realistic prospect that they will be able to recoup these initial losses.

In laying out its approach to predatory pricing, the U.S. Supreme Court has identified the risk of false positives and the clear cost of such errors to consumers. It thus has particularly stressed the importance of the recoupment requirement. As the court found in 1993’s Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., without recoupment, “predatory pricing produces lower aggregate prices in the market, and consumer welfare is enhanced.”

Accordingly, U.S. authorities must prove that there are constraints that prevent rival firms from entering the market after the predation scheme, or that the scheme itself would effectively foreclose rivals from entering the market in the first place. Otherwise, the predator would be undercut by competitors as soon as it attempts to recoup its losses by charging supra-competitive prices.

The overwhelming weight of economic evidence (to say nothing of simple logic) indicates that, without a strong likelihood that the monopolist will be able to recoup the revenue lost through underpricing, predatory pricing is not a rational business strategy. Thus, apparent cases of predatory pricing are most likely not, in fact, predatory; deterring or punishing them would actually harm consumers.

By contrast, the EU employs a more expansive legal standard to define predatory pricing, and almost certainly risks injuring consumers as a result. Authorities must prove only that a company has charged a price below its average variable cost, in which case its behavior is presumed to be predatory. Even when a firm charges prices that are between its average variable and average total cost, it can be found guilty of predatory pricing if authorities show that its behavior was part of a plan to eliminate a competitor. Most significantly, in neither case is it necessary for authorities to show that the scheme would allow the monopolist to recoup its losses.

[I]t does not follow from the case‑law of the Court that proof of the possibility of recoupment of losses suffered by the application, by an undertaking in a dominant position, of prices lower than a certain level of costs constitutes a necessary precondition to establishing that such a pricing policy is abusive. (France Télécom v Commission, 2009).
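To make the structural difference between the two standards concrete, the following schematic sketch (illustrative Python, a simplification of the case law rather than a statement of it; the cost benchmarks follow the text, with AVC denoting average variable cost and ATC average total cost) contrasts a Brooke Group-style test with the EU approach described above:

```python
# Schematic comparison of the two predatory-pricing standards described above.
# A simplification for illustration only, not a statement of the actual case law.
# AVC = average variable cost; ATC = average total cost.

def us_predatory_pricing(price, avc, recoupment_likely):
    """Brooke Group-style test: below-cost pricing AND a realistic
    prospect of recouping the initial losses."""
    return price < avc and recoupment_likely

def eu_predatory_pricing(price, avc, atc, plan_to_eliminate_rival):
    """EU-style test as described above: prices below AVC are presumed
    predatory; prices between AVC and ATC are predatory if part of a plan
    to eliminate a competitor. No showing of recoupment is required."""
    if price < avc:
        return True
    if avc <= price < atc:
        return plan_to_eliminate_rival
    return False

# Aggressive pricing with no realistic prospect of recoupment:
print(us_predatory_pricing(price=8, avc=10, recoupment_likely=False))                # False
print(eu_predatory_pricing(price=8, avc=10, atc=12, plan_to_eliminate_rival=False))  # True
```

The only structural difference in this sketch is the recoupment condition—precisely the element that, as the quoted passage shows, the EU courts decline to require.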

This aspect of the legal standard has no basis in economic theory or evidence—not even in the “strategic” economic theory that arguably challenges the dominant Chicago School understanding of predatory pricing. Indeed, strategic predatory pricing still requires some form of recoupment, and the refutation of any convincing business justification offered in response. For example, in a 2017 piece for the Antitrust Law Journal, Steven Salop lays out the “raising rivals’ costs” analysis of predation and notes that recoupment still occurs, just at the same time as predation:

[T]he anticompetitive conditional pricing practice does not involve discrete predatory and recoupment periods, as in the case of classical predatory pricing. Instead, the recoupment occurs simultaneously with the conduct. This is because the monopolist is able to maintain its current monopoly power through the exclusionary conduct.

The case of predatory pricing illustrates a crucial distinction between European and American competition law. The recoupment requirement embodied in American antitrust law serves to differentiate aggressive pricing behavior that improves consumer welfare—because it leads to overall price decreases—from predatory pricing that reduces welfare with higher prices. It is, in other words, entirely focused on the welfare of consumers.

The European approach, by contrast, reflects structuralist considerations far removed from a concern for consumer welfare. Its underlying fear is that dominant companies could use aggressive pricing to engender more concentrated markets. It is simply presumed that these more concentrated markets are invariably detrimental to consumers. Both the Tetra Pak and France Télécom cases offer clear illustrations of the ECJ’s reasoning on this point:

[I]t would not be appropriate, in the circumstances of the present case, to require in addition proof that Tetra Pak had a realistic chance of recouping its losses. It must be possible to penalize predatory pricing whenever there is a risk that competitors will be eliminated… The aim pursued, which is to maintain undistorted competition, rules out waiting until such a strategy leads to the actual elimination of competitors. (Tetra Pak v Commission, 1996).

Similarly:

[T]he lack of any possibility of recoupment of losses is not sufficient to prevent the undertaking concerned reinforcing its dominant position, in particular, following the withdrawal from the market of one or a number of its competitors, so that the degree of competition existing on the market, already weakened precisely because of the presence of the undertaking concerned, is further reduced and customers suffer loss as a result of the limitation of the choices available to them.  (France Télécom v Commission, 2009).

In short, the European approach leaves less room to analyze the concrete effects of a given pricing scheme, leaving it more prone to false positives than the U.S. standard explicated in the Brooke Group decision. Worse still, the European approach ignores not only the benefits that consumers may derive from lower prices, but also the chilling effect that broad predatory pricing standards may exert on firms that would otherwise seek to use aggressive pricing schemes to attract consumers.

Refusals to Deal

U.S. and EU antitrust law also differ greatly when it comes to refusals to deal. While the United States has limited the ability of either enforcement authorities or rivals to bring such cases, EU competition law sets a far lower threshold for liability.

As Justice Scalia wrote in Trinko:

Aspen Skiing is at or near the outer boundary of §2 liability. The Court there found significance in the defendant’s decision to cease participation in a cooperative venture. The unilateral termination of a voluntary (and thus presumably profitable) course of dealing suggested a willingness to forsake short-term profits to achieve an anticompetitive end. (Verizon v Trinko, 2004.)

This highlights two key features of American antitrust law with regard to refusals to deal. To start, U.S. antitrust law generally does not apply the “essential facilities” doctrine. Accordingly, in the absence of exceptional facts, upstream monopolists are rarely required to supply their product to downstream rivals, even if that supply is “essential” for effective competition in the downstream market. Moreover, as Justice Scalia observed in Trinko, the Aspen Skiing case appears to concern only those limited instances where a firm’s refusal to deal stems from the termination of a preexisting and profitable business relationship.

While even this is not likely the economically appropriate limitation on liability, its impetus—ensuring that liability is found only in situations where procompetitive explanations for the challenged conduct are unlikely—is completely appropriate for a regime concerned with minimizing the cost to consumers of erroneous enforcement decisions.

As in most areas of antitrust policy, EU competition law is much more interventionist. Refusals to deal are a central theme of EU enforcement efforts, and there is a relatively low threshold for liability.

In theory, for a refusal to deal to infringe EU competition law, it must meet a set of fairly stringent conditions: the input must be indispensable, the refusal must eliminate all competition in the downstream market, and there must not be objective reasons that justify the refusal. Moreover, if the refusal to deal involves intellectual property, it must also prevent the emergence of a new product.

In practice, however, all of these conditions have been relaxed significantly by EU courts and the commission’s decisional practice. This is best evidenced by the lower court’s Microsoft ruling where, as John Vickers notes:

[T]he Court found easily in favor of the Commission on the IMS Health criteria, which it interpreted surprisingly elastically, and without relying on the special factors emphasized by the Commission. For example, to meet the “new product” condition it was unnecessary to identify a particular new product… thwarted by the refusal to supply but sufficient merely to show limitation of technical development in terms of less incentive for competitors to innovate.

EU competition law thus shows far less concern for its potential chilling effect on firms’ investments than does U.S. antitrust law.

Vertical Restraints

There are vast differences between U.S. and EU competition law relating to vertical restraints—that is, contractual restraints between firms that operate at different levels of the production process.

On the one hand, since the Supreme Court’s Leegin ruling in 2007, even price-related vertical restraints (such as resale price maintenance (RPM), under which a manufacturer can stipulate the prices at which retailers must sell its products) are assessed under the rule of reason in the United States. Some commentators have gone so far as to say that, in practice, U.S. case law on RPM almost amounts to per se legality.

Conversely, EU competition law treats RPM as severely as it treats cartels. Both RPM and cartels are considered to be restrictions of competition “by object”—the EU’s equivalent of a per se prohibition. This severe treatment also applies to non-price vertical restraints that tend to partition the European internal market.

Furthermore, in the Consten and Grundig ruling, the ECJ rejected the consequentialist, and economically grounded, principle that inter-brand competition is the appropriate framework to assess vertical restraints:

Although competition between producers is generally more noticeable than that between distributors of products of the same make, it does not thereby follow that an agreement tending to restrict the latter kind of competition should escape the prohibition of Article 85(1) merely because it might increase the former. (Consten SARL & Grundig-Verkaufs-GMBH v. Commission of the European Economic Community, 1966).

This treatment of vertical restrictions flies in the face of longstanding mainstream economic analysis of the subject. As Patrick Rey and Jean Tirole conclude:

Another major contribution of the earlier literature on vertical restraints is to have shown that per se illegality of such restraints has no economic foundations.

Unlike the EU courts, the U.S. Supreme Court in Leegin took account of the weight of the economic literature and changed its approach to RPM to ensure that the law no longer simply precluded its arguable consumer benefits, writing: “Though each side of the debate can find sources to support its position, it suffices to say here that economics literature is replete with procompetitive justifications for a manufacturer’s use of resale price maintenance.” Further, the court found that the prior approach to resale price maintenance restraints “hinders competition and consumer welfare because manufacturers are forced to engage in second-best alternatives and because consumers are required to shoulder the increased expense of the inferior practices.”

The EU’s continued per se treatment of RPM, by contrast, strongly reflects its “precautionary principle” approach to antitrust. European regulators and courts readily condemn conduct that could conceivably injure consumers, even where such injury is, according to the best economic understanding, exceedingly unlikely. The U.S. approach, which rests on likelihood rather than mere possibility, is far less likely to condemn beneficial conduct erroneously.

Political Discretion in European Competition Law

EU competition law lacks a coherent analytical framework like that found in U.S. law’s reliance on the consumer welfare standard. The EU process is driven by a number of laterally equivalent—and sometimes mutually exclusive—goals, including industrial policy and the perceived need to counteract foreign state ownership and subsidies. Such a wide array of conflicting aims produces a lack of clarity for firms seeking to conduct business. Moreover, the discretion that attends this fluid arrangement of goals yields an even larger problem.

The Microsoft case illustrates this problem well. In Microsoft, the commission could have chosen to base its decision on various potential objectives. It notably chose to base its findings on the fact that Microsoft’s behavior reduced “consumer choice.”

The commission, in fact, discounted arguments that economic efficiency may lead to consumer welfare gains, because it determined “consumer choice” among media players was more important:

Another argument relating to reduced transaction costs consists in saying that the economies made by a tied sale of two products saves resources otherwise spent for maintaining a separate distribution system for the second product. These economies would then be passed on to customers who could save costs related to a second purchasing act, including selection and installation of the product. Irrespective of the accuracy of the assumption that distributive efficiency gains are necessarily passed on to consumers, such savings cannot possibly outweigh the distortion of competition in this case. This is because distribution costs in software licensing are insignificant; a copy of a software programme can be duplicated and distributed at no substantial effort. In contrast, the importance of consumer choice and innovation regarding applications such as media players is high. (Commission Decision No. COMP. 37792 (Microsoft)).

It may be true that tying the products in question was unnecessary. But merely dismissing this decision because distribution costs are near-zero is hardly an analytically satisfactory response. There are many more costs involved in creating and distributing complementary software than those associated with hosting and downloading. The commission also simply asserts that consumer choice among some arbitrary number of competing products is necessarily a benefit. This, too, is not necessarily true, and the decision’s implication that any marginal increase in choice is more valuable than any gains from product design or innovation is analytically incoherent.

The Court of First Instance was only too happy to give the commission a pass for this breezy analysis; it saw no objection to these findings and, with little substantive reasoning of its own, fully endorsed the commission’s assessment:

As the Commission correctly observes (see paragraph 1130 above), by such an argument Microsoft is in fact claiming that the integration of Windows Media Player in Windows and the marketing of Windows in that form alone lead to the de facto standardisation of the Windows Media Player platform, which has beneficial effects on the market. Although, generally, standardisation may effectively present certain advantages, it cannot be allowed to be imposed unilaterally by an undertaking in a dominant position by means of tying.

The Court further notes that it cannot be ruled out that third parties will not want the de facto standardisation advocated by Microsoft but will prefer it if different platforms continue to compete, on the ground that that will stimulate innovation between the various platforms. (Microsoft Corp. v Commission, 2007)

Pointing to these conflicting effects of Microsoft’s bundling decision, without weighing either, is a weak basis to uphold the commission’s decision that consumer choice outweighs the benefits of standardization. Moreover, actions undertaken by other firms to enhance consumer choice at the expense of standardization are, on these terms, potentially just as problematic. The dividing line becomes solely which theory the commission prefers to pursue.

What such a practice does is vest the commission with immense discretionary power. Any given case sets up a “heads, I win; tails, you lose” situation in which defendants are easily outflanked by a commission that can change the rules of its analysis as it sees fit. Defendants can play only the cards that they are dealt. Accordingly, Microsoft could not successfully challenge a conclusion that its behavior harmed consumers’ choice by arguing that it improved consumer welfare, on net.

By selecting, in this instance, “consumer choice” as the standard to be judged, the commission was able to evade the constraints that might have been imposed by a more robust welfare standard. Thus, the commission can essentially pick and choose the objectives that best serve its interests in each case. This vastly enlarges the scope of potential antitrust liability, while also substantially decreasing the ability of firms to predict when their behavior may be viewed as problematic. It leads to what, in U.S. courts, would be regarded as an untenable risk of false positives that chill innovative behavior and create nearly unwinnable battles for targeted firms.

Over the past decade and a half, virtually every branch of the federal government has taken steps to weaken the patent system. As reflected in President Joe Biden’s July 2021 executive order, these restraints on patent enforcement are now being coupled with antitrust policies that, in large part, adopt a “big is bad” approach in place of decades of economically grounded case law and agency guidelines.

This policy bundle is nothing new. It largely replicates the innovation policies pursued during the late New Deal and the postwar decades. That historical experience suggests that a “weak-patent/strong-antitrust” approach is likely to encourage neither innovation nor competition.

The Overlooked Shortfalls of New Deal Innovation Policy

Starting in the early 1930s, the U.S. Supreme Court issued a sequence of decisions that raised obstacles to patent enforcement. The Franklin Roosevelt administration sought to take this policy a step further, advocating compulsory licensing for all patents. While Congress did not adopt this proposal, it was partially implemented as a de facto matter through antitrust enforcement. Starting in the early 1940s and continuing throughout the postwar decades, the antitrust agencies secured judicial precedents that treated a broad range of licensing practices as per se illegal. Perhaps most dramatically, the U.S. Justice Department (DOJ) secured more than 100 compulsory licensing orders against some of the nation’s largest companies. 

The rationale behind these policies was straightforward. By compelling access to incumbents’ patented technologies, courts and regulators would lower barriers to entry and competition would intensify. The postwar economy declined to comply with policymakers’ expectations. Implementation of a weak-IP/strong-antitrust innovation policy over the course of four decades yielded the opposite of its intended outcome. 

Market concentration did not diminish, turnover in market leadership was slow, and private research and development (R&D) was confined mostly to the research labs of the largest corporations (who often relied on generous infusions of federal defense funding). These tendencies are illustrated by the dramatically unequal allocation of innovation capital in the postwar economy.  As of the late 1950s, small firms represented approximately 7% of all private U.S. R&D expenditures.  Two decades later, that figure had fallen even further. By the late 1970s, patenting rates had plunged, and entrepreneurship and innovation were in a state of widely lamented decline.

Why Weak IP Raises Entry Costs and Promotes Concentration

The decline in entrepreneurial innovation under a weak-IP regime was not accidental. Rather, this outcome can be derived logically from the economics of information markets.

Without secure IP rights to establish exclusivity, engage securely with business partners, and deter imitators, potential innovator-entrepreneurs had little hope of obtaining funding from investors. In contrast, incumbents could fund R&D internally (or with federal funds that flowed mostly to the largest computing, communications, and aerospace firms) and, even under a weak-IP regime, were protected by difficult-to-match production and distribution efficiencies. As a result, R&D mostly took place inside the closed ecosystems maintained by incumbents such as AT&T, IBM, and GE.

Paradoxically, the antitrust campaign against patent “monopolies” most likely raised entry barriers and promoted industry concentration by removing a critical tool that smaller firms might have used to challenge incumbents that could outperform on every competitive parameter except innovation. While the large corporate labs of the postwar era are rightly credited with technological breakthroughs, incumbents such as AT&T were often slow in transforming breakthroughs in basic research into commercially viable products and services for consumers. Without an immediate competitive threat, there was no rush to do so. 

Back to the Future: Innovation Policy in the New New Deal

Policymakers are now at work reassembling almost the exact same policy bundle that ended in the innovation malaise of the 1970s, accompanied by a similar reliance on public R&D funding disbursed through administrative processes. However well-intentioned, these processes are inherently exposed to political distortions that are absent in an innovation environment that relies mostly on private R&D funding governed by price signals. 

This policy bundle has emerged incrementally since approximately the mid-2000s, through a sequence of complementary actions by every branch of the federal government.

  • In 2011, Congress enacted the America Invents Act, which enables any party to challenge the validity of an issued patent through the U.S. Patent and Trademark Office’s (USPTO) Patent Trial and Appeal Board (PTAB). Since PTAB’s establishment, large information-technology companies that advocated for the act have been among the leading challengers.
  • In May 2021, the Office of the U.S. Trade Representative (USTR) declared its support for a worldwide suspension of IP protections over Covid-19-related innovations (rather than adopting the more nuanced approach of preserving patent protections and expanding funding to accelerate vaccine distribution).  
  • President Biden’s July 2021 executive order states that “the Attorney General and the Secretary of Commerce are encouraged to consider whether to revise their position on the intersection of the intellectual property and antitrust laws, including by considering whether to revise the Policy Statement on Remedies for Standard-Essential Patents Subject to Voluntary F/RAND Commitments.” This suggests that the administration has already determined to retract or significantly modify the 2019 joint policy statement in which the DOJ, USPTO, and the National Institute of Standards and Technology (NIST) rejected the view that standard-essential patent owners pose a high risk of patent holdup that would justify special limitations on enforcement and licensing activities.

The history of U.S. technology markets and policies casts great doubt on the wisdom of this weak-IP policy trajectory. The repeated devaluation of IP rights is likely to be a “lose-lose” approach that does little to promote competition, while endangering the incentive and transactional structures that sustain robust innovation ecosystems. A weak-IP regime is particularly likely to disadvantage smaller firms in biotech, medical devices, and certain information-technology segments that rely on patents to secure funding from venture capital and to partner with larger firms that can accelerate progress toward market release. The BioNTech/Pfizer alliance in the production and distribution of a Covid-19 vaccine illustrates how patents can enable such partnerships to accelerate market release.  

The innovative contribution of BioNTech is hardly a one-off occurrence. The restoration of robust patent protection in the early 1980s was followed by a sharp increase in the percentage of private R&D expenditures attributable to small firms, which jumped from about 5% as of 1980 to 21% by 1992. This contrasts sharply with the unequal allocation of R&D activities during the postwar period.

Remarkably, the resurgence of small-firm innovation following the strong-IP policy shift, starting in the late 20th century, mimics tendencies observed during the late 19th and early-20th centuries, when U.S. courts provided a hospitable venue for patent enforcement; there were few antitrust constraints on licensing activities; and innovation was often led by small firms in partnership with outside investors. This historical pattern, encompassing more than a century of U.S. technology markets, strongly suggests that strengthening IP rights tends to yield a policy “win-win” that bolsters both innovative and competitive intensity. 

An Alternate Path: ‘Bottom-Up’ Innovation Policy

To be clear, the alternative to the policy bundle of weak-IP/strong antitrust does not consist of a simple reversion to blind enforcement of patents and lax administration of the antitrust laws. A nuanced innovation policy would couple modern antitrust’s commitment to evidence-based enforcement—which, in particular cases, supports vigorous intervention—with a renewed commitment to protecting IP rights for innovator-entrepreneurs. That would promote competition from the “bottom up” by bolstering maverick innovators who are well-positioned to challenge (or sometimes partner with) incumbents and maintaining the self-starting engine of creative disruption that has repeatedly driven entrepreneurial innovation environments. Tellingly, technology incumbents have often been among the leading advocates for limiting patent and copyright protections.  

Advocates of a weak-patent/strong-antitrust policy believe it will enhance competitive and innovative intensity in technology markets. History suggests that this combination is likely to produce the opposite outcome.  

Jonathan M. Barnett is the Torrey H. Webb Professor of Law at the University of Southern California, Gould School of Law. This post is based on the author’s recent publications, Innovators, Firms, and Markets: The Organizational Logic of Intellectual Property (Oxford University Press 2021) and “The Great Patent Grab,” in Battles Over Patents: History and the Politics of Innovation (eds. Stephen H. Haber and Naomi R. Lamoreaux, Oxford University Press 2021).

In recent years, a diverse cross-section of advocates and politicians have leveled criticisms at Section 230 of the Communications Decency Act and its grant of legal immunity to interactive computer services. Proposed legislative changes to the law have been put forward by both Republicans and Democrats.

It remains unclear whether Congress (or the courts) will amend Section 230, but any changes are bound to expand the scope, uncertainty, and expense of content risks. That’s why it’s important that such changes be developed and implemented in ways that minimize their potential to significantly disrupt and harm online activity. This piece focuses on those insurable content risks that most frequently result in litigation and considers the effect of the direct and indirect costs caused by frivolous suits and lawfare, not just the ultimate potential for a court to find liability. The experience of the 1980s asbestos-litigation crisis offers a warning of what could go wrong.

Enacted in 1996, Section 230 was intended to promote the Internet as a diverse medium for discourse, cultural development, and intellectual activity by shielding interactive computer services from legal liability when blocking or filtering access to obscene, harassing, or otherwise objectionable content. Absent such immunity, a platform hosting content produced by third parties could be held just as responsible as the content’s creator for claims alleging defamation or invasion of privacy.

In the current legislative debates, Section 230’s critics on the left argue that the law does not go far enough to combat hate speech and misinformation. Critics on the right claim the law protects censorship of dissenting opinions. Legal challenges to the current wording of Section 230 arise primarily from what constitutes an “interactive computer service,” “good faith” restriction of content, and the grant of legal immunity, regardless of whether the restricted material is constitutionally protected. 

While Congress and various stakeholders debate alternative statutory frameworks, several test cases have simultaneously been working their way through the judicial system, and some states have either passed or are considering legislation to address complaints about Section 230. Some have suggested passing new federal legislation classifying online platforms as common carriers as an alternate approach that does not involve amending or repealing Section 230. Regardless of the form it may take, change to the status quo is likely to increase the risk of litigation and liability for those hosting or publishing third-party content.

The Nature of Content Risk

The class of individuals and organizations exposed to content risk has never been broader. Any information, content, or communication that is created, gathered, compiled, or amended can be considered “material” which, when disseminated to third parties, may be deemed “publishing.” Liability can arise from any step in that process. Those who republish material are generally held to the same standard of liability as if they were the original publisher. (See, e.g., Rest. (2d) of Torts § 578 with respect to defamation.)

Digitization has simultaneously reduced the cost and expertise required to publish material and increased the potential reach of that material. Where it was once limited to books, newspapers, and periodicals, “publishing” now encompasses such activities as creating and updating a website; creating a podcast or blog post; or even posting to social media. Much of this activity is performed by individuals and businesses who have only limited experience with the legal risks associated with publishing.

This is especially true regarding the use of third-party material, which is used extensively by both sophisticated and unsophisticated platforms. Platforms that host third-party-generated content—e.g., social media or websites with comment sections—have historically engaged in only limited vetting of that content, although this is changing. When combined with the potential to reach consumers far beyond the original platform and target audience, the lasting digital traces that are difficult to identify and remove, and the need to comply with privacy and other statutory requirements, the potential for all manner of “publishers” to incur legal liability has never been higher.

Even sophisticated legacy publishers struggle with managing the litigation that arises from these risks. There are a limited number of specialist counsel, which results in higher hourly rates. Oversight of legal bills is not always effective, as internal counsel often have limited resources to manage their daily responsibilities and litigation. As a result, legal fees often make up as much as two-thirds of the average claims cost. Accordingly, defense spending and litigation management are indirect, but important, risks associated with content claims.

Effective risk management is any publisher’s first line of defense. The type and complexity of content risk management vary significantly by organization, based on its size, resources, activities, risk appetite, and sophistication. Traditional publishers typically have a formal set of editorial guidelines specifying policies governing the creation of content, pre-publication review, editorial-approval authority, and referral to internal and external legal counsel. They often maintain a library of standardized contracts; have a process to periodically review and update those wordings; and have a process to verify the validity of a potential licensor’s rights. Most have formal controls to respond to complaints and to retraction/takedown requests.

Insuring Content Risks

Insurance is integral to most publishers’ risk-management plans. Content coverage is present, to some degree, in most general liability policies (i.e., for “advertising liability”). Specialized coverage—commonly referred to as “media” or “media E&O”—is available on a standalone basis or may be packaged with cyber-liability coverage. Terms of specialized coverage can vary significantly, but such policies generally provide at least basic coverage for the three primary content risks of defamation, copyright infringement, and invasion of privacy.

Insureds typically retain the first dollar loss up to a specific dollar threshold. They may also retain a coinsurance percentage of every dollar thereafter in partnership with their insurer. For example, an insured may be responsible for the first $25,000 of loss, and for 10% of loss above that threshold. Such coinsurance structures often are used by insurers as a non-monetary tool to help control legal spending and to incentivize an organization to employ effective oversight of counsel’s billing practices.
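To make the division of a claim concrete, the following minimal sketch (in Python, using the hypothetical figures from the example above) shows how a loss might be split between insured and insurer under a retention-plus-coinsurance structure; the function name and parameters are illustrative and ignore policy limits and other real-world policy terms.

```python
def split_loss(loss, retention=25_000, coinsurance=0.10):
    """Split a covered loss between insured and insurer under a simple
    retention-plus-coinsurance structure (illustrative figures only)."""
    if loss <= retention:
        return {"insured": float(loss), "insurer": 0.0}
    excess = loss - retention
    return {
        "insured": retention + coinsurance * excess,  # retention plus 10% of the excess
        "insurer": (1 - coinsurance) * excess,        # remaining 90% of the excess
    }

# A $250,000 claim under the example terms: the insured bears the first
# $25,000 plus 10% of the remaining $225,000 ($22,500), or $47,500 in total.
print(split_loss(250_000))
```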

The type and amount of loss retained will depend on the insured’s size, resources, risk profile, risk appetite, and insurance budget. Generally, but not always, increases in an insured’s retention or an insurer’s attachment (e.g., raising the threshold to $50,000, or raising the insured’s coinsurance to 15%) will result in lower premiums. Most insureds will seek the smallest retention feasible within their budget. 

Contract limits (the maximum coverage payout available) will vary based on the same factors. Larger policyholders often build a “tower” of insurance made up of multiple layers of the same or similar coverage issued by different insurers. Two or more insurers may partner on the same “quota share” layer and split any loss incurred within that layer on a pre-agreed proportional basis.  
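The tower structure can be illustrated the same way. The sketch below (again Python, with invented attachment points, limits, and quota-share percentages) allocates a single loss across stacked layers and splits each layer among its quota-share participants; it is a simplification that assumes every layer responds to the same loss and ignores reinstatements and sub-limits.

```python
def allocate_to_tower(loss, layers):
    """Allocate a loss across a tower of insurance layers.

    Each layer is (attachment_point, layer_limit, {insurer: share}),
    where the shares within a quota-share layer sum to 1.0.
    All figures are purely illustrative."""
    payouts = {}
    for attachment, limit, shares in layers:
        amount_in_layer = max(0.0, min(loss - attachment, limit))
        for insurer, share in shares.items():
            payouts[insurer] = payouts.get(insurer, 0.0) + amount_in_layer * share
    return payouts

# Hypothetical tower: a $5M primary layer, then a $10M excess layer
# written 60/40 on a quota-share basis by two insurers.
tower = [
    (0,          5_000_000, {"Primary Insurer": 1.0}),
    (5_000_000, 10_000_000, {"Insurer A": 0.6, "Insurer B": 0.4}),
]
print(allocate_to_tower(8_000_000, tower))
# The primary layer pays $5M; the excess layer absorbs the remaining $3M,
# split $1.8M / $1.2M between the quota-share partners.
```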

Navigating the strategic choices involved in developing an insurance program can be complex, depending on an organization’s risks. Policyholders often use commercial brokers to aid them in developing an appropriate risk-management and insurance strategy that maximizes coverage within their budget, and to assist with claims recoveries. This is particularly important for small and mid-sized insureds who may lack the sophistication or budget of larger organizations. Policyholders and brokers try to minimize the gaps in coverage between layers and among quota-share participants, but such gaps can occur, leaving a policyholder partially self-insured.

An organization’s options to insure its content risk may also be influenced by the dynamics of the overall insurance market or within specific content lines. Not all underwriters are created equal; underwriting is a challenging responsibility that requires a degree of prediction, and some underwriters may fail to adequately identify and account for certain risks. It can also be challenging to accurately measure risk aggregation and set appropriate reserves. An insurer’s appetite for certain lines and the availability of supporting reinsurance can fluctuate based on trends in the general capital markets. Specialty media/content coverage is a small niche within the global commercial insurance market, which makes insurers in this line more sensitive to these general trends.

Litigation Risks from Changes to Section 230

A full repeal or judicial invalidation of Section 230 generally would make every platform responsible for all the content it disseminates, regardless of who created the material, and would require at least some additional editorial review. This would significantly disadvantage those platforms that host a significant volume of third-party content. Internet service providers, cable companies, social media, and product/service review companies would be put under tremendous strain, given the daily volume of content produced. To reduce the risk that they serve as a “deep pocket” target for plaintiffs, they would likely adopt more robust pre-publication screening of content and authorized third parties; limit public interfaces; require registration before a user may publish content; employ more reactive complaint-response/takedown policies; and ban problem users more frequently. Small and mid-sized enterprises (SMEs), as well as those not focused primarily on the business of publishing, would likely avoid many interactive functions altogether.

A full repeal would be, in many ways, a blunderbuss approach to dealing with criticisms of Section 230, and would cause at least as many problems as it solves. In the current polarized environment, it also appears unlikely that Congress will reach bipartisan agreement on amended language for Section 230, or on classifying interactive computer services as common carriers, given that the changes desired by the political left and right are so divergent. What may be more likely is that courts encounter a test case that prompts them to clarify the application of the existing statutory language—i.e., whether an entity was acting as a neutral platform or a content creator, whether its conduct was in “good faith,” and whether the material is “objectionable” within the meaning of the statute.

More frequent litigation is almost inevitable in the wake of any changes to the status quo, whether made by Congress or the courts. Major litigation would likely focus on those social-media platforms at the center of the Section 230 controversy, such as Facebook and Twitter, given their active role in these issues, their deep pockets and, potentially, various admissions against interest helpful to plaintiffs regarding their level of editorial judgment. SMEs could also be affected in the immediate wake of a change to the statute or its interpretation. While SMEs are likely to be implicated on a smaller scale, the impact of litigation could be even more damaging to their viability if they are not adequately insured.

Over time, the boundaries of an amended Section 230’s application and any consequential effects should become clearer as courts develop application criteria and precedent is established for different fact patterns. Exposed platforms will likely make changes to their activities and risk-management strategies consistent with such developments. Operationally, some interactive features—such as comment sections or product and service reviews—may become less common.

In the short and medium term, however, a period of increased and unforeseen litigation to resolve these issues is likely to prove expensive and damaging. Insurers of content risks are likely to bear the brunt of any changes to Section 230, because these risks and their financial costs would be new, uncertain, and not incorporated into historical pricing of content risk. 

Remembering the Asbestos Crisis

The introduction of a new exposure or legal risk can have significant financial effects on commercial insurance carriers. New and revised risks must be accounted for in the assumptions, probabilities, and load factors used in insurance pricing and reserving models. Even small changes in those values can have large aggregate effects, which may undermine confidence in those models, complicate obtaining reinsurance, or harm an insurer’s overall financial health.

For example, in the 1980s, certain courts adopted the triple-trigger and continuous trigger methods[1] of determining when a policyholder could access coverage under an “occurrence” policy for asbestos claims. As a result, insurers paid claims under policies dating back to the early 1900s and, in some cases, under all policies from that date until the date of the claim. Such policies were written when mesothelioma related to asbestos was unknown and not incorporated into the policy pricing.
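As a rough illustration of why these trigger theories were so costly, the sketch below (Python, with hypothetical dates and annual occurrence policies) compares how many policy years respond to a single long-tail claim under each method; it is a deliberate simplification of doctrines that varied considerably by jurisdiction.

```python
def triggered_years(policy_years, exposure_start, injury_in_fact,
                    manifestation, method):
    """Return the policy years that respond to a long-tail claim.

    'triple' triggers the years of first exposure, injury-in-fact, and
    manifestation; 'continuous' triggers every year from first exposure
    through manifestation. A simplified, illustrative model only."""
    if method == "triple":
        hits = {exposure_start, injury_in_fact, manifestation}
        return sorted(y for y in policy_years if y in hits)
    if method == "continuous":
        return [y for y in policy_years if exposure_start <= y <= manifestation]
    raise ValueError("unknown trigger method")

years = list(range(1940, 1990))  # occurrence policies issued annually
print(triggered_years(years, 1952, 1960, 1984, "triple"))           # 3 policy years
print(len(triggered_years(years, 1952, 1960, 1984, "continuous")))  # 33 policy years
```

Even this toy example shows how a continuous trigger can reach back into decades of policies priced long before the risk was understood.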

Insurers had long since released reserves from the decades-old policy years, so those resources were not available to pay claims. Nor could underwriters retroactively increase premiums for the intervening years and smooth out the cost of these claims. This created extreme financial stress for affected insurers and reinsurers, with some ultimately rendered insolvent. Surviving carriers responded by drastically reducing coverage and increasing prices, which resulted in a major capacity shortage that resolved only after the creation of the Bermuda insurance and reinsurance market.

The asbestos-related liability crisis represented a perfect storm that is unlikely to be replicated. Given the ubiquitous nature of digital content, however, any drastic or misconceived changes to Section 230 protections could still cause significant disruption to the commercial insurance market. 

Content risk is covered, at least in part, by general liability and many cyber policies, but it is not currently a primary focus for underwriters. Specialty media underwriters are more likely to be monitoring Section 230 risk, but the highly competitive market will make it difficult for them to respond to any changes with significant price increases. In addition, the current market environment for U.S. property and casualty insurance generally is in the midst of correcting for years of inadequate pricing, expanding coverage, developing exposures, and claims inflation. It would be extremely difficult to charge an adequate premium increase if the potential severity of content risk were to increase suddenly.

In the face of such risk uncertainty and challenges to adequately increasing premiums, underwriters would likely seek to reduce their exposure to online content risks, i.e., by reducing the scope of coverage, reducing limits, and increasing retentions. How these changes would play out, and how much pain they would cause for all involved, would likely depend on how quickly policyholders’ risk profiles change.

Small or specialty carriers caught unprepared could be forced to exit the market if they experienced a sharp spike in claims or unexpected increase in needed reserves. Larger, multiline carriers may respond by voluntarily reducing or withdrawing their participation in this space. Insurers exposed to ancillary content risk may simply exclude it from cover if adequate price increases are impractical. Such reactions could result in content coverage becoming harder to obtain or unavailable altogether. This, in turn, would incentivize organizations to limit or avoid certain digital activities.

Finding a More Thoughtful Approach

The tension between calls for reform of Section 230 and the potential for disrupting online activity does not mean that political leaders and courts should ignore these issues. Rather, it means that what’s required is a thoughtful, clear, and predictable approach to any changes, with the goal of maximizing the clarity of those changes and their application and minimizing any resulting litigation. Regardless of whether accomplished through legislation or the judicial process, addressing the following issues could minimize the duration and severity of any period of harmful disruption regarding content risk:

  1. Presumptive immunity – Including an express statement in the definition of “interactive computer service,” or inferring one judicially, to clarify that platforms hosting third-party content enjoy a rebuttable presumption that statutory immunity applies would discourage frivolous litigation as courts establish precedent defining the applicability of any other revisions.
  2. Specify the grounds for losing immunity – Clarify, at a minimum, what constitutes “good faith” with respect to content restrictions and further clarify what material is or is not “objectionable,” as it relates to newsworthy content or actions that trigger loss of immunity.
  3. Specify the scope and duration of any loss of immunity – Clarify whether the loss of immunity is total, categorical, or specific to the situation under review and the duration of that loss of immunity, if applicable.
  4. Reinstatement of immunity, subject to burden-shifting – Clarify what a platform must do to reinstate statutory immunity on a go-forward basis and clarify that it bears the burden of proving its go-forward conduct entitles it to statutory protection.
  5. Address associated issues – Any clarification or interpretation should address other issues likely to arise, such as the effect and weight to be given to a platform’s application of its community standards, adherence to neutral takedown/complaint procedures, etc. Care should be taken to avoid overcorrecting and creating a “heckler’s veto.”
  6. Deferred effect – If change is made legislatively, the effective date should be deferred for a reasonable time to allow platforms sufficient opportunity to adjust their current risk-management policies, contractual arrangements, content publishing and storage practices, and insurance arrangements in a thoughtful, orderly fashion that accounts for the new rules.

Ultimately, legislative and judicial stakeholders will chart their own course to address the widespread dissatisfaction with Section 230. More important than any of these specific policy suggestions is the principle that underpins them: that any changes incorporate due consideration for the potential direct and downstream harm that can be caused if policy is not clear, comprehensive, and designed to minimize unnecessary litigation.

It is no surprise that, in the years since Section 230 of the Communications Decency Act was passed, the environment and risks associated with digital platforms have evolved, or that those changes have created a certain amount of friction in the law’s application. Policymakers should employ a holistic approach when evaluating their legislative and judicial options to revise or clarify the application of Section 230. Doing so in a targeted, predictable fashion should help to mitigate or avoid the risk of increased litigation and other unintended consequences that might otherwise prove harmful to online platforms and to the commercial insurance market.

Aaron Tilley is a senior insurance executive with more than 16 years of commercial insurance experience in executive management, underwriting, legal, and claims, working in or with the U.S., Bermuda, and London markets. He has served as chief underwriting officer of a specialty media E&O and cyber-liability insurer and as coverage counsel representing international insurers with respect to a variety of E&O and advertising liability claims.


[1] The triple-trigger method allowed a policy to be accessed based on the date of the injury-in-fact, manifestation of injury, or exposure to substances known to cause injury. The continuous trigger allowed all policies issued by an insurer, not just one, to be accessed if a triggering event could be established during the policy period.

The U.S. House this week passed H.R. 2668, the Consumer Protection and Recovery Act (CPRA), which authorizes the Federal Trade Commission (FTC) to seek monetary relief in federal courts for injunctions brought under Section 13(b) of the Federal Trade Commission Act.

Potential relief under the CPRA is comprehensive. It includes “restitution for losses, rescission or reformation of contracts, refund of money, return of property … and disgorgement of any unjust enrichment that a person, partnership, or corporation obtained as a result of the violation that gives rise to the suit.” What’s more, under the CPRA, monetary relief may be obtained for violations that occurred up to 10 years before the filing of the suit in which relief is requested by the FTC.

The Senate should reject the House version of the CPRA. Its monetary-recovery provisions require substantial narrowing if it is to pass cost-benefit muster.

The CPRA is a response to the Supreme Court’s April 22 decision in AMG Capital Management v. FTC, which held that Section 13(b) of the FTC Act does not authorize the commission to obtain court-ordered equitable monetary relief. As I explained in an April 22 Truth on the Market post, Congress’ response to the court’s holding should not be to grant the FTC carte blanche authority to obtain broad monetary exactions for any and all FTC Act violations. I argued that “[i]f Congress adopts a cost-beneficial error-cost framework in shaping targeted legislation, it should limit FTC monetary relief authority (recoupment and disgorgement) to situations of consumer fraud or dishonesty arising under the FTC’s authority to pursue unfair or deceptive acts or practices.”

Error costs and calculation difficulties counsel against pursuing monetary recovery in FTC unfair methods of competition cases. As I explained in my post:

Consumer redress actions are problematic for a large proportion of FTC antitrust enforcement (“unfair methods of competition”) initiatives. Many of these antitrust cases are “cutting edge” matters involving novel theories and complex fact patterns that pose a significant threat of type I [false positives] error. (In comparison, type I error is low in hardcore collusion cases brought by the U.S. Justice Department where the existence, nature, and effects of cartel activity are plain). What’s more, they generally raise extremely difficult if not impossible problems in estimating the degree of consumer harm. (Even DOJ price-fixing cases raise non-trivial measurement difficulties.)

These error-cost and calculation difficulties became even more pronounced as of July 1. On that date, the FTC unwisely voted 3-2 to withdraw a bipartisan 2015 policy statement providing that the commission would apply consumer welfare and rule-of-reason (weighing efficiencies against anticompetitive harm) considerations in exercising its unfair methods of competition authority (see my commentary here). This means that, going forward, the FTC will arrogate to itself unbounded discretion to decide what competitive practices are “unfair.” Business uncertainty, and the costly risk aversion it engenders, would be expected to grow enormously if the FTC could extract monies from firms due to competitive behavior deemed “unfair,” based on no discernible neutral principle.

Error costs and calculation problems also strongly suggest that monetary relief in FTC consumer-protection matters should be limited to cases of fraud or clear deception. As I noted:

[M]atters involving a higher likelihood of error and severe measurement problems should be the weakest candidates for consumer redress in the consumer protection sphere. For example, cases involving allegedly misleading advertising regarding the nature of goods, or allegedly insufficient advertising substantiation, may generate high false positives and intractable difficulties in estimating consumer harm. As a matter of judgment, given resource constraints, seeking financial recoveries solely in cases of fraud or clear deception where consumer losses are apparent and readily measurable makes the most sense from a cost-benefit perspective.

In short, the Senate should rewrite its Section 13(b) amendments to authorize FTC monetary recoveries only when consumer fraud or dishonesty is shown.

Finally, the Senate would be wise to sharply pare back the House language that allows the FTC to seek monetary exactions based on conduct that is a decade old. Serious problems of making accurate factual determinations of economic effects and specific-damage calculations would arise after such a long period of time. Allowing retroactive determinations based on a shorter “look-back” period prior to the filing of a complaint (three years, perhaps) would appear to strike a better balance in allowing reasonable redress while controlling error costs.

For a potential entrepreneur, just how much time it will take to compete, and the barrier to entry that time represents, will vary greatly depending on the market he or she wishes to enter. A would-be competitor to the likes of Subway, for example, might not find the time needed to open a sandwich shop to be a substantial hurdle. Even where it does take a long time to bring a product to market, it may be possible to accelerate the timeline if the potential profits are sufficiently high. 

As Steven Salop notes in a recent paper, however, there may be cases where long periods of production time are intrinsic to a product: 

If entry takes a long time, then the fear of entry may not provide a substantial constraint on conduct. The firm can enjoy higher prices and profits until the entry occurs. Even if a strong entrant into the 12-year-old scotch market begins the entry process immediately upon announcement of the merger of its rivals, it will not be able to constrain prices for a long time. [emphasis added]

Salop’s point relates to the supply-side substitutability of Scotch whisky (sic — Scotch whisky is spelt without an “e”). That is, to borrow from the European Commission’s definition, whether “suppliers are able to switch production to the relevant products and market them in the short term.” Scotch is aged in wooden barrels for a number of years (at least three, but often longer) before being bottled and sold, and the value of Scotch usually increases with age. 

Due to this protracted manufacturing process, Salop argues, an entrant cannot compete with an incumbent dominant firm for however many years it would take to age the Scotch; they cannot produce the relevant product in the short term, no matter how high the profits collected by a monopolist are, and hence no matter how strong the incentive to enter the market. If I wanted to sell 12-year-old Scotch, to use Salop’s example, it would take me 12 years to enter the market. In the meantime, a dominant firm could extract monopoly rents, leading to higher prices for consumers. 

But can a whisky producer “enjoy higher prices and profits until … entry occurs”? A dominant firm in the 12-year-old Scotch market will not necessarily be immune to competition for the entire 12-year period it would take to produce a Scotch of the same vintage. There are various ways, both on the demand and supply side, that pressure could be brought to bear on a monopolist in the Scotch market.

One way could be to bring whiskies that are being matured for longer-maturity bottles (like 16- or 18-year-old Scotches) into service at the 12-year maturity point, shifting this supply to a market in which profits are now relatively higher. 

Alternatively, distilleries may try to use younger batches to produce whiskies that resemble 12-year-old whiskies in flavor. A 2013 article from The Scotsman discusses this possibility in relation to major Scottish whisky brand Macallan’s decision to switch to selling exclusively No-Age Statement (NAS — they do not bear an age on the bottle) whiskies:

Experts explained that, for example, nine and 11-year-old whiskies—not yet ready for release under the ten and 12-year brands—could now be blended together to produce the “entry-level” Gold whisky immediately.

An aged Scotch cannot contain any whisky younger than the age stated on the bottle, but an NAS alternative can contain anything over three years (though older whiskies are often used to capture a flavor more akin to a 12-year dram). For many drinkers, NAS whiskies are a close substitute for 12-year-old whiskies. They often compete with aged equivalents on quality and flavor and can command similar prices to aged bottles in the 12-year category. More than 80% of bottles sold bear no age statement. While this figure includes non-premium bottles, the share of NAS whiskies traded at auction on the secondary market, presumably more likely to be premium, increased from 20% to 30% in the years between 2013 and 2018.

There are also whiskies matured outside of Scotland, in regions such as Taiwan and India, that can achieve flavor profiles akin to older whiskies more quickly, thanks to warmer climates and the faster chemical reactions inside barrels they cause. Further increases in maturation rate can be brought about by using smaller barrels with a higher surface-area-to-volume ratio. Whiskies matured in hotter climates and smaller barrels can be brought to market even more quickly than NAS Scotch matured in the cooler Scottish climate, and may well represent a more authentic replication of an older barrel. 

“Whiskies” that can be manufactured even more quickly may also be on the horizon. Some startups in the United States are experimenting with rapid-aging technology which would allow them to produce a whisky-like spirit in a very short amount of time. As detailed in a recent article in The Economist, Endless West in California is using technology that ages spirits within 24 hours, with the resulting bottles selling for $40 – a bit less than many 12-year-old Scotches. Although attempts to break the conventional maturation process are nothing new, recent attempts have won awards in blind taste-test competitions.

None of this is to dismiss Salop’s underlying point. But it may suggest that, even for a product where time appears to be an insurmountable barrier to entry, there may be more ways to compete than we initially assume.

In a recent op-ed, Robert Bork Jr. laments the Biden administration’s drive to jettison the Consumer Welfare Standard that has formed nearly half a century of antitrust jurisprudence. The move can be seen in the near-revolution at the Federal Trade Commission, in the president’s executive order on competition enforcement, and in several of the major antitrust bills currently before Congress.

Bork notes the Competition and Antitrust Law Enforcement Reform Act, introduced by Sen. Amy Klobuchar (D-Minn.), would “outlaw any mergers or acquisitions for the more than 80 large U.S. companies valued over $100 billion.”

Bork is correct that more than 80 companies would be covered, but the number is likely to be far higher. While the Klobuchar bill does not explicitly outlaw such mergers, under certain circumstances it shifts the burden of proof to the merging parties, who must demonstrate that the benefits of the transaction outweigh the potential risks. Under current law, the burden is on the government to demonstrate that the potential costs outweigh the potential benefits.

One of the measure’s specific triggers for this burden-shifting is if the acquiring party has a market capitalization, assets, or annual net revenue of more than $100 billion and seeks a merger or acquisition valued at $50 million or more. About 120 or more U.S. companies satisfy at least one of these conditions. The end of this post provides a list of publicly traded companies, according to Zacks’ stock screener, that would likely be subject to the shift in burden of proof.
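For readers who want the trigger in schematic form, the following sketch (Python) expresses the size-based test described above as a simple check; the function and its inputs are hypothetical, the thresholds follow the description in this post, and the bill’s other triggers and nuances are not modeled.

```python
def shifts_burden(market_cap, assets, net_revenue, deal_value):
    """Simplified reading of the size-based trigger described above: the
    acquiring party exceeds $100B in market capitalization, assets, or
    annual net revenue, and the deal is valued at $50M or more.
    Illustrative only; the bill contains other triggers not modeled here."""
    BILLION = 1_000_000_000
    big_acquirer = (market_cap > 100 * BILLION
                    or assets > 100 * BILLION
                    or net_revenue > 100 * BILLION)
    return big_acquirer and deal_value >= 50_000_000

# A $120B-market-cap acquirer buying a $60M startup would face the
# shifted burden of proof under this size-based trigger.
print(shifts_burden(120e9, 30e9, 20e9, 60e6))  # True
```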

If the goal is to go after Big Tech, the Klobuchar bill hits the mark. All of the FAANG companies—Facebook, Amazon, Apple, Netflix, and Alphabet (formerly known as Google)—satisfy one or more of the criteria. So do Microsoft and PayPal.

But even some smaller tech firms will be subject to the shift in burden of proof. Zoom and Square have market caps that would trigger the burden shift under Klobuchar’s bill, and Snap is hovering around $100 billion in market cap. Twitter and eBay, however, are well under any of the thresholds. Likewise, privately owned Advance Publications, owner of Reddit, would also likely fall short of any of the triggers.

Snapchat has a little more than 300 million monthly active users. Twitter and Reddit each have about 330 million monthly active users. Nevertheless, under the Klobuchar bill, Snapchat is presumed to have more market power than either Twitter or Reddit, simply because the market assigns a higher valuation to Snap.

But this bill is about more than Big Tech. Tesla, which sold its first car only 13 years ago, is now considered big enough that it will face the same antitrust scrutiny as the Big 3 automakers. Walmart, Costco, and Kroger would be subject to the shifted burden of proof, while Safeway and Publix would escape such scrutiny. An acquisition by U.S.-based Nike would be put under the microscope, but a similar acquisition by Germany’s Adidas would not fall under the Klobuchar bill’s thresholds.

Tesla accounts for less than 2% of the vehicles sold in the United States. I have no idea what Walmart, Costco, Kroger, or Nike’s market share is, or even what comprises “the” market these companies compete in. What we do know is that the U.S. Department of Justice and Federal Trade Commission excel at narrowly crafting market definitions so that just about any company can be defined as dominant.

So much of the recent interest in antitrust has focused on Big Tech. But even the biggest of Big Tech firms operate in dynamic and competitive markets. None of my four children use Facebook or Twitter. My wife and I don’t use Snapchat. We all use Netflix, but we also use Hulu, Disney+, HBO Max, YouTube, and Amazon Prime Video. None of these services have a monopoly on our eyeballs, our attention, or our pocketbooks.

The antitrust bills currently working their way through Congress abandon the long-standing balancing of pro- versus anti-competitive effects of mergers in favor of a “big is bad” approach. While the Klobuchar bill appears to provide clear guidance on the thresholds triggering a shift in the burden of proof, the arbitrary nature of the thresholds will result in arbitrary application of the burden of proof. If passed, we will soon be faced with a case in which two firms who differ only in market cap, assets, or sales will be subject to very different antitrust scrutiny, resulting in regulatory chaos.

Publicly traded companies with more than $100 billion in market capitalization

3M | Danaher Corp. | PepsiCo
Abbott Laboratories | Deere & Co. | Pfizer
AbbVie | Eli Lilly and Co. | Philip Morris International
Adobe Inc. | ExxonMobil | Procter & Gamble
Advanced Micro Devices | Facebook Inc. | Qualcomm
Alphabet Inc. | General Electric Co. | Raytheon Technologies
Amazon | Goldman Sachs | Salesforce
American Express | Honeywell | ServiceNow
American Tower | IBM | Square Inc.
Amgen | Intel | Starbucks
Apple Inc. | Intuit | Target Corp.
Applied Materials | Intuitive Surgical | Tesla Inc.
AT&T | Johnson & Johnson | Texas Instruments
Bank of America | JPMorgan Chase | The Coca-Cola Co.
Berkshire Hathaway | Lockheed Martin | The Estée Lauder Cos.
BlackRock | Lowe’s | The Home Depot
Boeing | Mastercard | The Walt Disney Co.
Bristol Myers Squibb | McDonald’s | Thermo Fisher Scientific
Broadcom Inc. | Medtronic | T-Mobile US
Caterpillar Inc. | Merck & Co. | Union Pacific Corp.
Charles Schwab Corp. | Microsoft | United Parcel Service
Charter Communications | Morgan Stanley | UnitedHealth Group
Chevron Corp. | Netflix | Verizon Communications
Cisco Systems | NextEra Energy | Visa Inc.
Citigroup | Nike Inc. | Walmart
Comcast | Nvidia | Wells Fargo
Costco | Oracle Corp. | Zoom Video Communications
CVS Health | PayPal

Publicly traded companies with more than $100 billion in current assets

Ally Financial | Freddie Mac
American International Group | KeyBank
BNY Mellon | M&T Bank
Capital One | Northern Trust
Citizens Financial Group | PNC Financial Services
Fannie Mae | Regions Financial Corp.
Fifth Third Bank | State Street Corp.
First Republic Bank | Truist Financial
Ford Motor Co. | U.S. Bancorp

Publicly traded companies with more than $100 billion in sales

AmerisourceBergen | Dell Technologies
Anthem | General Motors
Cardinal Health | Kroger
Centene Corp. | McKesson Corp.
Cigna | Walgreens Boots Alliance

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Nicolas Petit himself, the Joint Chair in Competition Law at the Department of Law at European University Institute in Fiesole, Italy, and at EUI’s Robert Schuman Centre for Advanced Studies. He is also invited professor at the College of Europe in Bruges.]

A lot of water has gone under the bridge since my book was published last year. To close this symposium, I thought I would discuss the new phase of antitrust statutorification taking place before our eyes. In the United States, Congress is working on five antitrust bills that propose to subject platforms to stringent obligations, including a ban on mergers and acquisitions, required data portability and interoperability, and line-of-business restrictions. In the European Union (EU), lawmakers are examining the proposed Digital Markets Act (“DMA”) that sets out a complicated regulatory system for digital “gatekeepers,” with per se behavioral limitations of their freedom over contractual terms, technological design, monetization, and ecosystem leadership.

Proponents of legislative reform on both sides of the Atlantic appear to share the common view that ongoing antitrust adjudication efforts are both instrumental and irrelevant. They are instrumental because government (or plaintiff) losses build the evidence needed to support the view that antitrust doctrine is exceedingly conservative, and that legal reform is needed. Two weeks ago, antitrust reform activists ran to Twitter to point out that the U.S. District Court dismissal of the Federal Trade Commission’s (FTC) complaint against Facebook was one more piece of evidence supporting the view that the antitrust pendulum needed to swing. They are also instrumental because government (or plaintiff) wins will support scaling antitrust enforcement in the marginal case by adoption of governmental regulation. In the EU, antitrust cases follow one another almost as night follows day, lending credence to the view that regulation will bring much-needed coordination and economies of scale.

But both instrumentalities are, at the end of the line, irrelevant, because they lead to the same conclusion: legislative reform is long overdue. With this in mind, the logic of lawmakers is that they need not await the courts, and they can advance with haste and confidence toward the promulgation of new antitrust statutes.

The antitrust reform process that is unfolding is a cause for questioning. The issue is not legal reform in itself. There is no suggestion here that statutory reform is necessarily inferior, and no correlative reification of the judge-made-law method. Legislative intervention can occur for good reason, like when it breaks judicial inertia caused by ideological logjam.

The issue is rather one of precipitation. There is a lot of learning in the cases. The point, simply put, is that a supplementary court-legislative dialogue would yield additional information—or what Guido Calabresi has called “starting points” for regulation—that premature legislative intervention is sweeping under the rug. This issue is important because specification errors (see Doug Melamed’s symposium piece on this) in statutory legislation are not uncommon. Feedback from court cases creates a factual record that will often be missing when lawmakers act too precipitously.

Moreover, a court-legislative iteration is useful when the issues in discussion are cross-cutting. The digital economy brings an abundance of them. As tech analyst Ben Evans has observed, data-sharing obligations raise tradeoffs between contestability and privacy. Chapter VI of my book shows that breakups of social networks or search engines might promote rivalry and, at the same time, increase the leverage of advertisers to extract more user data and conduct more targeted advertising. In such cases, Calabresi said, judges who know the legal topography are well-placed to elicit the preferences of society. He added that they are better placed than government agencies’ officials or delegated experts, who often attend to the immediate problem without the big picture in mind (all the more when officials are denied opportunities to engage with civil society and the press, as per the policy announced by the new FTC leadership).

Of course, there are three objections to this. The first consists of arguing that statutes are needed now because courts are too slow to deal with problems. The argument is not dissimilar to Frank Easterbrook’s concerns about irreversible harms to the economy, though with a tweak. Where Easterbrook’s concern was one of ossification of Type I errors due to stare decisis, the concern here is one of entrenchment of durable monopoly power in the digital sector due to Type II errors. The concern, however, fails the test of evidence. The available data in both the United States and Europe shows unprecedented vitality in the digital sector. Venture capital funding cruises at historical heights, fueling new firm entry, business creation, and economic dynamism in the U.S. and EU digital sectors, topping all other industries. Unless we require higher levels of entry from digital markets than from other industries—or discount the social value of entry in the digital sector—this should give us reason to push pause on lawmaking efforts.

The second objection is that following an incremental process of updating the law through the courts creates intolerable uncertainty. But this objection, too, is unconvincing, at best. One may ask which brings more uncertainty: an abrupt legislative change of the law after decades of legal stability, or an experimental process of judicial renovation.

Besides, ad hoc statutes, such as the ones in discussion, are likely to pose quickly and dramatically the problem of their own legal obsolescence. Detailed and technical statutes specify rights, requirements, and procedures that often do not stand the test of time. For example, the DMA likely captures Windows as a core platform service subject to gatekeeping. But is the market power of Microsoft over Windows still relevant today, and isn’t it constrained in effect by existing antitrust rules? In antitrust, vagueness in critical statutory terms allows room for change.[1] The best way to give meaning to buzzwords like “smart” or “future-proof” regulation consists of building in first principles, not in creating discretionary opportunities for permanent adaptation of the law. In reality, it is hard to see how the methods of future-proof regulation currently discussed in the EU create less uncertainty than a court process.

The third objection is that we do not need more information, because we now benefit from economic knowledge showing that existing antitrust laws are too permissive of anticompetitive business conduct. But is the economic literature actually supportive of stricter rules against defendants than the rule-of-reason framework that applies in many unilateral-conduct cases and in merger law? The answer is surely no. The theoretical economic literature has come a long way in the past 50 years. Of particular interest are works on network externalities, switching costs, and multi-sided markets. But the progress achieved in the economic understanding of markets is more descriptive than normative.

Take the celebrated multi-sided market theory. The main contribution of the theory is its advice to decision-makers to take the periscope out, so as to consider all possible welfare tradeoffs, not to be more or less defendant-friendly. Payment cards provide a good example. Economic research suggests that any antitrust or regulatory intervention on prices affects tradeoffs between, and payoffs to, cardholders and merchants, cardholders and cash users, cardholders and banks, and banks and card systems. Equally numerous tradeoffs arise in many sectors of the digital economy, such as ridesharing, targeted advertising, and social networks. Multi-sided market theory renders these tradeoffs visible. But it does not come with a clear recipe for how to solve them. For that, one needs to follow first principles. A system of measurement that is flexible and welfare-based helps, as Kelly Fayne observed in her critical symposium piece on the book.

Another example might be worth considering. The theory of increasing returns suggests that markets subject to network effects tend to converge on a single technology standard, and it is not a given that the selected technology is the best one. One policy implication is that social planners might be justified in keeping a second option on the table. As I discuss in Chapter V of my book, the theory may support an M&A ban against platforms in tipped markets, on the conjecture that the assets of fringe firms might be efficiently repositioned to offer product differentiation to consumers. But the theory of increasing returns does not say under what conditions we can know that the selected technology is suboptimal. Moreover, if the selected technology is the optimal one, or if the suboptimal technology quickly obsolesces, are policy efforts needed at all?
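To make the path-dependence intuition concrete, here is a minimal, purely illustrative simulation in the spirit of Brian Arthur’s increasing-returns models. Every parameter below (the stand-alone values, the per-adopter network benefit, the taste-shock distribution) is a hypothetical assumption chosen for illustration, not an estimate drawn from the literature; the only point is that, with network effects, early random adoption choices can occasionally lock the market in to the technology with the lower stand-alone value.

```python
# Illustrative sketch of an Arthur-style increasing-returns adoption model.
# All numbers are hypothetical; this is not a calibrated model of any market.
import random

def simulate(n_adopters=5_000, seed=None):
    rng = random.Random(seed)
    base_value = {"A": 1.0, "B": 0.9}   # A has the higher stand-alone value
    network_gain = 0.02                  # assumed benefit per existing adopter
    installed = {"A": 0, "B": 0}
    for _ in range(n_adopters):
        # Each adopter gets an idiosyncratic taste shock and picks the
        # technology with the higher total payoff (stand-alone + network).
        payoffs = {
            t: base_value[t] + network_gain * installed[t] + rng.gauss(0, 1)
            for t in ("A", "B")
        }
        installed[max(payoffs, key=payoffs.get)] += 1
    return installed

# The market almost always tips to one technology, but not always to A.
runs = [simulate(seed=s) for s in range(200)]
b_wins = sum(r["B"] > r["A"] for r in runs)
print(f"Inferior technology B locked in {b_wins} of {len(runs)} runs")
```

In runs where B happens to attract a cluster of early adopters, its growing installed base outweighs A’s higher stand-alone value and the market tips to the inferior standard. That is the possibility the theory flags, without telling us how often it materializes in practice.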

Last, as Bo Heiden’s thought-provoking symposium piece argues, it is not a given that antitrust enforcement of rivalry in markets is the best way to keep an alternative technology alive, let alone to supply the innovation needed to deliver economic prosperity. Government procurement, science and technology policy, and intellectual-property policy might be equally effective (note that the fathers of the theory, like Brian Arthur or Paul David, have remained notably silent on antitrust reform).

There are, of course, exceptions to the limited normative content of modern economic theory. In some areas, economic theory is more predictive of consumer harms, such as in relation to algorithmic collusion, interlocking directorates, or “killer” acquisitions. But the applications are discrete and industry-specific. Together, they are insufficient to declare that the antitrust apparatus is dated and requires a full overhaul. When modern economic research turns normative, it is often far more subtle in its implications than some of the wild policy claims derived from it. For example, the emerging studies that claim to identify broad patterns of rising market power in the economy in no way imply that there are no pro-competitive mergers.

Similarly, the empirical picture of digital markets is incomplete. The past few years have seen a proliferation of qualitative research reports on industry structure. Most suggest that concentration has risen, particularly in the digital sector. As with any research exercise, these reports’ findings deserve to be subjected to critical examination before they can be deemed supportive of a claim of “sufficient experience.” Moreover, there is no reason to subject these reports to a lower standard of accountability on the grounds that they have often been drafted by experts at the request of antitrust agencies. After all, we academics are ethically obliged to be at least as exacting with policy-based research as we are with science-based research.

Now, with healthy skepticism in the back of one’s mind, one can see immediately that the findings of expert reports to date have tended to downplay behavioral observations that counterbalance findings of monopoly power—such as intense business anxiety, technological innovation, and demand-expansion investments in digital markets. This was, I believe, the main takeaway from Chapter IV of my book. And less than six months ago, The Economist ran its lead story on the new marketplace reality of “Tech’s Big Dust-Up.”

More importantly, the findings of the various expert reports never seriously contemplate the possibility of competition by differentiation in business models among the platforms. Take privacy, for example. As Peter Klein reasonably writes in his symposium article, we should not be quick to assume market failure. After all, we might have more choice than meets the eye, with Google free but ad-based, and Apple pricey but less-targeted. More generally, Richard Langlois makes a very convincing point that diversification is at the heart of competition between the large digital gatekeepers. We might just be too short-termist—here, digital communications technology might help create a false sense of urgency—to wait for the end state of the Big Tech moligopoly.

Similarly, the expert reports did not seriously question the possibility of competition for the purchase of regulation. As in the classic George Stigler paper, where the railroad industry fought motor-trucking competition with state regulation, the businesses that stand to lose most from the digital transformation might be rationally jockeying to convince lawmakers that not all business models are equal, and to steer regulation toward specific business models. Again, though it is hard to know how much weight to give this issue, there are signs that a coalition of large news corporations and the publishing oligopoly is behind many antitrust initiatives against digital firms.

As is now clear from these few lines, my cautionary note against antitrust statutorification might be more relevant to the U.S. market. In the EU, sunk investments have been made, expectations have been created, and regulation has now become inevitable. The United States, however, has a chance to get this right. Court cases are the way to go. And contrary to what the popular coverage suggests, the recent district court dismissal of the FTC case is far from ruling out the applicability of U.S. antitrust laws to Facebook’s alleged killer acquisitions. On the contrary, the ruling contains an invitation to rework a rushed complaint. Perhaps, as Shane Greenstein observed in his retrospective analysis of the U.S. Microsoft case, we would all benefit if we studied more carefully the learning that lies in the cases, rather than hasten to produce instant antitrust analysis on Twitter that fits within 280 characters.


[1] But some threshold conditions like agreement or dominance might also become dated. 

ICLE at the Oxford Union

Sam Bowman —  13 July 2021

Earlier this year, the International Center for Law & Economics (ICLE) hosted a conference with the Oxford Union on the themes of innovation, competition, and economic growth with some of our favorite scholars. Though attendance at the event itself was reserved for Oxford Union members, videos from that day are now available for everyone to watch.

Charles Goodhart and Manoj Pradhan on demographics and growth

Charles Goodhart, of Goodhart’s Law fame, and Manoj Pradhan discussed the relationship between demographics and growth, and argued that an aging global population could mean higher inflation and interest rates sooner than many imagine.

Catherine Tucker on privacy and innovation — is there a trade-off?

Catherine Tucker of the Massachusetts Institute of Technology discussed the costs and benefits of privacy regulation with ICLE’s Sam Bowman, and considered whether we face a trade-off between privacy and innovation online and in the fight against COVID-19.

Don Rosenberg on the political and economic challenges facing a global tech company in 2021

Qualcomm’s General Counsel Don Rosenberg, formerly of Apple and IBM, discussed the political and economic challenges facing a global tech company in 2021, including dealing with China while operating in one of the most strategically vital industries in the world.

David Teece on the dynamic capabilities framework

David Teece explained the dynamic capabilities framework, a way of understanding business strategy and behavior in an uncertain world.

Vernon Smith in conversation with Shruti Rajagopalan on what we still have to learn from Adam Smith

Nobel laureate Vernon Smith discussed the enduring insights of Adam Smith with the Mercatus Center’s Shruti Rajagopalan.

Samantha Hoffman, Robert Atkinson and Jennifer Huddleston on American and Chinese approaches to tech policy in the 2020s

The final panel, with the Information Technology and Innovation Foundation’s President Robert Atkinson, the Australian Strategic Policy Institute’s Samantha Hoffman, and the American Action Forum’s Jennifer Huddleston, discussed the role that tech policy in the U.S. and China plays in the geopolitics of the 2020s.

The Biden Administration’s July 9 Executive Order on Promoting Competition in the American Economy is very much a mixed bag—some positive aspects, but many negative ones.

It will have some positive effects on economic welfare, to the extent it succeeds in lifting artificial barriers to competition that harm consumers and workers—such as allowing direct sales of hearing aids in drug stores and eliminating unnecessary occupational-licensing restrictions, to name just two of several examples.

But it will likely have substantial negative effects on economic welfare as well. Many aspects of the order appear to emphasize new regulation—such as Net Neutrality requirements that may reduce investment in broadband by internet service providers—and to impose new regulatory requirements on airlines, pharmaceutical companies, digital platforms, banks, railways, shipping, and meat packers, among others. Arbitrarily imposing new rules in these areas, without a cost-benefit appraisal and a showing of market failure, threatens to reduce innovation and slow economic growth, hurting producers and consumers. (A careful review of specific regulatory proposals may shed greater light on the justifications for particular regulations.)

Antitrust-related proposals to challenge previously cleared mergers, and to impose new antitrust rulemaking, are likely to raise costly business uncertainty, to the detriment of businesses and consumers. They are a recipe for slower economic growth, not for vibrant competition.

An underlying problem with the order is that it is based on the false premise that competition has diminished significantly in recent decades and that “big is bad.” Economic analysis found in the February 2020 Economic Report of the President, and in other economic studies, debunks this flawed assumption.

In short, the order commits the fundamental mistake of proposing intrusive regulatory solutions for a largely nonexistent problem. Competitive issues are best handled through traditional well-accepted antitrust analysis, which centers on promoting consumer welfare and on weighing procompetitive efficiencies against anticompetitive harm on a case-by-case basis. This approach:

  1. Deals effectively with serious competitive problems; while at the same time
  2. Cabins error costs by taking into account all economically relevant considerations on a case-specific basis.

Rather than using an executive order to direct very specific regulatory approaches without a strong economic and factual basis, the Biden administration would have been better served by raising a host of competitive issues that merit possible study and investigation by expert agencies. Such an approach would have avoided imposing the costs of unwarranted regulation that unfortunately are likely to stem from the new order.

Finally, the order’s call for new regulations and the elimination of various existing legal policies will spawn matter-specific legal challenges and may, in many cases, not succeed in court. This will impose unnecessary business uncertainty, in addition to wasting public and private resources on litigation.

Advocates of legislative action to “reform” antitrust law have already pointed to the U.S. District Court for the District of Columbia’s dismissal of the state attorneys general’s case and the “conditional” dismissal of the Federal Trade Commission’s case against Facebook as evidence that federal antitrust case law is lax and demands correction. In fact, the court’s decisions support the opposite implication. 

The Risks of Antitrust by Anecdote

The failure of a well-resourced federal regulator, and of more than 45 state attorney-general offices, to avoid dismissal at an early stage of the litigation testifies to the dangers of a conclusory approach to antitrust enforcement, one that seeks to unravel acquisitions consummated almost a decade ago without even demonstrating the factual predicates needed to support such far-reaching interventions. The dangers to the rule of law are self-evident. Irrespective of one’s views on the appropriate direction of antitrust law, this shortcut approach would substitute prosecutorial fiat, ideological predilection, and popular sentiment for decades of case law and agency guidelines grounded in the rigorous consideration of potential evidence of competitive harm.

The paucity of empirical support for the exceptional remedial action sought by the FTC is notable. As the district court observed, there was little systematic effort made to define the economically relevant market or provide objective evidence of market power, beyond the assertion that Facebook has a market share of “in excess of 60%.” Remarkably, the denominator behind that 60%-plus assertion is not precisely defined, since the FTC’s brief does not supply any clear metric by which to measure market share. As the court pointed out, this is a nontrivial task in multi-sided environments in which one side of the potentially relevant market delivers services to users at no charge.  
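To see why the undefined denominator matters, consider a small, purely hypothetical calculation. None of the figures below comes from the case record or from any actual market data; they are invented numbers meant only to show how the same firm’s apparent share swings with the metric chosen and with how broadly the market is drawn.

```python
# Hypothetical illustration: the same firm's "market share" depends entirely on
# the metric used (users, time spent, ad revenue) and on which rival services
# are counted in the denominator. All figures are invented for illustration.
firm = {"monthly_users_mm": 250, "minutes_bn": 40, "ad_revenue_bn": 30}

candidate_markets = {
    # Narrow candidate market: "personal social networking" services only.
    "narrow": {"monthly_users_mm": 350, "minutes_bn": 55, "ad_revenue_bn": 40},
    # Broad candidate market: adds video, messaging, and other attention rivals.
    "broad": {"monthly_users_mm": 900, "minutes_bn": 160, "ad_revenue_bn": 120},
}

for market_name, totals in candidate_markets.items():
    for metric, firm_value in firm.items():
        share = 100 * firm_value / totals[metric]
        print(f"{market_name:>6} market, {metric:>16}: {share:5.1f}% share")
```

Under these invented numbers, the firm’s “share” ranges from roughly 75% to roughly 25% depending on the metric and the market definition, which is exactly why a bare assertion of a 60%-plus share, without a clear denominator, tells a court very little.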

While the point may seem uncontroversial, it is important to appreciate anew why insisting on a rigorous demonstration of market power is critical to preserving a coherent body of law that gives the market a basis for reasonably anticipating the likelihood of antitrust intervention. At least since the late 1970s, courts have recognized that “big is not always bad” and that size can often yield cost savings that ultimately redound to consumers’ benefit. That is: firm size and consumer welfare do not stand in inherent opposition. If courts were to abandon safeguards against suits that cannot sufficiently define the relevant market and plausibly show market power, antitrust litigation could easily be used as a tool to punish successful firms that prevail over competitors simply by being more efficient. In other words: antitrust law could become a tool to preserve competitor welfare at the expense of consumer welfare.

The Specter of No-Fault Antitrust Liability

The absence of any specific demonstration of market power suggests deficient lawyering or the inability to gather supporting evidence. Giving the FTC litigation team the benefit of the doubt, the latter becomes the stronger possibility. If that is the case, this implies an effort to persuade courts to adopt a de facto rule of per se illegality for any firm that achieves a certain market share. (The same concept lies behind legislative proposals to bar acquisitions for firms that cross a certain revenue or market capitalization threshold.) Effectively, any firm that reached a certain size would operate under the presumption that it has market power and has secured or maintained such power due to anticompetitive practices, rather than business prowess. This would effectively convert leading digital platforms into quasi-public utilities subject to continuous regulatory intervention. Such an approach runs counter to antitrust law’s mission to preserve, rather than displace, private ordering by market forces.  

Even at the high-water mark of post-World War II antitrust zealotry (a period that ultimately ended in economic malaise), proposals to adopt a rule of no-fault liability for alleged monopolization were rejected. This was for good reason. Any such rule would likely injure consumers by precluding them from enjoying the cost savings that result from the “sweet spot” scenario in which the scale and scope economies of large firms are combined with sufficiently competitive conditions to yield reduced prices and increased convenience for consumers. Additionally, any such rule would eliminate incumbents’ incentives to work harder to offer consumers reduced prices and increased convenience, since any market share preserved or acquired as a result would simply invite antitrust scrutiny as a reward.

Remembering Why Market Power Matters

To be clear, this is not to say that “Big Tech” does not deserve close antitrust scrutiny, does not wield market power in certain segments, or has not engaged in potentially anticompetitive practices. The fundamental point is that assertions of market power and anticompetitive conduct must be demonstrated, rather than assumed or “proved” largely on the basis of suggestive anecdotes.

Perhaps market power will be sufficiently shown in Facebook’s case if the FTC elects to respond to the court’s invitation and resubmits its brief with a plausible definition of the relevant market and an indication of market power at this stage of the litigation. If that threshold is satisfied, then thorough consideration of the allegedly anticompetitive effects of Facebook’s WhatsApp and Instagram acquisitions may be merited. However, given the policy interest in preserving the market’s confidence in relying on the merger-review process under the Hart-Scott-Rodino Act, the burden of proof on the government should be appropriately enhanced to reflect the significant time that has elapsed since the regulatory decisions not to intervene in those transactions.

It would once have seemed mundane to reiterate that market power must be reasonably demonstrated to support a monopolization claim that could lead to a major divestiture remedy. Given the populist thinking that now leads much of the legislative and regulatory discussion on antitrust policy, it is imperative to reiterate the rationale behind this elementary principle. 

This principle reflects the fact that, outside collusion scenarios, antitrust law is typically engaged in a complex exercise to balance the advantages of scale against the risks of anticompetitive conduct. At its best, antitrust law weighs competing facts in a good-faith effort to assess the net competitive harm posed by a particular practice. While this exercise can be challenging in digital markets that naturally converge upon a handful of leading platforms, or in multi-dimensional markets that can have offsetting pro- and anti-competitive effects, these are not reasons to treat the exercise as an anachronistic nuisance. Antitrust cases are inherently challenging, and proposed reforms to make them easier to win are likely to endanger, rather than preserve, competitive markets.