Federal Trade Commission (FTC) Chair Lina Khan missed the mark once again in her May 6 speech on merger policy, delivered at the annual meeting of the International Competition Network (ICN). At a time when the FTC and U.S. Justice Department (DOJ) are presumably evaluating responses to the agencies’ “request for information” on possible merger-guideline revisions (see here, for example), Khan’s recent remarks suggest a predetermination that merger policy must be “toughened” significantly to disincentivize a larger portion of mergers than under present guidance. A brief discussion of Khan’s substantively flawed remarks follows.
Khan’s remarks begin with a favorable reference to the tendentious statement from President Joe Biden’s executive order on competition that “broad government inaction has allowed far too many markets to become uncompetitive, with consolidation and concentration now widespread across our economy, resulting in higher prices, lower wages, declining entrepreneurship, growing inequality, and a less vibrant democracy.” The claim that “government inaction” has enabled increased market concentration and reduced competition has been shown to be inaccurate, and therefore cannot serve as a defensible justification for a substantive change in antitrust policy. Accordingly, Khan’s statement that the executive order “underscores a deep mandate for change and a commitment to creating the enabling environment for reform” rests on foundations of sand.
Khan then shifts her narrative to a consideration of merger policy, stating:
Merger investigations invite us to make a set of predictive assessments, and for decades we have relied on models that generally assumed markets are self-correcting and that erroneous enforcement is more costly than erroneous non-enforcement. Both the experience of the U.S. antitrust agencies and a growing set of empirical research is showing that these assumptions appear to have been at odds with market realities.
Khan argues, without explanation, that “the guidelines must better account for certain features of digital markets—including zero-price dynamics, the competitive significance of data, and the network externalities that can swiftly lead markets to tip.” She fails to make any showing that consumer welfare has been harmed by mergers involving digital markets, or that the “zero-price” feature is somehow troublesome. Moreover, the reference to “data” as being particularly significant to antitrust analysis appears to ignore research (see here) indicating there is an insufficient basis for having an antitrust presumption involving big data, and that big data (like R&D) may be associated with innovation, which enhances competitive vibrancy.
Khan also fails to note that network externalities are beneficial; when users are added to a digital platform, the platform’s value to other users increases (see here, for example). What’s more (see here), “gateways and multihoming can dissipate any monopoly power enjoyed by large networks[,] … provid[ing] another reason” why network effects may not raise competitive problems. In addition, the implicit notion that “tipping” is a particular problem is belied by the ability of new competitors to “knock off” supposed entrenched digital monopolists (think, for example, of Yahoo being displaced by Google, and Myspace being displaced by Facebook). Finally, a bit of regulatory humility is in order. Given the huge amount of consumer surplus generated by digital platforms (see here, for example), enforcers should be particularly cautious to avoid more aggressive merger (and general antitrust) policies that could detract from, rather than enhance, welfare.
Khan argues that guidelines drafters should “incorporate new learning” embodied in “empirical research [that] has shown that labor markets are highly concentrated” and a “U.S. Treasury [report] recently estimating that a lack of competition may be costing workers up to 20% of their wages.” Unfortunately for Khan’s argument, these claims have been convincingly debunked in a new study by former FTC economist Julie Carlson (see here). As Carlson carefully explains, labor markets are not highly concentrated and labor-market power is largely due to market frictions (such as occupational licensing), rather than concentration. In a similar vein, a recent article by Richard Epstein stresses that heightened antitrust enforcement in labor markets would involve “high administrative and compliance costs to deal with a largely nonexistent threat.” Epstein points out:
[T]raditional forms of antitrust analysis can perfectly deal with labor markets. … What is truly needed is a close examination of the other impediments to labor, including the full range of anticompetitive laws dealing with minimum wage, overtime, family leave, anti-discrimination, and the panoply of labor union protections, where the gains to deregulation should be both immediate and large.
Khan then turned to non-horizontal mergers, stating:

[W]e are looking to sharpen our insights on non-horizontal mergers, including deals that might be described as ecosystem-driven, concentric, or conglomerate. While the U.S. antitrust agencies energetically grappled with some of these dynamics during the era of industrial-era conglomerates in the 1960s and 70s, we must update that thinking for the current economy. We must examine how a range of strategies and effects, including extension strategies and portfolio effects, may warrant enforcement action.
Khan’s statement on non-horizontal mergers once again is fatally flawed.
With regard to vertical mergers (not specifically mentioned by Khan), the FTC abruptly withdrew, without explanation, its approval of the carefully crafted 2020 vertical-merger guidelines. That action offends the rule of law, creating unwarranted and costly business-sector confusion. Khan’s lack of specific reference to vertical mergers does nothing to solve this problem.
With regard to other nonhorizontal mergers, there is no sound economic basis to oppose mergers involving unrelated products. Challenging such mergers would serve no procompetitive rationale and would threaten to reduce welfare by preventing the potential realization of efficiencies. In a 2020 OECD paper drafted principally by DOJ and FTC economists, the U.S. government meticulously assessed the case for challenging such mergers and rejected it on economic grounds. The OECD paper is noteworthy in its entirely negative assessment of 1960s and 1970s conglomerate cases which Khan implicitly praises in suggesting they merely should be “updated” to deal with the current economy (citations omitted):
Today, the United States is firmly committed to the core values that antitrust law protects: competition, efficiency, and consumer welfare, rather than individual competitors. During the ten-year period from 1965 to 1975, however, the Agencies challenged several mergers of unrelated products under theories that were antithetical to those values. The “entrenchment” doctrine, in particular, condemned mergers if they strengthened an already dominant firm through greater efficiencies, or gave the acquired firm access to a broader line of products or greater financial resources, thereby making life harder for smaller rivals. This approach is no longer viewed as valid under U.S. law or economic theory. …
These cases stimulated a critical examination, and ultimate rejection, of the theory by legal and economic scholars and the Agencies. In their Antitrust Law treatise, Phillip Areeda and Donald Turner showed that to condemn conglomerate mergers because they might enable the merged firm to capture cost savings and other efficiencies, thus giving it a competitive advantage over other firms, is contrary to sound antitrust policy, because cost savings are socially desirable. It is now recognized that efficiency and aggressive competition benefit consumers, even if rivals that fail to offer an equally “good deal” suffer loss of sales or market share. Mergers are one means by which firms can improve their ability to compete. It would be illogical, then, to prohibit mergers because they facilitate efficiency or innovation in production. Unless a merger creates or enhances market power or facilitates its exercise through the elimination of competition—in which case it is prohibited under Section 7—it will not harm, and more likely will benefit, consumers.
Given the well-reasoned rejection of conglomerate theories by leading antitrust scholars and modern jurisprudence, it would be highly wasteful for the FTC and DOJ to consider covering purely conglomerate (nonhorizontal and nonvertical) mergers in new guidelines. Absent new legislation, challenges of such mergers could be expected to fail in court. Regrettably, Khan appears oblivious to that reality.
Khan’s speech ends with a hat tip to internationalism and the ICN:
The U.S., of course, is far from alone in seeing the need for a course correction, and in certain regards our reforms may bring us in closer alignment with other jurisdictions. Given that we are here at ICN, it is worth considering how we, as an international community, can or should react to the shifting consensus.
Antitrust laws have been adopted worldwide, in large part at the urging of the United States (see here). They remain, however, national laws. One would hope that the United States, which in the past was the world leader in developing antitrust economics and enforcement policy, would continue to seek to retain this role, rather than merely emulate other jurisdictions to join an “international community” consensus. Regrettably, this does not appear to be the case. (Indeed, European Commissioner for Competition Margrethe Vestager made specific reference to a “coordinated approach” and convergence between U.S. and European antitrust norms in a widely heralded October 2021 speech at the annual Fordham Antitrust Conference in New York. And Vestager specifically touted European ex ante regulation as well as enforcement in a May 5 ICN speech that emphasized multinational antitrust convergence.)
Lina Khan’s recent ICN speech on merger policy sends all the wrong signals on merger guidelines revisions. It strongly hints that new guidelines will embody pre-conceived interventionist notions at odds with sound economics. By calling for a dramatically new direction in merger policy, it interjects uncertainty into merger planning. Due to their interventionist bent, Khan’s remarks, combined with prior statements by U.S. Assistant Attorney General Jonathan Kanter (see here), may further serve to deter potentially welfare-enhancing consolidations. Whether the federal courts will be willing to defer to a drastically different approach to mergers by the agencies (one at odds with several decades of a careful evolutionary approach, rooted in consumer welfare-oriented economics) is, of course, another story. Stay tuned.
A raft of progressive scholars in recent years have argued that antitrust law remains blind to the emergence of so-called “attention markets,” in which firms compete by converting user attention into advertising revenue. This blindness, the scholars argue, has caused antitrust enforcers to clear harmful mergers in these industries.
It certainly appears the argument is gaining increased attention, for lack of a better word, with sympathetic policymakers. In a recent call for comments regarding their joint merger guidelines, the U.S. Justice Department (DOJ) and Federal Trade Commission (FTC) ask:
How should the guidelines analyze mergers involving competition for attention? How should relevant markets be defined? What types of harms should the guidelines consider?
Unfortunately, the recent scholarly inquiries into attention markets remain inadequate for policymaking purposes. For example, while many progressives focus specifically on antitrust authorities’ decisions to clear Facebook’s 2012 acquisition of Instagram and 2014 purchase of WhatsApp, they largely tend to ignore the competitive constraints Facebook now faces from TikTok (here and here).
When firms that compete for attention seek to merge, authorities need to infer whether the deal will lead to an “attention monopoly” (if the merging firms are the only, or primary, market competitors for some consumers’ attention) or whether other “attention goods” sufficiently constrain the merged entity. Put another way, the challenge is not just in determining which firms compete for attention, but in evaluating how strongly each constrains the others.
As this piece explains, recent attention-market scholarship fails to offer objective, let alone quantifiable, criteria that might enable authorities to identify firms that are unique competitors for user attention. These limitations should counsel policymakers to proceed with increased rigor when they analyze anticompetitive effects.
The Shaky Foundations of Attention Markets Theory
Advocates for more vigorous antitrust intervention have raised (at least) three normative arguments that pertain to attention markets and merger enforcement.
First, because they compete for attention, firms may be more competitively related than they seem at first sight. It is sometimes said that these firms are nascent competitors.
Second, the scholars argue that all firms competing for attention should not automatically be included in the same relevant market.
Finally, scholars argue that enforcers should adopt policy tools to measure market power in these attention markets—e.g., by applying a SSNIC test (“small but significant non-transitory increase in cost”), rather than a SSNIP test (“small but significant non-transitory increase in price”).
There are some contradictions among these three claims. On the one hand, proponents advocate adopting a broad notion of competition for attention, which would ensure that firms are seen as competitively related and thus boost the prospects that antitrust interventions targeting them will be successful. When the shoe is on the other foot, however, proponents fail to follow the logic they have sketched out to its natural conclusion; that is to say, they underplay the competitive constraints that are necessarily imposed by wider-ranging targets for consumer attention. In other words, progressive scholars are keen to ensure the concept is not mobilized to draw broader market definitions than is currently the case:
This “massive market” narrative rests on an obvious fallacy. Proponents argue that the relevant market includes “all substitutable sources of attention depletion,” so the market is “enormous.”
Faced with this apparent contradiction, scholars retort that the circle can be squared by deploying new analytical tools that measure competition for attention, such as the so-called SSNIC test. But do these tools actually resolve the contradiction? It would appear, instead, that they merely enable enforcers to selectively mobilize the attention-market concept in ways that fit their preferences. Consider the following description of the SSNIC test, by John Newman:
But if the focus is on the zero-price barter exchange, the SSNIP test requires modification. In such cases, the “SSNIC” (Small but Significant and Non-transitory Increase in Cost) test can replace the SSNIP. Instead of asking whether a hypothetical monopolist would increase prices, the analyst should ask whether the monopolist would likely increase attention costs. The relevant cost increases can take the form of more time or space being devoted to advertisements, or the imposition of more distracting advertisements. Alternatively, one might ask whether the hypothetical monopolist would likely impose an “SSNDQ” (Small but Significant and Non-Transitory Decrease in Quality). The latter framing should generally be avoided, however, for reasons discussed below in the context of anticompetitive effects. Regardless of framing, however, the core question is what would happen if the ratio between desired content to advertising load were to shift.
The A-SSNIP would posit a hypothetical monopolist who adds a 5-second advertisement before the mobile map, and leaves it there for a year. If consumers accepted the delay, instead of switching to streaming video or other attentional options, then the market is correctly defined and calculation of market shares would be in order.
The key problem is this: consumer switching among platforms is consistent both with competition and with monopoly power. In fact, consumers are more likely to switch to other goods when they are faced with a monopoly. Perhaps more importantly, consumers can and do switch to a whole range of idiosyncratic goods. Absent some quantifiable metric, it is simply impossible to tell which of these alternatives are significant competitors.
None of this is new, of course. Antitrust scholars have spent decades wrestling with similar issues in connection with the price-related SSNIP test. The upshot of those debates is that the SSNIP test does not measure whether price increases cause users to switch. Instead, it examines whether firms can profitably raise prices above the competitive baseline. Properly understood, this nuance renders proposed SSNIC and SSNDQ tests (“small but significant non-transitory decrease in quality”) unworkable.
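To see why profitability relative to the competitive baseline, rather than observed switching, is the operative question, consider the standard critical-loss arithmetic behind the hypothetical-monopolist test (an illustrative sketch; the notation and simplifying assumptions of constant marginal cost are mine, not drawn from the scholars discussed above):

```latex
% Critical-loss sketch of the hypothetical-monopolist (SSNIP) test.
% Illustrative notation: baseline price p_0, marginal cost c, percentage
% margin m = (p_0 - c)/p_0, proposed increase \delta (e.g., 5%), and L the
% fraction of unit sales lost after the increase.
\[
\underbrace{(m+\delta)(1-L)\,p_0\,Q}_{\text{profit after the increase}}
\;\ge\;
\underbrace{m\,p_0\,Q}_{\text{profit at the baseline}}
\quad\Longleftrightarrow\quad
L \;\le\; \frac{\delta}{m+\delta}.
\]
% Some switching (L > 0) is fully consistent with a profitable increase:
% the test turns on whether the actual loss L exceeds the critical loss
% \delta/(m+\delta), not on whether any consumers switch at all.
```

The arithmetic underscores the point in the text: observing that some users switch away tells the enforcer nothing by itself, because a monopolist can profitably sustain a price increase so long as the loss stays below the critical threshold.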
First and foremost, proponents wrongly presume to know how firms would choose to exercise their market power, rendering the resulting tests unfit for policymaking purposes. This mistake largely stems from the conflation of price levels and price structures in two-sided markets. In a two-sided market, the price level refers to the cumulative price charged to both sides of a platform. Conversely, the price structure refers to the allocation of prices among users on both sides of a platform (i.e., how much users on each side contribute to the costs of the platform). This is important because, as Jean-Charles Rochet and Jean Tirole show in their seminal work on two-sided markets, changes to either the price level or the price structure both affect economic output.
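In symbols, the distinction can be stated compactly (an illustrative formalization; the notation is mine):

```latex
% Price level vs. price structure in a two-sided platform.
% Illustrative notation: p_U is the price charged to users, p_A the price
% charged to advertisers.
\[
\text{price level:}\quad P = p_U + p_A,
\qquad
\text{price structure:}\quad (p_U,\,p_A)\ \text{holding}\ p_U + p_A = P.
\]
% Rochet and Tirole's insight is that platform output depends on the split
% (p_U, p_A), not only on the level P. A zero user-side price (p_U = 0)
% funded through p_A is itself a chosen structure, so raising "attention
% costs" on the user side while holding ad prices fixed alters the
% structure, not merely the level.
```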
This has powerful ramifications for antitrust policy in attention markets. To be analytically useful, SSNIC and SSNDQ tests would have to alter the price level while holding the price structure equal. This is the opposite of what attention-market theory advocates are calling for. Indeed, increasing ad loads or decreasing the quality of services provided by a platform, while holding ad prices constant, evidently alters platforms’ chosen price structure.
This matters. Even if the proposed tests were properly implemented (which would be difficult: it is unclear what a 5% quality degradation would look like), the tests would likely lead to false negatives, as they force firms to depart from their chosen (and, thus, presumably profit-maximizing) price structure/price level combinations.
Consider the following illustration: to a first approximation, increasing the quantity of ads served on YouTube would presumably decrease Google’s revenues, as doing so would simultaneously increase output in the ad market and thereby depress ad prices (note that the test becomes even more absurd if ad revenues are held constant). In short, scholars fail to recognize that the consumer side of these markets is intrinsically related to the ad side. Each side affects the other in ways that prevent policymakers from using single-sided ad-load increases or quality decreases as an independent variable.
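A toy numerical model makes this feedback loop explicit. Everything below (the linear viewer-demand function, the inverse ad-price function, and all parameter values) is a hypothetical illustration, not a claim about YouTube’s actual economics:

```python
# Toy (hypothetical) two-sided platform model illustrating why an ad-load
# increase cannot be treated as an independent, one-sided variable.
# All functional forms and parameters are illustrative assumptions.

def viewers(ad_load: float) -> float:
    """Viewer participation falls as the ad load rises (linear toy demand)."""
    return max(0.0, 100.0 * (1.0 - 0.8 * ad_load))

def ad_price(impressions: float) -> float:
    """More ad inventory depresses the per-impression price (toy inverse demand)."""
    return max(0.0, 2.0 - 0.02 * impressions)

def revenue(ad_load: float) -> float:
    """Platform ad revenue: impressions = viewers x ads shown per viewer."""
    impressions = viewers(ad_load) * ad_load
    return impressions * ad_price(impressions)

if __name__ == "__main__":
    for load in (0.2, 0.5, 0.8, 1.1):
        print(f"ad load {load:.1f}: revenue {revenue(load):.2f}")
```

Because viewer participation and per-impression prices both fall as the ad load rises, revenue is non-monotonic in ad load: beyond the profit-maximizing point, serving more ads loses viewers and depresses ad prices at the same time. An enforcer who treats a one-sided ad-load increase as an exogenous test instrument is therefore asking the platform to move away from its chosen, interdependent price structure.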
This leads to a second, more fundamental, flaw. To be analytically useful, these increased ad loads and quality deteriorations would have to be applied from the competitive baseline. Unfortunately, it is not obvious what this baseline looks like in two-sided markets.
Economic theory tells us that, in regular markets, goods are sold at marginal cost under perfect competition. However, there is no such shortcut in two-sided markets. As David Evans and Richard Schmalensee aptly summarize:
An increase in marginal cost on one side does not necessarily result in an increase in price on that side relative to price on the other. More generally, the relationship between price and cost is complex, and the simple formulas that have been derived for single-sided markets do not apply.
In other words, while economic theory suggests perfect competition among multi-sided platforms should result in zero economic profits, it does not say what the allocation of prices will look like in this scenario. There is thus no clearly defined competitive baseline upon which to apply increased ad loads or quality degradations. And this makes the SSNIC and SSNDQ tests unsuitable.
In short, the theoretical foundations necessary to apply the equivalent of a SSNIP test on the “free” side of two-sided platforms are largely absent (or exceedingly hard to apply in practice). Calls to implement SSNIC and SSNDQ tests thus greatly overestimate the current state of the art, as well as decision-makers’ ability to solve intractable economic conundrums. The upshot is that, while proposals to apply the SSNIP test to attention markets may have the trappings of economic rigor, the resemblance is superficial. As things stand, these tests fail to ascertain whether given firms are in competition, and in what market.
The Bait and Switch: Qualitative Indicia
These problems with the new quantitative metrics likely explain why proponents of tougher enforcement in attention markets often fall back upon qualitative indicia to resolve market-definition issues. As John Newman writes:
Courts, including the U.S. Supreme Court, have long employed practical indicia as a flexible, workable means of defining relevant markets. This approach considers real-world factors: products’ functional characteristics, the presence or absence of substantial price differences between products, whether companies strategically consider and respond to each other’s competitive conduct, and evidence that industry participants or analysts themselves identify a grouping of activity as a discrete sphere of competition. … The SSNIC test may sometimes be massaged enough to work in attention markets, but practical indicia will often—perhaps usually—be the preferable method.
Unfortunately, far from resolving the problems associated with measuring market power in digital markets (and of defining relevant markets in antitrust proceedings), this proposed solution would merely focus investigations on subjective and discretionary factors.
This can be easily understood by looking at the FTC’s Facebook complaint regarding its purchases of WhatsApp and Instagram. The complaint argues that Facebook—a “social networking service,” in the eyes of the FTC—was not interchangeable with either mobile-messaging services or online-video services. To support this conclusion, it cites a series of superficial differences. For instance, the FTC argues that online-video services “are not used primarily to communicate with friends, family, and other personal connections,” while mobile-messaging services “do not feature a shared social space in which users can interact, and do not rely upon a social graph that supports users in making connections and sharing experiences with friends and family.”
This is a poor way to delineate relevant markets. It wrongly portrays competitive constraints as a binary question, rather than a matter of degree. Pointing to the functional differences that exist among rival services mostly fails to resolve this question of degree. It also likely explains why advocates of tougher enforcement have often decried the use of qualitative indicia when the shoe is on the other foot—e.g., when authorities concluded that Facebook did not, in fact, compete with Instagram because their services were functionally different.
A second, and related, problem with the use of qualitative indicia is that they are, almost by definition, arbitrary. Take two services that may or may not be competitors, such as Instagram and TikTok. The two share some similarities, as well as many differences. For instance, while both services enable users to share and engage with video content, they differ significantly in the way this content is displayed. Unfortunately, absent quantitative evidence, it is simply impossible to tell whether, and to what extent, the similarities outweigh the differences.
There is significant risk that qualitative indicia may lead to arbitrary enforcement, where markets are artificially narrowed by pointing to superficial differences among firms, and where competitive constraints are overemphasized by pointing to consumer switching.
The Way Forward
The difficulties discussed above should serve as a good reminder that market definition is but a means to an end.
As William Landes, Richard Posner, and Louis Kaplow have all observed (here and here), market definition is merely a proxy for market power, which in turn enables policymakers to infer whether consumer harm (the underlying question to be answered) is likely in a given case.
Given the difficulties inherent in properly defining markets, policymakers should redouble their efforts to precisely measure both potential barriers to entry (the obstacles that may lead to market power) and anticompetitive effects (the potentially undesirable effects of market power), under a case-by-case analysis that looks at both sides of a platform.
Unfortunately, this is not how the FTC has proceeded in recent cases. The FTC’s Facebook complaint, to cite but one example, merely assumes the existence of network effects (a potential barrier to entry) with no effort to quantify their magnitude. Likewise, the agency’s assessment of consumer harm is just two pages long and includes superficial conclusions that appear plucked from thin air:
The benefits to users of additional competition include some or all of the following: additional innovation … ; quality improvements … ; and/or consumer choice … . In addition, by monopolizing the U.S. market for personal social networking, Facebook also harmed, and continues to harm, competition for the sale of advertising in the United States.
Not one of these assertions is based on anything that could remotely be construed as empirical or even anecdotal evidence. Instead, the FTC’s claims are presented as self-evident. Given the difficulties surrounding market definition in digital markets, this superficial analysis of anticompetitive harm is simply untenable.
In short, discussions around attention markets emphasize the important role of case-by-case analysis underpinned by the consumer welfare standard. Indeed, the fact that some of antitrust enforcement’s usual benchmarks are unreliable in digital markets reinforces the conclusion that an empirically grounded analysis of barriers to entry and actual anticompetitive effects must remain the cornerstones of sound antitrust policy. Or, put differently, uncertainty surrounding certain aspects of a case is no excuse for arbitrary speculation. Instead, authorities must meet such uncertainty with an even more vigilant commitment to thoroughness.
U.S. antitrust policy seeks to promote vigorous marketplace competition in order to enhance consumer welfare. For more than four decades, mainstream antitrust enforcers have taken their cue from the U.S. Supreme Court’s statement in Reiter v. Sonotone (1979) that antitrust is “a consumer welfare prescription.” Recent suggestions (see here and here) by new Biden administration Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) leadership that antitrust should promote goals apart from consumer welfare have yet to be embodied in actual agency actions, and they have not been tested by the courts. (Given Supreme Court case law, judicial abandonment of the consumer welfare standard appears unlikely, unless new legislation that displaces it is enacted.)
Assuming that the consumer welfare paradigm retains its primacy in U.S. antitrust, how do the goals of antitrust match up with those of national security? Consistent with federal government pronouncements, the “basic objective of U.S. national security policy is to preserve and enhance the security of the United States and its fundamental values and institutions.” Properly applied, antitrust can retain its consumer welfare focus in a manner consistent with national security interests. Indeed, sound antitrust and national-security policies generally go hand-in-hand. The FTC and the DOJ should keep that in mind in formulating their antitrust policies (spoiler alert: they sometimes have failed to do so).
At first blush, it would seem odd that enlightened consumer-welfare-oriented antitrust enforcement and national-security policy would be in tension. After all, enlightened antitrust enforcement is concerned with targeting transactions that harmfully reduce output and undermine innovation, such as hard-core collusion and courses of conduct that inefficiently exclude rivals and weaken marketplace competition. U.S. national security would seem to be promoted (or, at least, not harmed) by antitrust enforcement directed at supporting stronger, more vibrant American markets.
This initial instinct is correct, if antitrust-enforcement policy indeed reflects economically sound, consumer-welfare-centric principles. But are there examples where antitrust enforcement falls short and thereby is at odds with national security? An evaluation of three areas of interaction between the two American policy interests is instructive.
The degree of congruence between national security and appropriate consumer welfare-enhancing antitrust enforcement is illustrated by a brief discussion of:
defense-industry mergers and joint ventures;
the intellectual property-antitrust interface, with a focus on patent licensing; and
proposed federal antitrust legislation.
The first topic presents an example of clear consistency between consumer-welfare-centric antitrust and national defense. In contrast, the second topic demonstrates that antitrust prosecutions (and policies) that inappropriately weaken intellectual-property protections are inconsistent with national defense interests. The second topic does not manifest a tension between antitrust and national security; rather, it illustrates a tension between national security and unsound antitrust enforcement. In a related vein, the third topic demonstrates how a change in the antitrust statutes that would undermine the consumer welfare paradigm would also threaten U.S. national security.
The consistency between antitrust goals and national security is relatively strong and straightforward in the field of defense-industry-related mergers and joint ventures. The FTC and DOJ traditionally have worked closely with the U.S. Defense Department (DOD) to promote competition and consumer welfare in evaluating business transactions that affect national defense needs.
The DOD has long supported policies to prevent overreliance on a single supplier for critical industrial-defense needs. Such a posture is consistent with the antitrust goal of preventing mergers to monopoly that reduce competition, raise prices, and diminish quality by creating or entrenching a dominant firm. As then-FTC Commissioner William Kovacic commented about an FTC settlement that permitted the United Launch Alliance (an American spacecraft launch service provider established in 2006 as a joint venture between Lockheed Martin and Boeing), “[i]n reviewing defense industry mergers, competition authorities and the DOD generally should apply a presumption that favors the maintenance of at least two suppliers for every weapon system or subsystem.”
Antitrust enforcers have, however, worked with DOD to allow the only two remaining suppliers of a defense-related product or service to combine their operations, subject to appropriate safeguards, when presented with scale economy and quality rationales that advanced national-security interests (see here).
Antitrust enforcers have also consulted and found common cause with DOD in opposing anticompetitive mergers that have national-security overtones. For example, antitrust enforcement actions targeting vertical defense-sector mergers that threaten anticompetitive input foreclosure or facilitate anticompetitive information exchanges are in line with the national-security goal of preserving vibrant markets that offer the federal government competitive, high-quality, innovative, and reasonably priced purchase options for its defense needs.
The FTC’s recent success in convincing Lockheed Martin to drop its proposed acquisition of Aerojet Rocketdyne Holdings fits into this category. (I express no view on the merits of this matter; I merely cite it as an example of FTC-DOD cooperation in considering a merger challenge.) In its February 2022 press release announcing the abandonment of this merger, the FTC stated that “[t]he acquisition would have eliminated the country’s last independent supplier of key missile propulsion inputs and given Lockheed the ability to cut off its competitors’ access to these critical components.” The FTC also emphasized the full consistency between its enforcement action and national-security interests:
Simply put, the deal would have resulted in higher prices and diminished quality and innovation for programs that are critical to national security. The FTC’s enforcement action in this matter dovetails with the DoD report released this week recommending stronger merger oversight of the highly concentrated defense industrial base.
Shifts in government IP-antitrust patent-licensing policy perspectives
Standard setting through standard-setting organizations (SSOs) has been a particularly important means of producing valuable benchmarks (standards) that have enabled new patent-backed technologies to drive innovation and the mass distribution of new high-tech products, such as smartphones. The licensing of patents that cover and make possible valuable standards—“standard-essential patents” or SEPs—has played a crucial role in bringing these products to market and in encouraging follow-on innovations that have driven fast-paced, welfare-enhancing improvements in product and process quality.
As the DOJ-FTC Antitrust Guidelines for the Licensing of Intellectual Property have long explained:
Licensing, cross-licensing, or otherwise transferring intellectual property (hereinafter “licensing”) can facilitate integration of the licensed property with complementary factors of production. This integration can lead to more efficient exploitation of the intellectual property, benefiting consumers through the reduction of costs and the introduction of new products. Such arrangements increase the value of intellectual property to consumers and owners. Licensing can allow an innovator to capture returns from its investment in making and developing an invention through royalty payments from those that practice its invention, thus providing an incentive to invest in innovative efforts. …
[L]imitations on intellectual property licenses may serve procompetitive ends by allowing the licensor to exploit its property as efficiently and effectively as possible. These various forms of exclusivity can be used to give a licensee an incentive to invest in the commercialization and distribution of products embodying the licensed intellectual property and to develop additional applications for the licensed property. The restrictions may do so, for example, by protecting the licensee against free riding on the licensee’s investments by other licensees or by the licensor. They may also increase the licensor’s incentive to license, for example, by protecting the licensor from competition in the licensor’s own technology in a market niche that it prefers to keep to itself.
Unfortunately, however, FTC and DOJ antitrust policies over the last 15 years have too often belied this generally favorable view of licensing practices with respect to SEPs. (See generally here, here, and here). Notably, the antitrust agencies have at various times taken policy postures and enforcement actions indicating that SEP holders may face antitrust challenges if:
they fail to license all comers, including competitors, on fair, reasonable, and nondiscriminatory (FRAND) terms; and
they seek to obtain injunctions against infringers.
In addition, antitrust policy officials (see 2011 FTC Report) have described FRAND price terms as cabined by the difference between the licensing rates for the first-best (included in the standard) and second-best (not included in the standard) competing patented technologies available prior to the adoption of a standard. This pricing measure—based on the “incremental difference” between first- and second-best technologies—has been described as necessary to prevent SEP holders from deriving artificial “monopoly rents” that reflect the market power conferred by a standard. (But see then-FTC Commissioner Joshua Wright’s 2013 essay to the contrary, based on the economics of incomplete contracts.)
This approach to SEPs undervalues them, harming the economy. Limitations on seeking injunctions (a classic property-right remedy) encourage opportunistic patent infringement and artificially disfavor SEP holders in bargaining over licensing terms with technology implementers, thereby reducing the value of SEPs. SEP holders are further disadvantaged by the presumption that they must license all comers. They also are harmed by the implication that they must be limited to a relatively low hypothetical “ex ante” licensing rate—a rate that totally fails to take into account the substantial economic-welfare value that will accrue to the economy due to their contribution to the standard. Considered individually and as a whole, these negative factors discourage innovators from participating in standardization, to the detriment of standards quality. Lower-quality standards translate into inferior standardized products and processes and reduced innovation.
Recognizing this problem, in 2018, DOJ Assistant Attorney General for Antitrust Makan Delrahim announced a “New Madison Approach” (NMA) to SEP licensing, which posited that:
antitrust remedies are inappropriate for patent-licensing disputes between SEP-holders and implementers of a standard;
SSOs should not allow collective actions by standard-implementers to disfavor patent holders;
SSOs and courts should be hesitant to restrict SEP holders’ right to exclude implementers from access to their patents by seeking injunctions; and
unilateral and unconditional decisions not to license a patent should be per se legal. (See, for example, here and here.)
Acceptance of the NMA would have counteracted the economically harmful degradation of SEPs stemming from prior government policies.
Regrettably, antitrust-enforcement-agency statements during the last year effectively have rejected the NMA. Most recently, in December 2021, the DOJ issued for public comment a Draft Policy Statement on Licensing Negotiations and Remedies for SEPs, which would displace a 2019 statement that had been in line with the NMA. Unless the FTC and Biden DOJ rethink their new position and decide instead to support the NMA, the anti-innovation approach to SEPs will once again prevail, with unfortunate consequences for American innovation.
The “weaker patents” implications of the draft policy statement would also prove detrimental to national security, as explained in a comment on the statement by a group of leading law, economics, and business scholars (including Nobel Laureate Vernon Smith) convened by the International Center for Law & Economics:
China routinely undermines U.S. intellectual property protections through its industrial policy. The government’s stated goal is to promote “fair and reasonable” international rules, but it is clear that China stretches its power over intellectual property around the world by granting “anti-suit injunctions” on behalf of Chinese smartphone makers, designed to curtail enforcement of foreign companies’ patent rights. …
Insufficient protections for intellectual property will hasten China’s objective of dominating collaborative standard development in the medium to long term. Simultaneously, this will engender a switch to greater reliance on proprietary, closed standards rather than collaborative, open standards. These harmful consequences are magnified in the context of the global technology landscape, and in light of China’s strategic effort to shape international technology standards. Chinese companies, directed by their government authorities, will gain significant control of the technologies that will underpin tomorrow’s digital goods and services.
A Center for Strategic and International Studies submission on the draft policy statement (signed by a former deputy secretary of the DOD, as well as former directors of the U.S. Patent and Trademark Office and the National Institute of Standards and Technology) also raised China-related national-security concerns:
[T]he largest short-term and long-term beneficiaries of the 2021 Draft Policy Statement are firms based in China. Currently, China is the world’s largest consumer of SEP-based technology, so weakening protection of American owned patents directly benefits Chinese manufacturers. The unintended effect of the 2021 Draft Policy Statement will be to support Chinese efforts to dominate critical technology standards and other advanced technologies, such as 5G. Put simply, devaluing U.S. patents is akin to a subsidized tech transfer to China.
Furthermore, in a more general vein, leading innovation economist David Teece also noted the negative national-security implications in his submission on the draft policy statement:
The US government, in reviewing competition policy issues that might impact standards, therefore needs to be aware that the issues at hand have tremendous geopolitical consequences and cannot be looked at in isolation. … Success in this regard will promote competition and is our best chance to maintain technological leadership—and, along with it, long-term economic growth and consumer welfare and national security.
That’s not all. In its public comment warning against precipitous finalization of the draft policy statement, the Innovation Alliance noted that, in recent years, major foreign jurisdictions have rejected the notion that SEP holders should be deprived of the opportunity to seek injunctions. The Innovation Alliance opined in detail on the China-related national-security issues (footnotes omitted):
[T]he proposed shift in policy will undermine the confidence and clarity necessary to incentivize investments in important and risky research and development while simultaneously giving foreign competitors who do not rely on patents to drive investment in key technologies, like China, a distinct advantage. …
The draft policy statement … would devalue SEPs, and undermine the ability of U.S. firms to invest in the research and development needed to maintain global leadership in 5G and other critical technologies.
Without robust American investments, China—which has clear aspirations to control and lead in critical standards and technologies that are essential to our national security—will be left without any competition. Since 2015, President Xi has declared “whoever controls the standards controls the world.” China has rolled out the “China Standards 2035” plan and has outspent the United States by approximately $24 billion in wireless communications infrastructure, while China’s five-year economic plan calls for $400 billion in 5G-related investment.
Simply put, the draft policy statement will give an edge to China in the standards race because, without injunctions, American companies will lose the incentive to invest in the research and development needed to lead in standards setting. Chinese companies, on the other hand, will continue to race forward, funded primarily not by license fees, but by the focused investment of the Chinese government. …
Public hearings are necessary to take into full account the uncertainty of issuing yet another policy on this subject in such a short time period.
A key part of those hearings and further discussions must be the national security implications of a further shift in patent enforceability policy. Our future safety depends on continued U.S. leadership in areas like 5G and artificial intelligence. Policies that undermine the enforceability of patent rights disincentivize the substantial private sector investment necessary for research and development in these areas. Without that investment, development of these key technologies will begin elsewhere—likely China. Before any policy is accepted, key national-security stakeholders in the U.S. government should be asked for their official input.
These are not the only comments that raised the negative national-security ramifications of the draft policy statement (see here and here). For example, current Republican and Democratic senators, former International Trade Commissioners, and former top DOJ and FTC officials also noted concerns. What’s more, the Patent Protection Society of China, which represents leading Chinese corporate implementers, filed a rather nonanalytic submission in favor of the draft statement. As one leading patent-licensing lawyer explains: “UC Berkeley Law Professor Mark Cohen, whose distinguished government service includes serving as the USPTO representative in China, submitted a thoughtful comment explaining how the draft Policy Statement plays into China’s industrial and strategic interests.”
Finally, by weakening patent protection, the draft policy statement is at odds with the 2021 National Security Commission on Artificial Intelligence Report, which called for the United States to “[d]evelop and implement national IP policies to incentivize, expand, and protect emerging technologies[,]” in response to Chinese “leveraging and exploiting intellectual property (IP) policies as a critical tool within its national strategies for emerging technologies.”
In sum, adoption of the draft policy statement would raise antitrust risks, weaken key property rights protections for SEPs, and undercut U.S. technological innovation efforts vis-à-vis China, thereby undermining U.S. national security.
FTC v. Qualcomm: Misguided enforcement and national security
U.S. national-security interests have been threatened by more than just the recent SEP policy pronouncements. In filing a January 2017 antitrust suit (at the very end of the Obama administration) against Qualcomm’s patent-licensing practices, the FTC (by a partisan 2-1 vote) ignored the economic efficiencies that underpinned this highly successful American technology company’s practices. Had the suit succeeded, U.S. innovation in a critically important technology area would have needlessly suffered, with China as a major beneficiary. A recent Federalist Society Regulatory Transparency Project report on the New Madison Approach underscored the broad policy implications of FTC v. Qualcomm (citations deleted):
The FTC’s Qualcomm complaint reflected the anti-SEP bias present during the Obama administration. If it had been successful, the FTC’s prosecution would have seriously undermined the freedom of the company to engage in efficient licensing of its SEPs.
Qualcomm is perhaps the world’s leading wireless technology innovator. It has developed, patented, and licensed key technologies that power smartphones and other wireless devices, and continues to do so. Many of Qualcomm’s key patents are SEPs subject to FRAND, directed to communications standards adopted by wireless device makers. Qualcomm also makes computer processors and chips embodied in cutting-edge wireless devices. Thanks in large part to Qualcomm technology, those devices have improved dramatically over the last decade, offering consumers a vast array of new services at a lower and lower price, when quality is factored in. Qualcomm thus is the epitome of a high-tech American success story that has greatly benefited consumers.
Qualcomm: (1) sells its chips to “downstream” original equipment manufacturers (OEMs, such as Samsung and Apple), on the condition that the OEMs obtain licenses to Qualcomm SEPs; and (2) refuses to license its FRAND-encumbered SEPs to rival chip makers, while allowing those rivals to create and sell chips embodying Qualcomm SEP technologies to those OEMs that have entered a licensing agreement with Qualcomm.
The FTC’s 2017 antitrust complaint, filed in federal district court in San Francisco, charged that Qualcomm’s “no license, no chips” policy “forced” OEM cell phone manufacturers to pay elevated royalties on products that use a competitor’s baseband processors. The FTC deemed this an illegal “anticompetitive tax” on the use of rivals’ processors, since phone manufacturers “could not run the risk” of declining licenses and thus losing all access to Qualcomm’s processors (which would be needed to sell phones on important cellular networks). The FTC also argued that Qualcomm’s refusal to license its rivals despite its SEP FRAND commitment violated the antitrust laws. Finally, the FTC asserted that a 2011-2016 Qualcomm exclusive dealing contract with Apple (in exchange for reduced patent royalties) had excluded business opportunities for Qualcomm competitors.
The federal district court held for the FTC. It ordered that Qualcomm end these supposedly anticompetitive practices and renegotiate its many contracts. [Among the beneficiaries of new pro-implementer contract terms would have been a leading Chinese licensee of Qualcomm’s, Huawei, the huge Chinese telecommunications company that has been accused by the U.S. government of using technological “back doors” to spy on the United States.]
Qualcomm appealed, and in August 2020 a panel of the Ninth Circuit Court of Appeals reversed the district court, holding for Qualcomm. Some of the key points underlying this holding were: (1) Qualcomm had no antitrust duty to deal with competitors, consistent with established Supreme Court precedent (a very narrow exception to this precedent did not apply); (2) Qualcomm’s rates were chip supplier neutral because all OEMs paid royalties, not just rivals’ customers; (3) the lower court failed to show how the “no license, no chips” policy harmed Qualcomm’s competitors; and (4) Qualcomm’s agreements with Apple did not have the effect of substantially foreclosing the market to competitors. The Ninth Circuit as a whole rejected the FTC’s “en banc” appeal for review of the panel decision.
The appellate decision in Qualcomm largely supports pillar four of the NMA, that unilateral and unconditional decisions not to license a patent should be deemed legal under the antitrust laws. More generally, the decision evinces a refusal to find anticompetitive harm in licensing markets without hard empirical support. The FTC’s and the lower court’s findings of “harm” had been essentially speculative and anecdotal at best. They had ignored the “big picture” that the markets in which Qualcomm operates had seen vigorous competition and the conferral of enormous and growing welfare benefits on consumers, year by year. The lower court and the FTC had also turned a deaf ear to a legitimate efficiency-related business rationale that explained Qualcomm’s “no license, no chips” policy – a fully justifiable desire to obtain a fair return on Qualcomm’s patented technology.
Qualcomm is well reasoned, and in line with sound modern antitrust precedent, but it is only one holding. The extent to which this case’s reasoning proves influential in other courts may in part depend on the policies advanced by DOJ and the FTC going forward. Thus, a preliminary examination of the Biden administration’s emerging patent-antitrust policy is warranted. [Subsequent discussion shows that the Biden administration apparently has rejected pro-consumer policies embodied in the 9th U.S. Circuit’s Qualcomm decision and in the NMA.]
Although the 9th Circuit did not comment on them, national-security-policy concerns weighed powerfully against the FTC v. Qualcomm suit. In a July 2019 Statement of Interest (SOI) filed with the circuit court, DOJ cogently set forth the antitrust flaws in the district court’s decision favoring the FTC. Furthermore, the SOI also explained that “the public interest” favored a stay of the district court holding, due to national-security concerns (described in some detail in statements by the departments of Defense and Energy, appended to the SOI):
[T]he public interest also takes account of national security concerns. Winter v. NRDC, 555 U.S. 7, 23-24 (2008). This case presents such concerns. In the view of the Executive Branch, diminishment of Qualcomm’s competitiveness in 5G innovation and standard-setting would significantly impact U.S. national security. A251-54 (CFIUS); LD ¶¶10-16 (Department of Defense); ED ¶¶9-10 (Department of Energy). Qualcomm is a trusted supplier of mission-critical products and services to the Department of Defense and the Department of Energy. LD ¶¶5-8; ED ¶¶8-9. Accordingly, the Department of Defense “is seriously concerned that any detrimental impact on Qualcomm’s position as global leader would adversely affect its ability to support national security.” LD ¶16.
The [district] court’s remedy [requiring the renegotiation of Qualcomm’s licensing contracts] is intended to deprive, and risks depriving, Qualcomm of substantial licensing revenue that could otherwise fund time-sensitive R&D and that Qualcomm cannot recover later if it prevails. See, e.g., Op. 227-28. To be sure, if Qualcomm ultimately prevails, vacatur of the injunction will limit the severity of Qualcomm’s revenue loss and the consequent impairment of its ability to perform functions critical to national security. The Department of Defense “firmly believes,” however, “that any measure that inappropriately limits Qualcomm’s technological leadership, ability to invest in [R&D], and market competitiveness, even in the short term, could harm national security. The risks to national security include the disruption of [the Department’s] supply chain and unsure U.S. leadership in 5G.” LD ¶3. Consequently, the public interest necessitates a stay pending this Court’s resolution of the merits. In these rare circumstances, the interest in preventing even a risk to national security—“an urgent objective of the highest order”—presents reason enough not to enforce the remedy immediately. Int’l Refugee Assistance Project, 137 S. Ct. at 2088 (internal quotations omitted).
Not all national-security arguments against antitrust enforcement may be well-grounded, of course. The key point is that the interests of national security and consumer-welfare-centric antitrust are fully aligned when antitrust suits would inefficiently undermine the competitive vigor of a firm or firms that play a major role in supporting U.S. national-security interests. Such was the case in FTC v. Qualcomm. More generally, heightened antitrust scrutiny of efficient patent-licensing practices (as threatened by the Biden administration) would tend to diminish innovation by U.S. patentees, particularly in areas covered by standards that are key to leading global technologies. Such a diminution in innovation will tend to weaken American advantages in important industry sectors that are vital to U.S. national-security interests.
Proposed federal antitrust legislation
Proposed federal antitrust legislation being considered by Congress (see here, here, and here for informed critiques) would prescriptively restrict certain large technology companies’ business transactions. If enacted, such legislation would preclude case-specific analysis of potential transaction-specific efficiencies, thereby undermining the consumer welfare standard at the heart of current sound and principled antitrust enforcement. The legislation would also be at odds with our national-security interests, as a recent U.S. Chamber of Commerce paper explains:
Congress is considering new antitrust legislation which, perversely, would weaken leading U.S. technology companies by crafting special purpose regulations under the guise of antitrust to prohibit those firms from engaging in business conduct that is widely acceptable when engaged in by rival competitors.
A series of legislative proposals – some of which already have been approved by relevant Congressional committees – would, among other things: dismantle these companies; prohibit them from engaging in significant new acquisitions or investments; require them to disclose sensitive user data and sensitive IP and trade secrets to competitors, including those that are foreign-owned and controlled; facilitate foreign influence in the United States; and compromise cybersecurity. These bills would fundamentally undermine American security interests while exempting from scrutiny Chinese and other foreign firms that do not meet arbitrary user and market capitalization thresholds specified in the legislation. …
The United States has never used legislation to punish success. In many industries, scale is important and has resulted in significant gains for the American economy, including small businesses. U.S. competition law promotes the interests of consumers, not competitors. It should not be used to pick winners and losers in the market or to manage competitive outcomes to benefit select competitors. Aggressive competition benefits consumers and society, for example by pushing down prices, disrupting existing business models, and introducing innovative products and services.
If enacted, the legislative proposals would drag the United States down in an unfolding global technological competition. Companies captured by the legislation would be required to compete against integrated foreign rivals with one hand tied behind their backs. Those firms that are the strongest drivers of U.S. innovation in AI, quantum computing, and other strategic technologies would be hamstrung or even broken apart, while foreign and state-backed producers of these same technologies would remain unscathed and seize the opportunity to increase market share, both in the U.S. and globally. …
Instead of warping antitrust law to punish a discrete group of American companies, the U.S. government should focus instead on vigorous enforcement of current law and on vocally opposing and effectively countering foreign regimes that deploy competition law and other legal and regulatory methods as industrial policy tools to unfairly target U.S. companies. The U.S. should avoid self-inflicted wounds to our competitiveness and national security that would result from turning antitrust into a weapon against dynamic and successful U.S. firms.
Consistent with this analysis, former Obama administration Defense Secretary Leon Panetta and former Trump administration Director of National Intelligence Dan Coats argued in a letter to U.S. House leadership (see here) that “imposing severe restrictions solely on U.S. giants will pave the way for a tech landscape dominated by China — echoing a position voiced by the Big Tech companies themselves.”
The national-security arguments against current antitrust legislative proposals, like the critiques of the unfounded FTC v. Qualcomm case, represent an alignment between sound antitrust policy and national-security analysis. Unfounded antitrust attacks on efficient business practices by large firms that help maintain U.S. technological leadership in key areas undermine both principled antitrust and national security.
Enlightened antitrust enforcement, centered on consumer welfare, can and should be read in a manner that is harmonious with national-security interests.
The cooperation between U.S. federal antitrust enforcers and the DOD in assessing defense-industry mergers and joint ventures is, generally speaking, an example of successful harmonization. This success reflects the fact that antitrust enforcers carry out their reviews of those transactions with an eye toward accommodating efficiencies that advance defense goals without sacrificing consumer welfare. Close antitrust-agency consultation with DOD is key to that approach.
Unfortunately, federal enforcement directed toward efficient intellectual-property licensing, as manifested in the Qualcomm case, reflects a disharmony between antitrust and national security. This disharmony could be eliminated if DOJ and the FTC adopted a dynamic view of intellectual property and the substantial economic-welfare benefits that flow from restrictive patent-licensing transactions.
In sum, a dynamic analysis reveals that consumer welfare is enhanced, not harmed, by not subjecting such licensing arrangements to antitrust threat. A more permissive approach to licensing is thus consistent with principled antitrust and with the national security interest of protecting and promoting strong American intellectual property (and, in particular, patent) protection. The DOJ and the FTC should keep this in mind and make appropriate changes to their IP-antitrust policies forthwith.
Finally, proposed federal antitrust legislation would bring about statutory changes that would simultaneously displace consumer welfare considerations and undercut national security interests. As such, national security is supported by rejecting unsound legislation, in order to keep in place consumer-welfare-based antitrust enforcement.
The acceptance and implementation of due-process standards confer a variety of welfare benefits on society. As Christopher Yoo, Thomas Fetzer, Shan Jiang, and Yong Huang explain, strong procedural due-process protections promote: (1) compliance with basic norms of impartiality; (2) greater accuracy of decisions; (3) stronger economic growth; (4) increased respect for government; (5) better compliance with the law; (6) better control of the bureaucracy; (7) restraints on the influence of special-interest groups; and (8) reduced corruption.
Recognizing these benefits (and consistent with the long Anglo-American tradition of recognizing due-process rights that dates back to Magna Carta), the U.S. government (USG) has long been active in advancing the adoption of due-process principles by competition-law authorities around the world, working particularly through the Organisation for Economic Co-operation and Development (OECD) and the International Competition Network (ICN). More generally, due process may be seen as an aspect of the rule of law, which is as important in antitrust as in other legal areas.
The USG has supported OECD Competition Committee work on due-process safeguards, which began in 2010 and culminated in the OECD ministers’ October 2021 adoption of a “Recommendation on Transparency and Procedural Fairness in Competition Law Enforcement.” This recommendation calls for: (1) transparency and predictability in competition-law enforcement; (2) independence, impartiality, and professionalism of competition authorities; (3) non-discrimination, proportionality, and consistency in the treatment of parties subject to scrutiny; (4) timeliness in handling cases; (5) meaningful engagement with parties (including parties’ right to respond and be heard); (6) protection of confidential and privileged information; (7) impartial judicial review of enforcement decisions; and (8) periodic review of policies, rules, procedures, and guidelines, to ensure that they are aligned with the preceding seven principles.
The USG has also worked through the ICN to generate support for the acceptance of due-process principles by ICN member competition agencies and their governments. In describing ICN due-process initiatives, James Rill and Jana Seidl have explained that “[t]he current challenge is to determine the extent to which the ICN, as a voluntary organization, can or should establish mechanisms to evaluate implementation of … [due process] norms by its members and even non-members.”
In 2019, the ICN announced the creation of a Framework for Competition Agency Procedures (CAP), open to both ICN and non-ICN national and multinational competition agencies (most prominently, the EU’s Directorate-General for Competition). The CAP essentially embodied the principles of a June 2018 DOJ framework proposal. A September 2021 CAP Report (footnotes omitted) issued at an ICN steering-group meeting noted that the CAP had 73 members, and summarized the history and goals of the CAP as follows:
The ICN CAP is a non-binding, opt-in framework. It makes use of the ICN infrastructure to maximize visibility and impact while minimizing the administrative burden for participants that operate in different legal regimes and enforcement systems with different resource constraints. The ICN CAP promotes agreement among competition agencies worldwide on fundamental procedural norms. The Multilateral Framework for Procedures project, launched by the US Department of Justice in 2018, was the starting point for what is now the ICN CAP.
The ICN CAP rests on two pillars: the first pillar is a catalogue of fundamental, consensus principles for fair and effective agency procedures that reflect the broad consensus within the global competition community. The principles address: non-discrimination, transparency, notice of investigations, timely resolution, confidentiality protections, conflicts of interest, opportunity to defend, representation, written decisions, and judicial review.
The second pillar of the ICN CAP consists of two processes: the “CAP Cooperation Process,” which facilitates a dialogue between participating agencies, and the “CAP Review Process,” which enhances transparency about the rules governing participants’ investigation and enforcement procedures.
The ICN CAP template is the practical implementation tool for the CAP. Participants each submit CAP templates, outlining how their agencies adhere to each of the CAP principles. The templates allow participants to share and explain important features of their systems, including links and other references to related materials such as legislation, rules, regulations, and guidelines. The CAP templates are a useful resource for agencies to consult when they would like to gain a quick overview of other agencies’ procedures, benchmark with peer agencies, and develop new processes and procedures.
Through the two pillars and the template, the CAP provides a framework for agencies to affirm the importance of the CAP principles, to confer with other jurisdictions, and to illustrate how their regulations and guidelines adhere to those principles.
In short, the overarching goal of the ICN CAP is to give agencies a “nudge” to implement due-process principles by encouraging consultation with peer CAP members and exposing to public view agencies’ actual due-process record. The extent to which agencies will prove willing to strengthen their commitment to due process because of the CAP, or even join the CAP, remains to be seen. (China’s competition agency, the State Administration for Market Regulation (SAMR), has not joined the ICN CAP.)
Antitrust, Due Process, and the Rule of Law at the DOJ and the FTC
Now that the ICN CAP and OECD recommendation are in place, it is important that the DOJ and Federal Trade Commission (FTC), as long-time international promoters of due process, lead by example in adhering to all of those multinational instruments’ principles. A failure to do so would, in addition to having negative welfare consequences for affected parties (and U.S. economic welfare), undermine USG international due-process advocacy. Less effective advocacy efforts could, of course, impose additional costs on American businesses operating overseas, by subjecting them to more procedurally defective foreign antitrust prosecutions than otherwise.
With those considerations in mind, let us briefly examine the current status of due-process protections afforded by the FTC and DOJ. Although traditionally robust procedural safeguards remain strong overall, some worrisome developments during the first year of the Biden administration merit highlighting. Those developments implicate classic procedural issues and some broader rule-of-law concerns. (This commentary does not examine due-process and rule-of-law issues associated with U.S. antitrust enforcement at the state level, a topic that warrants scrutiny as well.)
New FTC leadership has taken several actions that have unfortunate due-process and rule-of-law implications (many of them through highly partisan 3-2 commission votes featuring strong dissents).
Consider the HSR Act, a Congressional compromise that gave enforcers advance notice of deals and parties the benefit of repose. HSR review [at the FTC] now faces death by a thousand cuts. We have hit month nine of a “temporary” and “brief” suspension of early termination. Letters are sent to parties when their waiting periods expire, warning them to close at their own risk. Is the investigation ongoing? Is there a set amount of time the parties should wait? No one knows! The new prior approval policy will flip the burden of proof and capture many deals below statutory thresholds. And sprawling investigations covering non-competition concerns exceed our Clayton Act authority.
These policy changes impose a gratuitous tax on merger activity – anticompetitive and procompetitive alike. There are costs to interfering with the market for corporate control, especially as we attempt to rebound from the pandemic. If new leadership wants the HSR Act rewritten, they should persuade Congress to amend it rather than taking matters into their own hands.
Uncertainty and delay surrounding merger proposals, and new merger-review processes that appear to flout statutory commands, are FTC “innovations” in obvious tension with due-process guarantees.
FTC rulemaking initiatives have due-process and rule-of-law problems. As Commissioner Wilson noted (footnotes omitted), “[t]he [FTC] majority changed our rules of practice to limit stakeholder input and consolidate rulemaking power in the chair’s office. In Commissioner [Noah] Phillips’ words, these changes facilitate more rules, but not better ones.” Lack of stakeholder input offends due process. Even more serious, however, is the fact that far-reaching FTC competition rules are being planned (see the December 2021 FTC Statement of Regulatory Priorities). FTC competition rulemaking is likely beyond its statutory authority and would fail a cost-benefit analysis (see here). Moreover, even if competition rules survived, they would offend the rule of law (see here) by “lead[ing] to disparate legal treatment of a firm’s business practices, depending upon whether the FTC or the U.S. Justice Department was the investigating agency.”
The FTC’s July 2021 withdrawal of its 2015 “Statement of Enforcement Principles Regarding ‘Unfair Methods of Competition’ [UMC] Under Section 5 of the FTC Act” likewise undercuts the rule of law (see here). The 2015 Statement had tended to increase predictability in enforcement by tying the FTC’s exercise of its UMC authority to well-understood antitrust rule-of-reason principles and the generally accepted consumer welfare standard. By withdrawing the statement (over the dissents of Commissioners Wilson and Phillips) without promulgating a new policy, the FTC majority reduced enforcement guidance and generated greater legal uncertainty. The notion that the FTC may apply the UMC concept in an unbounded fashion lacks legal principle and threatens to chill innovative and welfare-enhancing business conduct.
Finally, the FTC’s abrupt September 2021 withdrawal of its approval of the jointly issued 2020 DOJ-FTC Vertical Merger Guidelines (again over the dissents of Commissioners Wilson and Phillips) offends the rule of law in three ways. As Commissioner Wilson explains, it engenders confusion as to FTC policies regarding vertical-merger analysis going forward; it appears to reflect flawed economic thinking regarding vertical integration (which may in turn lead to enforcement error); and it creates a potential tension between DOJ and FTC approaches to vertical acquisitions (the third concern may disappear if and when DOJ and FTC agree to new merger guidelines).
As of now, the Biden administration’s DOJ has taken fewer actions that implicate rule-of-law and due-process concerns. Two recent initiatives with significant rule-of-law implications, however, deserve mention.
First, on Dec. 6, 2021, DOJ suddenly withdrew a 2019 policy statement on “Licensing Negotiations and Remedies for Standards-Essential Patents Subject to Voluntary F/RAND Commitments.” In so doing, DOJ simultaneously released a new draft policy statement on the same topic, and requested public comments. The timing of the withdrawal was peculiar, since the U.S. Patent and Trademark Office (PTO) and the National Institute of Standards and Technology (NIST)—which had joined with DOJ in the 2019 policy statement (which itself had replaced a 2013 policy statement)—did not yet have new Senate-confirmed leadership and were apparently not involved in the withdrawal. What’s more, DOJ originally requested that public comments be filed by the beginning of January, a ridiculously short amount of time for such a complex topic. (It later relented and established an early February deadline.) More serious than these procedural irregularities, however, are two new features of the draft policy statement: (1) its delineation of a suggested private-negotiation framework for patent licensing; and (2) its assertion that standard-essential patent (SEP) holders essentially forfeit the right to seek an injunction. These provisions, though not binding, may have a coercive effect on some private negotiators, and they problematically insert the government into matters that are appropriately the province of private businesses and the courts. Such an involvement by government enforcers in private negotiations, which treats one category of patents (SEPs) less favorably than others, raises rule-of-law questions.
Second, in January 2022, DOJ and the FTC jointly issued a “Request for Information on Merger Enforcement” [RFI] that contemplated the issuance of new merger guidelines (see my recent analysis, here). The RFI was chock full of queries to prospective commentators that generally reflected a merger-skeptical tone. This suggests a predisposition to challenge mergers that, if embodied in guidelines language, could discourage some (or perhaps many) non-problematic consolidations from being proposed. New merger guidelines that impliedly were anti-merger would be a departure from previous guidelines, which stated in neutral fashion that they would consider both the anticompetitive risks and procompetitive benefits of mergers under review. A second major concern is that the enforcement agencies might produce long and detailed guidelines containing all or most of the many theories of competitive harm found in the RFI. Overly complex guidelines would provide no true guidance to private parties, inconsistent with the principle that individuals should be informed of what the law is. Such guidelines also would give enforcers greater flexibility to selectively pick and choose theories best suited to blocking particular mergers. As such, the guidelines might be viewed by judges as justifications for arbitrary, rather than principled, enforcement, at odds with the rule of law.
It is to be hoped that the FTC and DOJ will take into account this international dimension in assessing the merits of antitrust “reforms” now under consideration. New enforcement policies that sow delay and uncertainty undermine the rule of law and are inconsistent with due-process principles. The consumer-welfare harm that may flow from such deficient policies may be substantial. The agency missteps identified above should be rectified, and new policies that would weaken due-process protections and undermine the rule of law should be avoided.
President Joe Biden’s July 2021 executive order set forth a commitment to reinvigorate U.S. innovation and competitiveness. The administration’s efforts to pass the America COMPETES Act would appear to further demonstrate a serious intent to pursue these objectives.
Yet several actions taken by federal agencies threaten to undermine the intellectual-property rights and transactional structures that have driven the exceptional performance of U.S. firms in key areas of the global innovation economy. These regulatory missteps together represent a policy “lose-lose” that lacks any sound basis in innovation economics and threatens U.S. leadership in mission-critical technology sectors.
Life Sciences: USTR Campaigns Against Intellectual-Property Rights
In the pharmaceutical sector, the administration’s signature action has been an unprecedented campaign by the Office of the U.S. Trade Representative (USTR) to block enforcement of patents and other intellectual-property rights held by companies that have broken records in the speed with which they developed and manufactured COVID-19 vaccines on a mass scale.
Patents were not an impediment in this process. To the contrary: they were necessary predicates to induce venture-capital investment in a small firm like BioNTech, which undertook drug development and then partnered with the much larger Pfizer to execute testing, production, and distribution. If success in vaccine development is rewarded with expropriation, this vital public-health sector is unlikely to attract investors in the future.
Contrary to increasingly common assertions that the Bayh-Dole Act (which enables universities to seek patents arising from research funded by the federal government) “robs” taxpayers of intellectual property they funded, the development of COVID-19 vaccines by scientist-founded firms illustrates how the combination of patents and private capital is essential to convert academic research into life-saving medical solutions. The biotech ecosystem has long relied on patents to structure partnerships among universities, startups, and large firms. The costly path from lab to market relies on a secure property-rights infrastructure to ensure exclusivity, without which no investor would put capital at stake in what is already a high-risk, high-cost enterprise.
This is not mere speculation. During the decades prior to the Bayh-Dole Act, the federal government placed strict limitations on the ability to patent or exclusively license innovations arising from federally funded research projects. The result: the market showed little interest in making the investment needed to convert those innovations into commercially viable products that might benefit consumers. This history casts great doubt on the wisdom of the USTR’s campaign to limit the ability of biopharmaceutical firms to maintain legal exclusivity over certain life sciences innovations.
Genomics: FTC Attempts to Block the Illumina/GRAIL Acquisition
In the genomics industry, the Federal Trade Commission (FTC) has devoted extensive resources to oppose the acquisition by Illumina—the market leader in next-generation DNA-sequencing equipment—of a medical-diagnostics startup, GRAIL (an Illumina spinoff), that has developed an early-stage cancer screening test.
It is hard to see the competitive threat. GRAIL is a pre-revenue company that operates in a novel market segment and its diagnostic test has not yet received approval from the Food and Drug Administration (FDA). To address concerns over barriers to potential competitors in this nascent market, Illumina has committed to 12-year supply contracts that would bar price increases or differential treatment for firms that develop oncology-detection tests requiring use of the Illumina platform.
The FTC’s case against Illumina’s re-acquisition of GRAIL relies on theoretical predictions of consumer harm in a market that is not yet operational. Hypothetical market failure scenarios may suit an academic seminar but fall well below the probative threshold for antitrust intervention.
Most critically, the Illumina enforcement action places at risk a key element of well-functioning innovation ecosystems. Economies of scale and network effects lead technology markets to converge on a handful of leading platforms, which then often outsource research and development by funding and sometimes acquiring smaller firms that develop complementary technologies. This symbiotic relationship encourages entry and benefits consumers by bringing new products to market as efficiently as possible.
If antitrust interventions based on regulatory fiat, rather than empirical analysis, disrupt settled expectations in the M&A market that innovations can be monetized through acquisition transactions by larger firms, venture capital may be unwilling to fund such startups in the first place. Independent development or an initial public offering are often not feasible exit options. It is likely that innovation will then retreat to the confines of large incumbents that can fund research internally but often execute it less effectively.
Wireless Communications: DOJ Takes Aim at Standard-Essential Patents
Wireless communications stand at the heart of the global transition to a 5G-enabled “Internet of Things” that will transform business models and unlock efficiencies in myriad industries. It is therefore of paramount importance that policy actions in this sector rest on a rigorous economic basis. Unfortunately, a recent policy shift proposed by the U.S. Department of Justice’s (DOJ) Antitrust Division does not meet this standard.
In December 2021, the Antitrust Division released a draft policy statement that would largely bar owners of standard-essential patents from seeking injunctions against infringers, which are usually large device manufacturers. These patents cover wireless functionalities that enable transformative solutions in myriad industries, ranging from communications to transportation to health care. A handful of U.S. and European firms lead in wireless chip design and rely on patent licensing to disseminate technology to device manufacturers and to fund billions of dollars in research and development. The result is a technology ecosystem that has enjoyed continuous innovation, widespread user adoption, and declining quality-adjusted prices.
Rather than promoting competition or innovation, the proposed policy would simply transfer wealth from firms that develop new technologies at great cost and risk to firms that prefer to use those technologies at no cost at all. This does not benefit anyone other than device manufacturers that already capture the largest portion of economic value in the smartphone supply chain.
From international trade to antitrust to patent policy, the administration’s actions imply little appreciation for the property rights and contractual infrastructure that support real-world innovation markets. In particular, the administration’s policies endanger the intellectual-property rights and monetization pathways that support market incentives to invest in the development and commercialization of transformative technologies.
This creates an inviting vacuum for strategic rivals that are vigorously pursuing leadership positions in global technology markets. In industries that stand at the heart of the knowledge economy—life sciences, genomics, and wireless communications—the administration is on a counterproductive trajectory that overlooks the business realities of technology markets and threatens to push capital away from the entrepreneurs that drive a robust innovation ecosystem. It is time to reverse course.
The Federal Trade Commission (FTC) on Dec. 2 filed an administrative complaint to block the vertical merger between Nvidia Corp., a graphics chip supplier, and Arm Ltd., a computing-processor designer. The press release accompanying the complaint stresses the allegation that “the combined firm would have the means and incentive to stifle innovative next-generation technologies, including those used to run datacenters and driver-assistance systems in cars.” According to the FTC:
Because Arm’s technology is a critical input that enables competition between Nvidia and its competitors in several markets, the complaint alleges that the proposed merger would give Nvidia the ability and incentive to use its control of this technology to undermine its competitors, reducing competition and ultimately resulting in reduced product quality, reduced innovation, higher prices, and less choice, harming the millions of Americans who benefit from Arm-based products[.]
Assuming the merger proposal is not dropped (it also faces tough sledding in the European Union and the United Kingdom), findings of fact developed at the FTC administrative trial scheduled to begin next August will shed light on the robustness of the complaint’s allegations. Without waiting that long, however, and without commenting on the FTC’s theory of competitive harm, it is useful to take stock of the substantial efficiencies that may be associated with the merger, which can be gleaned from the public record. (The following discussion draws primarily on four sources, see here, here, here, and here.)
The Proposed Merger and Its Efficiencies
Arm has been a key player in the development of next-generation processors for the better part of the last 30 years. Arm-based processors can be found in most mobile devices, from mobile phones and tablets to some computers. Their ubiquity stems from their power, efficiency, high speed, and low cost. Part of this low cost comes from Arm’s licensing scheme, whereby Arm itself does not produce or deliver any semiconductors. Rather, it licenses its intellectual property to other businesses, allowing those businesses great freedom of implementation in return for zero manufacturing risk for Arm. This means that neither consumers nor businesses can buy an Arm processor to put into their computers, and there is no such thing as an Arm-branded processor. Companies use Arm’s technology to develop, refine, and manufacture their own processors.
Nvidia, also a long-time player in the microprocessor space, takes a decidedly different approach to the semiconductor market, manufacturing and selling its devices to end users and businesses alike. Nvidia graphics cards (GPUs) are integrated into various computing machines, from consumer laptops to data-center servers, and all carry Nvidia branding. This approach places significantly greater manufacturing risk on Nvidia but allows for significantly greater control over the integration and operation of its products. Since Nvidia undertakes development of optimization and compatibility in-house, it can ensure that its GPU technology works similarly across devices, a step that Arm does not take.
Additionally, there are two ways in which outside companies may interact with Arm’s IP. The first involves buying the right to produce a stock processor and modifying it to suit the business’ needs. This is the less expensive option and allows businesses to undertake the bare minimum of research and development to make their product work. Arm supplies them with the specifications to manufacture the processor, but the optimization and compatibility testing is the responsibility of the end business.
The second avenue is to purchase what is known as an “architectural license,” which gives the business rights to the underlying processor technology and coding language, but not a processor design. In those cases, the end business designs a processor from scratch, optimizing and integrating as it goes along to make sure the processor is a perfect fit for its device. While this integration generally leads to better results for the consumer, this method requires significantly higher research-and-development costs, leading to higher prices for the device.
The second avenue enables businesses to significantly advance the capabilities of their processors beyond what is achievable through a stock Arm design. Since Arm generally focuses on CPU technology, the integration of the additional components that make the computer work—like the motherboard, hard drives, and GPU—is left to the business. In many cases, these components are pieced together from various sources and may be poorly integrated, leading to lower-powered machines with inferior battery life.
However, businesses like Apple and Samsung have taken advantage of architectural licenses to use Arm processor technology and fully integrate all necessary components to work together seamlessly. This can improve battery life, speed, and efficiency in ways that off-the-shelf components are not capable of achieving. This fully integrated system, called a system on chip (SoC) design, advances computing far beyond Arm’s current offerings and presents a significant competitive threat to the processor market.
Given these circumstances, vertical integration of Arm with Nvidia may present both significant efficiencies and new competition in processor markets. Nvidia, with its expertise in manufacturing and designing integrated systems, may benefit from bringing Arm’s processor design in-house. It could save on licensing costs and use the extra capital to bring fully integrated off-the-shelf SoC designs to the mass market. This could reduce the cost of SoC implementation for computer manufacturers, reduce the time spent designing new computers, and bring the price of computers and mobile devices down for consumers.
Additionally, integration with Nvidia would allow Arm to keep pace with the wave of innovation from Apple and Samsung, among others. Those companies are making significant strides in the mobile-computing market, designing smaller, faster, and more energy-efficient processors that can be put into just about any form factor. Arm is significantly behind the curve when looking toward the next generation of processing technology. Integrating with Nvidia may be what the company needs to become competitive in the years to come.
One argument against allowing the merger to be completed is that Arm is a critical trading partner with nearly every processor manufacturer in the market, including Nvidia. Up to this point, Arm has not been owned by a single manufacturer and has not had an incentive to prioritize working with one manufacturer over another. Should the merger go through, Nvidia would own Arm, including the IP used by other companies, leading to concern from the FTC and other international regulators that Nvidia will be able to foreclose rivals from critical IP.
There is a strong counterargument, however, that Nvidia would be going against its own interest if it seeks to foreclose the market. Arm-based processors have become a dominant processor technology in recent years, integrated into 90% of the mobile-device market and nearly 34% of the overall computing market. This guaranteed revenue stream is a gold mine for the company, amounting to nearly $2 billion annually.
Closing the door to this revenue stream by revoking access to Arm’s IP would surely come back to bite the newly merged company. Foreclosing IP would have the effect of raising prices and reducing the quantity of processors in the market, but it also would likely force the market to shift away from Arm-based processors over time. Arm already has been forced to reduce the cost of licensing its technology in recent years in order to stave off competition from open-source chip designs that are available without a license. Anything that harmed the overall computing market would hurt consumers, businesses, and the newly merged company alike. Denying IP to the broader market would likely not pass an internal cost-benefit analysis for the merged entity.
We do not express an opinion on the ultimate antitrust merits of the Arm-Nvidia vertical merger. We note, however, that vertical mergers are typically procompetitive. Furthermore, information in the public record about the proposed consolidation strongly suggests that it could generate substantial efficiencies that would enhance competition in markets for next-generation computers and mobile devices, in turn benefiting consumers. FTC theories of merger-related anticompetitive foreclosure (which at first blush appear somewhat counterintuitive) need to be scrutinized carefully in light of specific facts, and should be assessed with a jaundiced eye in light of the powerful efficiency arguments in favor of the Arm-Nvidia merger.
Recent antitrust forays on both sides of the Atlantic have unfortunate echoes of the oldie-but-baddie “efficiencies offense” that once plagued American and European merger analysis (and, more broadly, reflected a “big is bad” theory of antitrust). After a very short overview of the history of merger efficiencies analysis under American and European competition law, we briefly examine two current enforcement matters “on both sides of the pond” that impliedly give rise to such a concern. Those cases may regrettably foreshadow a move by enforcers to downplay the importance of efficiencies, if not openly reject them.
Background: The Grudging Acceptance of Merger Efficiencies
Starting in the 1980s, the promulgation of increasingly economically sophisticated merger guidelines in the United States led to the acceptance of efficiencies (albeit less than perfectly) as an important aspect of integrated merger analysis. Several practitioners have claimed, nevertheless, that “efficiencies are seldom credited and almost never influence the outcome of mergers that are otherwise deemed anticompetitive.” Commissioner Christine Wilson has argued that the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) still have work to do in “establish[ing] clear and reasonable expectations for what types of efficiency analysis will and will not pass muster.”
In short, although the actual weight enforcers accord to efficiency claims is a matter of debate, efficiency justifications are cognizable, subject to constraints, as a matter of U.S. and European Union merger-enforcement policy. Whether that will remain the case is, unfortunately, uncertain, given DOJ and FTC plans to revise merger guidelines, as well as EU talk of convergence with U.S. competition law.
Two Enforcement Matters with ‘Efficiencies Offense’ Overtones
Two Facebook-related matters currently before competition enforcers—one in the United States and one in the United Kingdom—have implications for the possible revival of an antitrust “efficiencies offense” as a “respectable” element of antitrust policy. (I use the term Facebook to reference both the platform company and its corporate parent, Meta.)
FTC v. Facebook
The FTC’s 2020 federal district court monopolization complaint against Facebook, still at the motion-to-dismiss stage with respect to the amended complaint (see here for an overview of the initial complaint and the judge’s dismissal of it), rests substantially on claims that Facebook’s acquisitions of Instagram and WhatsApp harmed competition. As Facebook points out in its recent reply brief supporting its motion to dismiss the FTC’s amended complaint, the FTC appears to be faulting merger-related efficiencies in critiquing those acquisitions. Specifically:
[The amended complaint] depends on the allegation that Facebook’s expansion of both Instagram and WhatsApp created a “protective ‘moat’” that made it harder for rivals to compete because Facebook operated these services at “scale” and made them attractive to consumers post-acquisition. . . . The FTC does not allege facts that, left on their own, Instagram and WhatsApp would be less expensive (both are free; Facebook made WhatsApp free); or that output would have been greater (their dramatic expansion at “scale” is the linchpin of the FTC’s “moat” theory); or that the products would be better in any specific way.
The FTC’s concerns about a scale-based, merger-related output expansion that benefited consumers and thereby allegedly enhanced Facebook’s market position eerily echo the commission’s concerns in Procter & Gamble that merger-related cost-reducing joint efficiencies in advertising had an anticompetitive “entrenchment” effect. Both positions, in essence, characterize output-increasing efficiencies as harmful to competition: in other words, as “efficiencies offenses.”
UK Competition and Markets Authority (CMA) v. Facebook
The CMA announced Dec. 1 that it had decided to retrospectively block Facebook’s completed 2020 acquisition of Giphy, which is “a company that provides social media and messaging platforms with animated GIF images that users can embed in posts and messages. . . . These platforms license the use of Giphy for its users.”
The CMA theorized that Facebook could harm competition by (1) restricting its competitors’ access to Giphy’s digital libraries; and (2) preventing Giphy from developing into a potential competitor to Facebook’s display-advertising business.
As a CapX analysis explains, the CMA’s theory of harm to competition, based on theoretical speculation, is problematic. First, a behavioral remedy short of divestiture, such as requiring Facebook to maintain open access to its gif libraries, would deal with the threat of restricted access. Indeed, Facebook promised at the time of the acquisition that Giphy would maintain its library and make it widely available. Second, “loss of a single, relatively small, potential competitor out of many cannot be counted as a significant loss for competition, since so many other potential and actual competitors remain.” Third, given the purely theoretical and questionable danger to future competition, the CMA “has blocked this deal on relatively speculative potential competition grounds.”
Apart from the weakness of the CMA’s case for harm to competition, the CMA appears to ignore a substantial potential dynamic integrative efficiency flowing from Facebook’s acquisition of Giphy. As David Teece explains:
Facebook’s acquisition of Giphy maintained Giphy’s assets and furthered its innovation in Facebook’s ecosystem, strengthening that ecosystem in competition with others; and via Giphy’s APIs, strengthening the ecosystems of other service providers as well.
There is no evidence that the CMA seriously took account of this integrative efficiency, which benefits consumers by offering them a richer experience from Facebook and its subsidiary Instagram, and which spurs competing ecosystems to enhance their offerings to consumers as well. This is a failure to properly account for an efficiency. Moreover, to the extent that the CMA viewed these integrative benefits as somehow anticompetitive (because they enhanced Facebook’s competitive position), the improvement of Facebook’s ecosystem could have been deemed a type of “efficiencies offense.”
Are the Facebook Cases Merely Random Straws in the Wind?
At first blush, it might seem that too much is being read into the apparent slighting of efficiencies in the two current Facebook cases. Nevertheless, recent policy rhetoric suggests that economic efficiencies arguments (whose status was tenuous at enforcement agencies to begin with) may actually be viewed as “offensive” by the new breed of enforcers.
In her Sept. 22 policy statement on “Vision and Priorities for the FTC,” Chair Lina Khan advocated focusing on the possible competitive harm flowing from actions of “gatekeepers and dominant middlemen,” and from “one-sided [vertical] contract provisions” that are “imposed by dominant firms.” No suggestion can be found in the statement that such vertical relationships often confer substantial benefits on consumers. This hints at a new campaign by the FTC against vertical restraints (as opposed to an emphasis on clearly welfare-inimical conduct) that could discourage a wide range of efficiency-producing contracts.
Chair Khan also sponsored the FTC’s July 2021 rescission of its Section 5 Policy Statement on Unfair Methods of Competition, which had emphasized the primacy of consumer welfare as the guiding principle underlying FTC antitrust enforcement. A willingness to set aside (or place a lower priority on) consumer welfare considerations suggests a readiness to ignore efficiency justifications that benefit consumers.
The statement by the FTC majority . . . notes that the 2020 Vertical Merger Guidelines had improperly contravened the Clayton Act’s language with its approach to efficiencies, which are not recognized by the statute as a defense to an unlawful merger. The majority statement explains that the guidelines adopted a particularly flawed economic theory regarding purported pro-competitive benefits of mergers, despite having no basis of support in the law or market reality.
Also noteworthy is Khan’s seeming interest (found in her writings here, here, and here) in reviving Robinson-Patman Act enforcement. What’s worse, President Joe Biden’s July 2021 Executive Order on Competition explicitly endorses FTC investigation of “retailers’ practices on the conditions of competition in the food industries, including any practices that may violate [the] Robinson-Patman Act” (emphasis added). Those troubling statements from the administration ignore the widespread scholarly disdain for Robinson-Patman, which is almost unanimously viewed as an attack on efficiencies in distribution. For example, in recommending the act’s repeal in 2007, the congressionally established Antitrust Modernization Commission stressed that the act “protects competitors against competition and punishes the very price discounting and innovation and distribution methods that the antitrust laws otherwise encourage.”
Recent straws in the wind suggest that an anti-efficiencies hay pile is in the works. Although antitrust agencies have not yet officially rejected the consideration of efficiencies, nor endorsed an “efficiencies offense,” the signs are troubling. Newly minted agency leaders’ skepticism toward antitrust economics, combined with their de-emphasis of the consumer welfare standard and efficiencies (at least in the merger context), suggest that even strongly grounded efficiency explanations may be summarily rejected at the agency level. In foreign jurisdictions, where efficiencies are even less well-established, and enforcement based on mere theory (as opposed to empiricism) is more widely accepted, the outlook for efficiencies stories appears to be no better.
One powerful factor, however, should continue to constrain the anti-efficiencies movement, at least in the United States: the federal courts. As demonstrated most recently in the 9th U.S. Circuit Court of Appeals’ FTC v. Qualcomm decision, American courts remain committed to insisting on empirical support for theories of harm and on seriously considering business justifications for allegedly suspect contractual provisions. (The role of foreign courts in curbing prosecutorial excesses not grounded in economics, and in weighing efficiencies, depends upon the jurisdiction, but in general such courts are far less of a constraint on enforcers than American tribunals.)
While the DOJ and FTC (and, perhaps to a lesser extent, foreign enforcers) will have to keep the judiciary in mind in deciding to bring enforcement actions, the denigration of efficiencies by the agencies still will have an unfortunate demonstration effect on the private sector. Given the cost (both in resources and in reputational capital) associated with antitrust investigations, and the inevitable discounting for the risk of projects caught up in such inquiries, a publicly proclaimed anti-efficiencies enforcement philosophy will do damage. On the margin, it will lead businesses to introduce fewer efficiency-seeking improvements that could be (wrongly) characterized as “strengthening” or “entrenching” market dominance. Such business decisions, in turn, will be welfare-inimical; they will deny consumers the benefit of efficiencies-driven product and service enhancements, and slow the rate of business innovation.
As such, it is to be hoped that, upon further reflection, U.S. and foreign competition enforcers will see the light and publicly proclaim that they will fully weigh efficiencies in analyzing business conduct. The “efficiencies offense” was a lousy tune. That “oldie-but-baddie” should not be replayed.
[Judge Douglas Ginsburg was invited to respond to the Beesley Lecture given by Andrea Coscelli, chief executive of the U.K. Competition and Markets Authority (CMA). Both the lecture and Judge Ginsburg’s response were broadcast by the BBC on Oct. 28, 2021. The text of Mr. Coscelli’s Beesley lecture is available on the CMA’s website. Judge Ginsburg’s response follows below.]
Thank you, Victoria, for the invitation to respond to Mr. Coscelli and his proposal for a legislatively founded Digital Markets Unit. Mr. Coscelli is one of the most talented, successful, and creative heads a competition agency has ever had. In the case of the DMU [ed., Digital Markets Unit], however, I think he has let hope triumph over experience and prudence. This is often the case with proposals for governmental reform: Indeed, it has a name, the Nirvana Fallacy, which comes from comparing the imperfectly functioning marketplace with the perfectly functioning government agency. Everything we know about the regulation of competition tells us the unintended consequences may dwarf the intended benefits and the result may be a less, not more, competitive economy. The precautionary principle counsels skepticism about such a major and inherently risky intervention.
Mr. Coscelli made a point in passing that highlights the difference in our perspectives: He said the SMS [ed., strategic market status] merger regime would entail “a more cautious standard of proof.” In our shared Anglo-American legal culture, a more cautious standard of proof means the government would intervene in fewer, not more, market activities; proof beyond a reasonable doubt in criminal cases is a more cautious standard than a mere preponderance of the evidence. I, too, urge caution, but of the traditional kind.
I will highlight five areas of concern with the DMU proposal.
I. Chilling Effects
The DMU’s ability to designate a firm as being of strategic market status—or SMS—will place a potential cloud over innovative activity in far more sectors than Mr. Coscelli could mention in his lecture. He views the DMU’s reach as limited to a small number of SMS-designated firms; and that may prove true, but there is nothing in the proposal limiting the DMU’s reach.
Indeed, the DMU’s authority to regulate digital markets is surely going to be difficult to confine. Almost every major retail activity or consumer-facing firm involves an increasingly significant digital component, particularly after the pandemic forced many more firms online. Deciding which firms the DMU should cover seems easy in theory, but will prove ever more difficult and cumbersome in practice as digital technology continues to evolve. For instance, now that money has gone digital, a bank is little more than a digital platform bringing together lenders (called depositors) and borrowers, much as Amazon brings together buyers and sellers; so, is every bank with market power and an entrenched position to be subject to rules and remedies laid down by the DMU as well as supervision by the bank regulators? Is Aldi in the crosshairs now that it has developed an online retail platform? Match.com, too? In short, the number of SMS firms will likely grow apace in the next few years.
II. SMS Designations Should Not Apply to the Whole Firm
The CMA’s proposal would apply each SMS designation firm-wide, even if the firm has market power in a single line of business. This will inhibit investment in further diversification and put an SMS firm at a competitive disadvantage across all its businesses.
Perhaps company-wide SMS designations could be justified if the unintended costs were balanced by expected benefits to consumers, but this will not likely be the case. First, there is little evidence linking consumer harm to lines of business in which large digital firms do not have market power. On the contrary, despite the discussion of Amazon’s supposed threat to competition, consumers enjoy lower prices from many more retailers because of the competitive pressure Amazon brings to bear upon them.
Second, the benefits Mr. Coscelli expects the economy to reap from faster government enforcement are, at best, a mixed blessing. The proposal, you see, reverses the usual legal norm, instead making interim relief the rule rather than the exception. If a firm appeals its SMS designation, then under the CMA’s proposal, the DMU’s SMS designations and pro-competition interventions, or PCIs, will not be stayed pending appeal, raising the prospect that a firm’s activities could be regulated for a significant period even though it was improperly designated. Even prevailing in the courts may be a Pyrrhic victory because opportunities will have slipped away. Making matters worse, the DMU’s designation of a firm as SMS will likely receive a high degree of judicial deference, so that errors may never be corrected.
III. The DMU Cannot Be Evidence-based Given its Goals and Objectives
The DMU’s stated goal is to “further the interests of consumers and citizens in digital markets by promoting competition and innovation.” The DMU’s objectives for developing codes of conduct are: fair trading, open choices, and trust and transparency. Fairness, openness, trust, and transparency are all concepts that are difficult to define and probably impossible to quantify. Therefore, I fear Mr. Coscelli’s aspiration that the DMU will be an evidence-based, tailored, and predictable regime seems unrealistic. The CMA’s idea of “an evidence-based regime” seems destined to rely mostly upon qualitative conjecture about the potential for the code of conduct to set “rules of the game” that encourage fair trading, open choices, trust, and transparency. Even if the DMU commits to considering empirical evidence at every step of its process, these fuzzy, qualitative objectives will allow it to come to virtually any conclusion about how a firm should be regulated.
Implementing those broad goals also throws into relief the inevitable tensions among them. Some potential conflicts between DMU’s objectives for developing codes of conduct are clear from the EU’s experience. For example, one of the things DMU has considered already is stronger protection for personal data. The EU’s experience with the GDPR shows that data protection is costly and, like any costly requirement, tends to advantage incumbents and thereby discourage new entry. In other words, greater data protections may come at the expense of start-ups or other new entrants and the contribution they would otherwise have made to competition, undermining open choices in the name of data transparency.
Another example of tension is clear from the distinction between Apple’s iOS and Google’s Android ecosystems. They take different approaches to the trade-off between data privacy and flexibility in app development. Apple emphasizes consumer privacy at the expense of allowing developers flexibility in their design choices and offers its products at higher prices. Android devices have fewer consumer-data protections but allow app developers greater freedom to design their apps to satisfy users and are offered at lower prices. The case of Epic Games v. Apple put on display the purportedly pro-competitive arguments the DMU could use to justify shutting down Apple’s “walled garden,” whereas the EU’s GDPR would cut against Google’s open ecosystem with limited consumer protections. Apple’s model encourages consumer trust and adoption of a single, transparent model for app development, but Google’s model encourages app developers to choose from a broader array of design and payment options and allows consumers to choose between the options; no matter how the DMU designs its code of conduct, it will be creating winners and losers at the cost of either “open choices” or “trust and transparency.” As experience teaches, it is simply not possible for an agency with multiple goals to serve them all at the same time. The result is an unreviewable discretion to choose among them ad hoc.
Finally, notice that none of the DMU’s objectives—fair trading, open choices, and trust and transparency—revolves around quantitative evidence; at bottom, these goals are not amenable to the kind of rigor Mr. Coscelli hopes for.
IV. Speed of Proposals
Mr. Coscelli has emphasized the slow pace of competition law matters; while I empathize, surely forcing merging parties to prove a negative and truncating their due process rights is not the answer.
As I mentioned earlier, it seems a more cautious standard of proof to Mr. Coscelli is one in which an SMS firm’s proposal to acquire another firm is presumed, or all but presumed, to be anticompetitive and unlawful. That is, the DMU would block the transaction unless the firms can prove their deal would not be anticompetitive—an extremely difficult task. The most self-serving version of the CMA’s proposal would require it to prove only that the merger poses a “realistic prospect” of lessening competition, which is vague, but may in practice be well below a 50% chance. Proving that the merged entity does not harm competition will still require a predictive forward-looking assessment with inherent uncertainty, but the CMA wants the costs of uncertainty placed upon firms, rather than itself. Given the inherent uncertainty in merger analysis, the CMA’s proposal would impose an unprecedented burden of proof on merging parties.
But it is not only merging parties the CMA would deprive of due process; the DMU’s so-called pro-competition interventions, or PCIs, SMS designations, and code-of-conduct requirements generally would not be stayed pending appeal. Further, an SMS firm could overturn the CMA’s designation only if it could overcome substantial deference to the DMU’s fact-finding. It is difficult to discern, then, the difference between agency decisions and final orders.
The DMU would not have to show or even assert an extraordinary need for immediate relief. This is the opposite of current practice in every jurisdiction with which I am familiar. Interim orders should take immediate effect only in exceptional circumstances, when there would otherwise be significant and irreversible harm to consumers, not in the ordinary course of agency decision making.
V. Antitrust Is Not Always the Answer
Competition law remedies are sometimes poorly matched to the problems a government is trying to correct. Mr. Coscelli discusses the possibility of strong interventions, such as forcing the separation of a platform from its participation in retail markets; for example, the DMU could order Amazon to spin off its online business selling and shipping its own brand of products. Such powerful remedies can be a sledgehammer; consider forced data sharing or interoperability to make it easier for new competitors to enter. For example, if Apple’s App Store is required to host all apps submitted to it in the interest of consumer choice, then Apple loses its ability to screen for security, privacy, and other consumer benefits, as its refusal to deal is its only way to prevent participation in its store. Further, it is not clear consumers want Apple’s store to change; indeed, many prefer Apple products because of their enhanced security.
Forced data sharing would also be problematic; the hiQ v. LinkedIn case in the United States should serve as a cautionary tale. The trial court granted a preliminary injunction forcing LinkedIn to allow hiQ to scrape its users’ profiles while the suit was ongoing. LinkedIn ultimately won the suit because it did not have market power, much less a monopoly, in any relevant market. The court concluded each theory of anticompetitive conduct was implausible, but meanwhile LinkedIn had been forced to allow hiQ to scrape its data for an extended period before the final decision. There is no simple mechanism to “unshare” the data now that LinkedIn has prevailed. This type of case could be common under the CMA proposal because the DMU’s orders will go into immediate effect.
There is potentially much redeeming power in the Digital Regulation Co-operation Forum as Mr. Coscelli described it, but I take a different lesson from this admirable attempt to coordinate across agencies: Perhaps it is time to look beyond antitrust to solve problems that are not based upon market power. As the DRCF highlights, there are multiple agencies with overlapping authority in the digital market space. ICO and Ofcom each have authority to take action against a firm that disseminates fake news or false advertisements. Mr. Coscelli says it would be too cumbersome to take down individual bad actors, but, if so, then the solution is to adopt broader consumer protection rules, not apply an ill-fitting set of competition law rules. For example, the U.K. could change its notice-and-takedown rules to subject platforms to strict liability if they host fake news, even without knowledge that they are doing so, or perhaps only if they are negligent in discharging their obligation to police against it.
Alternatively, the government could shrink the amount of time platforms have to take down information; France gives platforms only about an hour to remove harmful information. That sort of solution does not raise the same prospect of broadly chilling market activity, but still addresses one of the concerns Mr. Coscelli raises with digital markets.
In sum, although Mr. Coscelli is of course correct that competition authorities and governments worldwide are considering whether to adopt broad reforms to their competition laws, the case against broadening remains strong. Instead of relying upon the self-corrective potential of markets, which is admittedly sometimes slower than anyone would like, the CMA assumes markets need regulation until firms prove otherwise. Although clearly well-intentioned, the DMU proposal is in too many respects not fit for the task of protecting competition in digital markets; at worst, it will inhibit innovation in digital markets to the point of driving startups and other innovators out of the U.K.
In recent years, a growing chorus of voices has argued that existing merger rules fail to apprehend competitively significant mergers, either because they fall below existing merger-filing thresholds or because they affect innovation in ways that are purportedly ignored.
These fears are particularly acute in the pharmaceutical and tech industries, where several high-profile academic articles and reports claim to have identified important gaps in current merger-enforcement rules, particularly with respect to acquisitions involving nascent and potential competitors (here, here, and here, among many others).
Such fears have led activists, lawmakers, and enforcers to call for tougher rules, including the introduction of more stringent merger-filing thresholds and other substantive changes, such as the inversion of the burden of proof when authorities review mergers and acquisitions involving digital platforms.
However, as we discuss in a recent working paper—forthcoming in the Missouri Law Review and available on SSRN—these proposals tend to overlook the important tradeoffs that would ensue from attempts to decrease the number of false positives under existing merger rules and thresholds.
The paper draws from two key strands of economic literature that are routinely overlooked (or summarily dismissed) by critics of the status quo.
For a start, antitrust enforcement is not costless. In the case of merger enforcement, not only is it expensive for agencies to detect anticompetitive deals but, more importantly, overbearing rules may deter beneficial merger activity that creates value for consumers.
Second, critics tend to overlook the possibility that incumbents’ superior managerial or other capabilities (i.e., what made them successful in the first place) makes them the ideal acquisition partners for entrepreneurs and startup investors looking to sell.
The result is a body of economic literature that focuses almost entirely on hypothetical social costs, while ignoring the redeeming benefits of corporate acquisitions, as well as the social cost of enforcement.
One of the most significant allegations leveled against large tech firms is that their very presence in a market may hinder investments, entry, and innovation, creating what some have called a “kill zone.” The strongest expression in the economic literature of this idea of a kill zone stems from a working paper by Sai Krishna Kamepalli, Raghuram Rajan, and Luigi Zingales.
The paper makes two important claims, one theoretical and one empirical. From a theoretical standpoint, the authors argue that the prospect of an acquisition by a dominant platform deters consumers from joining rival platforms, and that this, in turn, hampers the growth of these rivals. The authors then test a similar hypothesis empirically. They find that acquisitions by a dominant platform—such as Google or Facebook—decrease investment levels and venture capital deals in markets that are “similar” to that of the target firm.
But both findings are problematic. For a start, Zingales and his co-authors’ theoretical model is premised on questionable assumptions about the way in which competition develops in the digital space. The first is that early adopters of new platforms—called “techies” in the authors’ parlance—face high switching costs because of their desire to learn these platforms in detail. As an initial matter, it would appear facially contradictory that “techies” are both the group with the highest switching costs and the group that switches the most. The authors further assume that “techies” would incur lower adoption costs if they remained on the incumbent platform and waited for the rival platform to be acquired.
Unfortunately, while these key behavioral assumptions drive the results of the theoretical model, the paper presents no evidence to support their presence in real-world settings. In that sense, the authors commit the same error as previous theoretical work concerning externalities, which has tended to overestimate their frequency.
Second, the empirical analysis put forward in the paper is unreliable for policymaking purposes. The authors notably find that:
[N]ormalized VC investments in start-ups in the same space as the company acquired by Google and Facebook drop by over 40% and the number of deals falls by over 20% in the three years following an acquisition.
However, the results of this study are derived from the analysis of only nine transactions. The study also fails to clearly show that firms in the treatment and control groups are qualitatively similar. In a nutshell, the study compares industry acquisitions exceeding $500 million to Facebook and Google’s acquisitions that exceed that amount. This does not tell us whether the mergers in both groups involved target companies with similar valuations or similar levels of maturity. This does not necessarily invalidate the results, but it does suggest that policymakers should be circumspect in interpreting those results.
Finally, the paper fails to demonstrate evidence that existing antitrust regimes fail to achieve an optimal error-cost balance. The central problem is that the paper has indeterminate welfare implications. For instance, as the authors note, the declines in investment in spaces adjacent to the incumbent platforms occurred during a time of rapidly rising venture capital investment, both in terms of the number of deals and dollars invested. It is entirely plausible that venture capital merely shifted to other sectors.
Put differently, on its own terms, the evidence merely suggests that acquisitions by Google and Facebook affected the direction of innovation, not its overall rate. And there is little to suggest that this shift was suboptimal, from a welfare standpoint.
In short, as the authors themselves conclude: “[i]t would be premature to draw any policy conclusion on antitrust enforcement based solely on our model and our limited evidence.”
Mergers and Potential Competition
Scholars have also posited more direct effects from acquisitions of startups or nascent companies by incumbent technology market firms.
Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.
However, these antitrust theories of harm suffer from several important flaws. They rest upon several restrictive assumptions that are not certain to occur in real-world settings. Most are premised on the notion that, in a given market, monopoly profits generally exceed joint duopoly profits. This allegedly makes it profitable, and mutually advantageous, for an incumbent to protect its monopoly position by preemptively acquiring potential rivals.
Accordingly, under these theories, anticompetitive mergers are only possible when the acquired rival could effectively challenge the incumbent. But these are, of course, only potential challengers; there is no guarantee that any one of them could or would mount a viable competitive threat.
Less obviously, it must be the case that the rival can hope to share only duopoly profits, as opposed to completely overthrowing the incumbent or surpassing it with a significantly larger share of the market. Where competition is “for the market” itself, monopoly maintenance would fail to explain a rival’s decision to sell. Because there would be no asymmetry between the expected profits of the incumbent and the rival, monopoly maintenance alone would not give rise to mutually advantageous deals.
Second, potential competition does not always increase consumer welfare. Indeed, while the presence of potential competitors might increase price competition, it can also have supply-side effects that cut in the opposite direction.
For example, as Nobel laureate Joseph Stiglitz observed, a monopolist threatened by potential competition may invest in socially wasteful R&D efforts or entry-deterrence mechanisms, and it may operate at below-optimal scale in anticipation of future competitive entry.
There are also pragmatic objections. Analyzing a merger’s effect on potential competition would compel antitrust authorities and courts to make increasingly speculative assessments concerning the counterfactual setting of proposed acquisitions.
In simple terms, it is far easier to determine whether a merger between McDonald’s and Burger King would lead to increased hamburger prices in the short run than it is to determine whether a gaming platform like Steam or the Epic Games Store might someday compete with video-streaming or music-subscription platforms like Netflix or Spotify. It is not that the above models are necessarily wrong, but rather that applying them to practical cases would require antitrust enforcers to estimate mostly unknowable factors.
Finally, the real test for regulators is not just whether they can identify possibly anticompetitive mergers, but whether they can do so in a cost-effective manner. Whether it is desirable to implement a given legal test is not simply a function of its accuracy, the cost to administer it, and the respective costs of false positives and false negatives. It also critically depends on how prevalent the conduct is that adjudicators would be seeking to foreclose.
Consider two hypothetical settings. Imagine there are 10,000 tech mergers in a given year, of which either 1,000 or 2,500 are anticompetitive (the remainder are procompetitive or competitively neutral). Suppose that authorities can either attempt to identify anticompetitive mergers with 75% accuracy, or perform no test at all—i.e., letting all mergers go through unchallenged.
If there are 1,000 anticompetitive mergers, applying the test would result in 7,500 correct decisions and 2,500 incorrect ones (2,250 false positives and 250 false negatives). Doing nothing would lead to 9,000 correct decisions and 1,000 false negatives. If the number of anticompetitive deals were 2,500, applying the test would lead to the same number of incorrect decisions as not applying it (1,875 false positives and 625 false negatives, versus 2,500 false negatives). The advantage would tilt toward applying the test if anticompetitive mergers were even more widespread.
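The arithmetic of this hypothetical can be checked with a short script. It is a sketch of the example above, not a model from the paper: the `error_counts` helper and the assumption that the test errs symmetrically (at rate 1 − accuracy) on both the anticompetitive and benign populations are ours, for illustration.

```python
def error_counts(total: int, anticompetitive: int, accuracy: float):
    """Count errors from screening every merger with an imperfect test.

    Assumes the test misclassifies each population at rate (1 - accuracy):
    benign deals wrongly blocked are false positives; anticompetitive deals
    wrongly cleared are false negatives.
    """
    benign = total - anticompetitive
    false_positives = round((1 - accuracy) * benign)
    false_negatives = round((1 - accuracy) * anticompetitive)
    return false_positives, false_negatives

# Scenario 1: 1,000 of 10,000 mergers are anticompetitive.
fp, fn = error_counts(10_000, 1_000, 0.75)
print(fp, fn, fp + fn)   # 2250 250 2500 -> worse than the 1,000 errors of "do nothing"

# Scenario 2: 2,500 of 10,000 mergers are anticompetitive.
fp, fn = error_counts(10_000, 2_500, 0.75)
print(fp, fn, fp + fn)   # 1875 625 2500 -> a wash versus "do nothing" (2,500 false negatives)
```

As the output shows, with a 75%-accurate test the screening regime only pulls ahead of laissez-faire once anticompetitive deals make up more than a quarter of all mergers, which is the point of the example.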
This hypothetical example holds a simple lesson for policymakers: the rarer the conduct that they are attempting to identify, the more accurate their identification method must be, and the more costly false negatives must be relative to false positives.
As discussed below, current empirical evidence does not suggest that anticompetitive mergers of this sort are particularly widespread, nor does it offer accurate heuristics to detect the ones that are. Finally, there is little sense that the cost of false negatives significantly outweighs that of false positives. In short, there is currently little evidence to suggest that tougher enforcement would benefit consumers.
Killer acquisitions are, effectively, a subset of the “potential competitor” mergers discussed in the previous section. As defined by Colleen Cunningham, Florian Ederer, and Song Ma, they are those deals where “an incumbent firm may acquire an innovative target and terminate the development of the target’s innovations to preempt future competition.”
Cunningham, Ederer, and Ma’s highly influential paper on killer acquisitions has been responsible for much of the recent renewed interest in the effect that mergers exert on innovation. The authors studied thousands of pharmaceutical mergers and concluded that between 5.3% and 7.4% of them were killer acquisitions. As they write:
[W]e empirically compare development probabilities of overlapping acquisitions, which are, in our theory, motivated by a mix of killer and development intentions, and non-overlapping acquisitions, which are motivated only by development intentions. We find an increase in acquisition probability and a decrease in post-acquisition development for overlapping acquisitions and interpret that as evidence for killer acquisitions. […]
[W]e find that projects acquired by an incumbent with an overlapping drug are 23.4% less likely to have continued development activity compared to drugs acquired by non-overlapping incumbents.
From a policy standpoint, the question is what weight antitrust authorities, courts, and legislators should give to these findings. Stated differently, does the paper provide sufficient evidence to warrant reform of existing merger-filing thresholds and review standards? There are several factors counseling that policymakers should proceed with caution.
To start, the study’s industry-specific methodology means it may not be a useful guide to understanding acquisitions in other industries, such as the tech sector.
Second, even if one assumes that the findings of Cunningham, et al., are correct and apply with equal force in the tech sector (as some official reports have), it remains unclear whether the 5.3–7.4% of mergers they describe warrant a departure from the status quo.
Antitrust enforcers operate under uncertainty. The critical policy question is thus whether this subset of anticompetitive deals can be identified ex-ante. If not, is there a heuristic that would enable enforcers to identify more of these anticompetitive deals without producing excessive false positives?
The authors focus on the effect that overlapping R&D pipelines have on project discontinuations. In the case of non-overlapping mergers, acquired projects continue 17.5% of the time, while this number is 13.4% when there are overlapping pipelines. The authors argue that this gap is evidence of killer acquisitions. But this misses the bigger picture: under the authors’ own numbers and definition of a “killer acquisition,” the vast majority of overlapping acquisitions are perfectly benign; prohibiting them would thus carry significant social costs.
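The 23.4% figure quoted earlier is a relative, not absolute, reduction computed from these two continuation rates. A short check (using only the numbers reported above) makes the magnitude concrete:

```python
# Reproducing the headline figure from the continuation rates quoted above:
# 17.5% of acquired projects continue after non-overlapping deals,
# 13.4% after overlapping deals.
non_overlapping = 0.175
overlapping = 0.134

relative_drop = (non_overlapping - overlapping) / non_overlapping
print(f"Relative drop in continued development: {relative_drop:.1%}")

# The absolute gap is only 4.1 percentage points on an already small base:
# the overwhelming majority of acquired projects are discontinued in
# both overlapping and non-overlapping deals.
```

The relative drop works out to roughly 23.4%, matching the paper's headline figure, while the absolute gap between the two groups is just over four percentage points.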
Third, there are several problems with describing this kind of behavior as harmful. Indeed, Cunningham, et al., acknowledge that this kind of behavior could increase innovation by boosting the returns to innovation.
And even if one ignores incentives to innovate, product discontinuations can improve consumer welfare. This question ultimately boils down to identifying the counterfactual to a merger. As John Yun writes:
For instance, an acquisition that results in a discontinued product is not per se evidence of either consumer harm or benefit. The answer involves comparing the counterfactual world without the acquisition with the world with the acquisition. The comparison includes potential efficiencies that were gained from the acquisition, including integration of intellectual property, the reduction of transaction costs, economies of scope, and better allocation of skilled labor.
One of the reasons R&D project discontinuation may be beneficial is simply cost savings. R&D is expensive. Pharmaceutical firms spend up to 27.8% of their annual revenue on R&D. Developing a new drug has an estimated median cost of $985.3 million. Cost-cutting—notably as it concerns R&D—is thus a critical part of pharmaceutical (as well as tech) companies’ businesses. As a report by McKinsey concludes:
The recent boom in M&A in the pharma industry is partly the result of attempts to address short-term productivity challenges. An acquiring or merging company typically designs organization-wide integration programs to capture synergies, especially in costs. Such programs usually take up to three years to complete and deliver results.
Maximizing the efficiency of production labor and equipment is one important way top-quartile drugmakers break out of the pack. Their rates of operational-equipment effectiveness are more than twice those of bottom-quartile companies (Exhibit 1), and when we looked closely we found that processes account for two-thirds of the difference.
In short, pharmaceutical companies do not just compete along innovation-related parameters, though these are obviously important, but also on more traditional grounds such as cost-rationalization. Accordingly, as the above reports suggest, pharmaceutical mergers are often about applying an incumbent’s superior managerial efficiency to the acquired firm’s assets through operation of the market for corporate control.
This cost-cutting (and superior project selection) ultimately enables companies to offer lower prices, thereby benefiting consumers and increasing their incentives to invest in R&D in the first place by making successfully developed drugs more profitable.
In that sense, Henry Manne’s seminal work relating to mergers and the market for corporate control sheds at least as much light on pharmaceutical (and tech) mergers as the killer acquisitions literature. And yet, it is hardly ever mentioned in modern economic literature on this topic.
While Colleen Cunningham and her co-authors do not entirely ignore these considerations, as we discuss in our paper, their arguments for dismissing them are far from watertight.
A natural extension of the killer acquisitions work is to question whether mergers of this sort also take place in the tech industry. Interest in this question is notably driven by the central role that digital markets currently occupy in competition-policy discussion, but also by the significant number of startup acquisitions that take place in the tech industry. However, existing studies provide scant evidence that killer acquisitions are a common occurrence in these markets.
This is not surprising. Unlike in the pharmaceutical industry—where drugs need to go through a lengthy and visible regulatory pipeline before they can be sold—incumbents in digital industries will likely struggle to identify their closest rivals and to prevent firms from rapidly pivoting to seize new commercial opportunities. As a result, the basic conditions for killer acquisitions to take place (i.e., firms knowing they are in a position to share monopoly profits) are less likely to be present; it also would be harder to design research methods to detect these mergers.
The empirical literature on killer acquisitions in the tech sector is still in its infancy. But, as things stand, no study directly examines whether killer acquisitions actually take place in digital industries (i.e., whether post-merger project discontinuations are more common in overlapping than in non-overlapping tech mergers). This includes the studies by Axel Gautier & Joe Lamesch and by Elena Argentesi and her co-authors, which merely show that product discontinuations are common after an acquisition by a big tech company.
To summarize, while studies of this sort might suggest that the clearance of certain mergers was not optimal, they hardly provide a sufficient basis on which to argue that enforcement should be tightened.
The reason for this is simple. The fact that some anticompetitive mergers may have escaped scrutiny and/or condemnation is never a sufficient basis to tighten rules. For that, it is also necessary to factor in the administrative costs of increased enforcement, as well as potential false convictions to which it might give rise. As things stand, economic research on killer acquisitions in the tech sector does not warrant tougher antitrust enforcement, though it does show the need for further empirical research on the topic.
Many proposed merger-enforcement reforms risk throwing the baby out with the bathwater. Mergers are largely beneficial to society (here, here and here); anticompetitive ones are rare; and there is little way, at the margin, to tell good from bad. To put it mildly, there is a precious baby that needs to be preserved and relatively little bathwater to throw out.
Take the fulcrum of policy debates that is the pharmaceutical industry. It is not hard to point to pharmaceutical mergers (or long-term agreements) that have revolutionized patient outcomes. Most recently, Pfizer and BioNTech’s efforts to successfully market an mRNA vaccine against COVID-19 offer a case in point.
The deal struck by both firms could naïvely be construed as bearing hallmarks of a killer acquisition or an anticompetitive agreement (long-term agreements can easily fall into either of these categories). Pfizer was a powerful incumbent in the vaccine industry; BioNTech threatened to disrupt the industry with new technology; and the deal likely caused Pfizer to forgo some independent R&D efforts. And yet, it also led to the first approved COVID-19 vaccine and groundbreaking advances in vaccine technology.
Of course, the counterfactual is unclear, and the market might be more competitive absent the deal, just as there might be only one approved mRNA vaccine today instead of two—we simply do not know. More importantly, this counterfactual was even less knowable at the time of the deal. And much the same could be said about countless other pharmaceutical mergers.
The key policy question is how authorities should handle this uncertainty. Critics of the status quo argue that current rules and thresholds leave certain anticompetitive deals unchallenged. But these calls for tougher enforcement fail to satisfy the requirements of the error-cost framework. Critics have so far failed to show that, on balance, mergers harm social welfare—even overlapping ones or mergers between potential competitors—and they have yet to suggest alternative institutional arrangements that would improve social welfare.
In other words, they mistakenly analyze purported false negatives of merger-enforcement regimes in isolation. In doing so, they ignore how measures that aim to reduce such judicial errors may lead to other errors, as well as higher enforcement costs. In short, they paint a world where policy decisions involve facile tradeoffs, and this undermines their policy recommendations.
Given these significant limitations, this body of academic research should be met with an appropriate degree of caution. For all the criticism it has faced, the current merger-review system is mostly a resounding success. It is administrable, predictable, and timely. Yet it also eliminates a vast majority of judicial errors: even its critics concede that false negatives make up only a tiny fraction of decisions. Policymakers must decide whether the benefits from catching the very few arguably anticompetitive mergers that currently escape prosecution outweigh the significant costs that are required to achieve this goal. There is currently little evidence to suggest that this is, indeed, the case.
The language of the federal antitrust laws is extremely general. Over more than a century, the federal courts have applied common-law techniques to construe this general language to provide guidance to the private sector as to what does or does not run afoul of the law. The interpretive process has been fraught with some uncertainty, as judicial approaches to antitrust analysis have changed several times over the past century. Nevertheless, until very recently, judges and enforcers had converged toward relying on a consumer welfare standard as the touchstone for antitrust evaluations (see my antitrust primer here, for an overview).
While imperfect and subject to potential error in application—a problem of legal interpretation generally—the consumer welfare principle has worked rather well as the focus both for antitrust-enforcement guidance and judicial decision-making. The general stability and predictability of antitrust under a consumer welfare framework has advanced the rule of law. It has given businesses sufficient information to plan transactions in a manner likely to avoid antitrust liability. It thereby has cabined uncertainty and increased the probability that private parties would enter welfare-enhancing commercial arrangements, to the benefit of society.
In a very thoughtful 2017 speech, then Acting Assistant Attorney General for Antitrust Andrew Finch commented on the importance of the rule of law to principled antitrust enforcement. He noted:
[H]ow do we administer the antitrust laws more rationally, accurately, expeditiously, and efficiently? … Law enforcement requires stability and continuity both in rules and in their application to specific cases.
Indeed, stability and continuity in enforcement are fundamental to the rule of law. The rule of law is about notice and reliance. When it is impossible to make reasonable predictions about how a law will be applied, or what the legal consequences of conduct will be, these important values are diminished. To call our antitrust regime a “rule of law” regime, we must enforce the law as written and as interpreted by the courts and advance change with careful thought.
The reliance fostered by stability and continuity has obvious economic benefits. Businesses invest, not only in innovation but in facilities, marketing, and personnel, and they do so based on the economic and legal environment they expect to face.
Of course, we want businesses to make those investments—and shape their overall conduct—in accordance with the antitrust laws. But to do so, they need to be able to rely on future application of those laws being largely consistent with their expectations. An antitrust enforcement regime with frequent changes is one that businesses cannot plan for, or one that they will plan for by avoiding certain kinds of investments.
That is certainly not to say there has not been positive change in the antitrust laws in the past, or that we would have been better off without those changes. U.S. antitrust law has been refined, and occasionally recalibrated, with the courts playing their appropriate interpretive role. And enforcers must always be on the watch for new or evolving threats to competition. As markets evolve and products develop over time, our analysis adapts. But as those changes occur, we pursue reliability and consistency in application in the antitrust laws as much as possible.
Indeed, we have enjoyed remarkable continuity and consensus for many years. Antitrust law in the U.S. has not been a “paradox” for quite some time, but rather a stable and valuable law enforcement regime with appropriately widespread support.
Unfortunately, policy decisions taken by the new Federal Trade Commission (FTC) leadership in recent weeks have rejected antitrust continuity and consensus. They have injected substantial uncertainty into the application of competition-law enforcement by the FTC. This abrupt change in emphasis undermines the rule of law and threatens to reduce economic welfare.
As of now, the FTC’s departure from the rule of law has been notable in two areas:
Its rejection of previous guidance on the agency’s “unfair methods of competition” authority, the FTC’s primary non-merger-related enforcement tool; and
Its new advice rejecting time limits for the review of generally routine proposed mergers.
In addition, potential FTC rulemakings directed at “unfair methods of competition” would, if pursued, prove highly problematic.
Rescission of the Unfair Methods of Competition Policy Statement
The bipartisan UMC Policy Statement was originally supported by all three Democratic commissioners, including then-Chairwoman Edith Ramirez. The policy statement generally respected and promoted the rule of law by emphasizing that, in applying the facially broad “unfair methods of competition” (UMC) language, the FTC would be guided by the well-established principles of the antitrust rule of reason (including considering any associated cognizable efficiencies and business justifications) and the consumer welfare standard. The FTC also explained that it would not apply “standalone” Section 5 theories to conduct that would violate the Sherman or Clayton Acts.
In short, the UMC Policy Statement sent a strong signal that the commission would apply UMC in a manner fully consistent with accepted and well-understood antitrust policy principles. As in the past, the vast bulk of FTC Section 5 prosecutions would be brought against conduct that violated the core antitrust laws. Standalone Section 5 cases would be directed solely at those few practices that harmed consumer welfare and competition, but somehow fell into a narrow crack in the basic antitrust statutes (such as, perhaps, “invitations to collude” that lack plausible efficiency justifications). Although the UMC Statement did not answer all questions regarding what specific practices would justify standalone UMC challenges, it substantially limited business uncertainty by bringing Section 5 within the boundaries of settled antitrust doctrine.
The FTC’s announcement of the UMC Policy Statement rescission unhelpfully proclaimed that “the time is right for the Commission to rethink its approach and to recommit to its mandate to police unfair methods of competition even if they are outside the ambit of the Sherman or Clayton Acts.” As a dissenting statement by Commissioner Christine S. Wilson warned, consumers would be harmed by the commission’s decision to prioritize other unnamed interests. And as Commissioner Noah Joshua Phillips stressed in his dissent, the end result would be reduced guidance and greater uncertainty.
In sum, by suddenly leaving private parties in the dark as to how to conform themselves to Section 5’s UMC requirements, the FTC’s rescission offends the rule of law.
New Guidance to Parties Considering Mergers
For decades, parties proposing mergers that are subject to statutory Hart-Scott-Rodino (HSR) Act pre-merger notification requirements have operated under the understanding that:
The FTC and U.S. Justice Department (DOJ) will routinely grant “early termination” of review (before the end of the initial 30-day statutory review period) to those transactions posing no plausible competitive threat; and
An enforcement agency’s decision not to request more detailed documents (“second requests”) after an initial 30-day pre-merger review effectively serves as an antitrust “green light” for the proposed acquisition to proceed.
Those understandings, though not statutorily mandated, have significantly reduced antitrust uncertainty and related costs in the planning of routine merger transactions. The rule of law has been advanced through an effective assurance that business combinations that appear presumptively lawful will not be the target of future government legal harassment. This has advanced efficiency in government, as well; it is an optimal use of resources for DOJ and the FTC to focus exclusively on those proposed mergers that present a substantial potential threat to consumer welfare.
Two recent FTC pronouncements (one in tandem with DOJ), however, have generated great uncertainty by disavowing (at least temporarily) those two welfare-promoting review policies. Joined by DOJ, the FTC on Feb. 4 announced that the agencies would temporarily suspend early terminations, citing an “unprecedented volume of filings” and a transition to new leadership. More than six months later, this “temporary” suspension remains in effect.
Citing “capacity constraints” and a “tidal wave of merger filings,” the FTC subsequently published an Aug. 3 blog post that effectively abrogated the 30-day “green lighting” of mergers not subject to a second request. It announced that it was sending “warning letters” to firms reminding them that FTC investigations remain open after the initial 30-day period, and that “[c]ompanies that choose to proceed with transactions that have not been fully investigated are doing so at their own risk.”
The FTC’s actions interject unwarranted uncertainty into merger planning and undermine the rule of law. Preventing early termination on transactions that have been approved routinely not only imposes additional costs on business; it hints that some transactions might be subject to novel theories of liability that fall outside the antitrust consensus. As one commentary on the warning letters observed:
[T]he FTC may challenge deals that “threaten to reduce competition and harm consumers, workers, and honest businesses.” Adding in harm to both “workers and honest businesses” implies that the FTC may be considering more ways that transactions can have an adverse impact other than just harm to competition and consumers [citation omitted].
Because consensus antitrust merger analysis centers on consumer welfare, not the protection of labor or business interests, any suggestion that the FTC may be extending its reach to these new areas is inconsistent with established legal principles and generates new business-planning risks.
More generally, the Aug. 3 FTC “blog post could be viewed as an attempt to modify the temporal framework of the HSR Act”—in effect, an effort to displace an implicit statutory understanding in favor of an agency diktat, contrary to the rule of law. Commissioner Wilson sees the blog post as a means to keep investigations open indefinitely and, thus, an attack on the decades-old HSR framework for handling most merger reviews in an expeditious fashion (see here). Commissioner Phillips is concerned about an attempt to chill legal M&A transactions across the board, particularly unfortunate when there is no reason to conclude that particular transactions are illegal (see here).
Finally, the historical record raises serious questions about the “resource constraint” justification for the FTC’s new merger review policies:
Through the end of July 2021, more than 2,900 transactions were reported to the FTC. It is not clear, however, whether these record-breaking HSR filing numbers have led (or will lead) to more deals being investigated. Historically, only about 13 percent of all deals reported are investigated in some fashion, and roughly 3 percent of all deals reported receive a more thorough, substantive review through the issuance of a Second Request. Even if more deals are being reported, for the majority of transactions, the HSR process is purely administrative, raising no antitrust concerns, and, theoretically, uses few, if any, agency resources. [Citations omitted.]
Proposed FTC Competition Rulemakings
The new FTC leadership is strongly considering competition rulemakings. As I explained in a recent Truth on the Market post, such rulemakings would fail a cost-benefit test. They raise serious legal risks for the commission and could impose wasted resource costs on the FTC and on private parties. More significantly, they would raise two very serious economic policy concerns:
First, competition rules would generate higher error costs than adjudications. Adjudications cabin error costs by allowing for case-specific analysis of likely competitive harms and procompetitive benefits. In contrast, competition rules inherently would be overbroad and would suffer from a very high rate of false positives. By characterizing certain practices as inherently anticompetitive without allowing for consideration of case-specific facts bearing on actual competitive effects, findings of rule violations inevitably would condemn some (perhaps many) efficient arrangements.
Second, competition rules would undermine the rule of law and thereby reduce economic welfare. FTC-only competition rules could lead to disparate legal treatment of a firm’s business practices, depending upon whether the FTC or the U.S. Justice Department was the investigating agency. Also, economic efficiency gains could be lost due to the chilling of aggressive efficiency-seeking business arrangements in those sectors subject to rules. [Emphasis added.]
In short, common law antitrust adjudication, focused on the consumer welfare standard, has done a good job of promoting a vibrant competitive economy in an efficient fashion. FTC competition rulemaking would not.
Recent FTC actions have undermined consensus antitrust-enforcement standards and have departed from established merger-review procedures with respect to seemingly uncontroversial consolidations. Those decisions have imposed costly uncertainty on the business sector and are thereby likely to disincentivize efficiency-seeking arrangements. What’s more, by implicitly rejecting consensus antitrust principles, they denigrate the primacy of the rule of law in antitrust enforcement. The FTC’s pursuit of competition rulemaking would further damage the rule of law by imposing arbitrary strictures that ignore matter-specific considerations bearing on the justifications for particular business decisions.
Fortunately, these are early days in the Biden administration. The problematic initial policy decisions delineated in this comment could be reversed based on further reflection and deliberation within the commission. Chairwoman Lina Khan and her fellow Democratic commissioners would benefit by consulting more closely with Commissioners Wilson and Phillips to reach agreement on substantive and procedural enforcement policies that are better tailored to promote consumer welfare and enhance vibrant competition. Such policies would benefit the U.S. economy in a manner consistent with the rule of law.
[This post is authored by Nicolas Petit, the Joint Chair in Competition Law at the Department of Law at the European University Institute in Fiesole, Italy, and at EUI’s Robert Schuman Centre for Advanced Studies. He is also an invited professor at the College of Europe in Bruges.]
A lot of water has gone under the bridge since my book was published last year. To close this symposium, I thought I would discuss the new phase of antitrust statutorification taking place before our eyes. In the United States, Congress is working on five antitrust bills that propose to subject platforms to stringent obligations, including a ban on mergers and acquisitions, required data portability and interoperability, and line-of-business restrictions. In the European Union (EU), lawmakers are examining the proposed Digital Markets Act (“DMA”), which sets out a complicated regulatory system for digital “gatekeepers,” with per se behavioral limitations on their freedom over contractual terms, technological design, monetization, and ecosystem leadership.
Proponents of legislative reform on both sides of the Atlantic appear to share the common view that ongoing antitrust adjudication efforts are both instrumental and irrelevant. They are instrumental because government (or plaintiff) losses build the evidence needed to support the view that antitrust doctrine is exceedingly conservative, and that legal reform is needed. Two weeks ago, antitrust reform activists ran to Twitter to point out that the U.S. District Court dismissal of the Federal Trade Commission’s (FTC) complaint against Facebook was one more piece of evidence supporting the view that the antitrust pendulum needed to swing. They are also instrumental because government (or plaintiff) wins will support scaling antitrust enforcement in the marginal case by adoption of governmental regulation. In the EU, antitrust cases follow each other almost as the night the day, lending credence to the view that regulation will bring much-needed coordination and economies of scale.
But both instrumentalities are, at the end of the line, irrelevant, because they lead to the same conclusion: legislative reform is long overdue. With this in mind, the logic of lawmakers is that they need not await the courts, and they can advance with haste and confidence toward the promulgation of new antitrust statutes.
The antitrust reform process now unfolding raises serious questions. The issue is not legal reform in itself. There is no suggestion here that statutory reform is necessarily inferior, and no correlative reification of the judge-made-law method. Legislative intervention can occur for good reason, as when it breaks judicial inertia caused by ideological logjam.
The issue is rather one of precipitation. There is a lot of learning in the cases. The point, simply put, is that a supplementary court-legislative dialogue would yield additional information—or what Guido Calabresi has called “starting points” for regulation—that premature legislative intervention is sweeping under the rug. This issue is important because specification errors (see Doug Melamed’s symposium piece on this) in statutory legislation are not uncommon. Feedback from court cases creates a factual record that will often be missing when lawmakers act too precipitously.
Moreover, a court-legislative iteration is useful when the issues in discussion are cross-cutting. The digital economy brings an abundance of them. As tech analyst Ben Evans has observed, data-sharing obligations raise tradeoffs between contestability and privacy. Chapter VI of my book shows that breakups of social networks or search engines might promote rivalry and, at the same time, increase the leverage of advertisers to extract more user data and conduct more targeted advertising. In such cases, Calabresi said, judges who know the legal topography are well-placed to elicit the preferences of society. He added that they are better placed than government agencies’ officials or delegated experts, who often attend to the immediate problem without the big picture in mind (all the more when officials are denied opportunities to engage with civil society and the press, as per the policy announced by the new FTC leadership).
Of course, there are three objections to this. The first consists of arguing that statutes are needed now because courts are too slow to deal with problems. The argument is not dissimilar to Frank Easterbrook’s concerns about irreversible harms to the economy, though with a tweak. Where Easterbrook’s concern was one of ossification of Type I errors due to stare decisis, the concern here is one of entrenchment of durable monopoly power in the digital sector due to Type II errors. The concern, however, fails the test of evidence. The available data in both the United States and Europe shows unprecedented vitality in the digital sector. Venture capital funding cruises at historical heights, fueling new firm entry, business creation, and economic dynamism in the U.S. and EU digital sectors, topping all other industries. Unless we require higher levels of entry from digital markets than from other industries—or discount the social value of entry in the digital sector—this should give us reason to push pause on lawmaking efforts.
The second objection is that an incremental process of updating the law through the courts creates intolerable uncertainty. But this objection, too, is unconvincing at best. One may ask which brings more uncertainty: an abrupt legislative change of the law after decades of legal stability, or an experimental process of judicial renovation.
Besides, ad hoc statutes, such as the ones in discussion, are likely to quickly and dramatically pose the problem of their own legal obsolescence. Detailed and technical statutes specify rights, requirements, and procedures that often do not stand the test of time. For example, the DMA likely captures Windows as a core platform service subject to gatekeeping. But is the market power of Microsoft over Windows still relevant today, and isn’t it constrained in effect by existing antitrust rules? In antitrust, vagueness in critical statutory terms allows room for change. The best way to give meaning to buzzwords like “smart” or “future-proof” regulation consists of building in first principles, not in creating discretionary opportunities for permanent adaptation of the law. In reality, it is hard to see how the methods of future-proof regulation currently discussed in the EU create less uncertainty than a court process.
The third objection is that we do not need more information, because we now benefit from economic knowledge showing that existing antitrust laws are too permissive of anticompetitive business conduct. But is the economic literature actually supportive of stricter rules against defendants than the rule-of-reason framework that applies in many unilateral conduct cases and in merger law? The answer is surely no. The theoretical economic literature has come a long way in the past 50 years. Of particular interest are works on network externalities, switching costs, and multi-sided markets. But the progress achieved in the economic understanding of markets is more descriptive than normative.
Take the celebrated multi-sided market theory. The main contribution of the theory is its advice to decision-makers to take the periscope out, so as to consider all possible welfare tradeoffs, not to be more or less defendant-friendly. Payment cards provide a good example. Economic research suggests that any antitrust or regulatory intervention on prices affects tradeoffs between, and payoffs to, cardholders and merchants, cardholders and cash users, cardholders and banks, and banks and card systems. Equally numerous tradeoffs arise in many sectors of the digital economy, like ridesharing, targeted advertisement, or social networks. Multi-sided market theory renders these tradeoffs visible. But it does not come with a clear recipe for how to solve them. For that, one needs to follow first principles. A system of measurement that is flexible and welfare-based helps, as Kelly Fayne observed in her critical symposium piece on the book.
Another example might be worth considering. The theory of increasing returns suggests that markets subject to network effects tend to converge around the selection of a single technology standard, and it is not a given that the selected technology is the best one. One policy implication is that social planners might be justified in keeping a second option on the table. As I discuss in Chapter V of my book, the theory may support an M&A ban against platforms in tipped markets, on the conjecture that the assets of fringe firms might be efficiently repositioned to offer product differentiation to consumers. But the theory of increasing returns does not say under what conditions we can know that the selected technology is suboptimal. Moreover, if the selected technology is the optimal one, or if the suboptimal technology quickly obsolesces, are policy efforts at all needed?
Last, as Bo Heiden’s thought-provoking symposium piece argues, it is not a given that antitrust enforcement of rivalry in markets is the best way to keep an alternative technology alive, let alone to supply the innovation needed to deliver economic prosperity. Government procurement, science and technology policy, and intellectual-property policy might be equally effective (note that the fathers of the theory, like Brian Arthur or Paul David, have been notably silent on antitrust reform).
There are, of course, exceptions to the limited normative content of modern economic theory. In some areas, economic theory is more predictive of consumer harms, like in relation to algorithmic collusion, interlocking directorates, or “killer” acquisitions. But the applications are discrete and industry-specific. All are insufficient to declare that the antitrust apparatus is dated and that it requires a full overhaul. When modern economic research turns normative, it is often far more subtle in its implications than some wild policy claims derived from it. For example, the emerging studies that claim to identify broad patterns of rising market power in the economy in no way lead to an implication that there are no pro-competitive mergers.
Similarly, the empirical picture of digital markets is incomplete. The past few years have seen a proliferation of qualitative research reports on industry structure in the digital sectors. Most suggest that industry concentration has risen, particularly in the digital sector. As with any research exercise, these reports’ findings deserve to be subject to critical examination before they can be deemed supportive of a claim of “sufficient experience.” Moreover, there is no reason to subject these reports to a lower standard of accountability on grounds that they have often been drafted by experts upon demand from antitrust agencies. After all, we academics are ethically obliged to be at least equally exacting with policy-based research as we are with science-based research.
Now, with healthy skepticism at the back of one’s mind, one can see immediately that the findings of expert reports to date have tended to downplay behavioral observations that counterbalance findings of monopoly power—such as intense business anxiety, technological innovation, and demand-expansion investments in digital markets. This was, I believe, the main takeaway from Chapter IV of my book. And less than six months ago, The Economist ran its leading story on the new marketplace reality of “Tech’s Big Dust-Up.”
Similarly, the expert reports did not really question the real possibility of competition for the purchase of regulation. As in the classic George Stigler paper, where the railroad industry fought motor-trucking competition with state regulation, the businesses that stand to lose most from the digital transformation might be rationally jockeying to convince lawmakers that not all business models are equal, and to steer regulation toward specific business models. Again, though we do not know how much weight to give this issue, there are signs that a coalition of large news corporations and the publishing oligopoly is behind many antitrust initiatives against digital firms.
As is now clear from these few lines, my cautionary note against antitrust statutorification might be more relevant to the U.S. market. In the EU, sunk investments have been made, expectations have been created, and regulation has now become inevitable. The United States, however, has a chance to get this right. Court cases are the way to go. And unlike what the popular coverage suggests, the recent district court dismissal of the FTC case by no means ruled out the applicability of U.S. antitrust laws to Facebook’s alleged killer acquisitions. On the contrary, the ruling actually contains an invitation to rework a rushed complaint. Perhaps, as Shane Greenstein observed in his retrospective analysis of the U.S. Microsoft case, we would all benefit if we studied more carefully the learning that lies in the cases, rather than hasten to produce instant antitrust analysis on Twitter that fits within 280 characters.
The executive order will have some positive effects on economic welfare, to the extent it succeeds in lifting artificial barriers to competition that harm consumers and workers—such as allowing direct sales of hearing aids in drug stores and helping to eliminate unnecessary occupational licensing restrictions, to name just two of several examples.
But it will likely have substantial negative effects on economic welfare as well. Many aspects of the order appear to emphasize new regulation—such as Net Neutrality requirements that may reduce investment in broadband by internet service providers—and to impose new regulatory requirements on airlines, pharmaceutical companies, digital platforms, banks, railways, shipping, and meat packers, among others. Arbitrarily imposing new rules in these areas, without a cost-benefit appraisal and a showing of market failure, threatens to reduce innovation and slow economic growth, hurting producers and consumers. (A careful review of specific regulatory proposals may shed greater light on the justifications for particular regulations.)
Antitrust-related proposals to challenge previously cleared mergers, and to impose new antitrust rulemaking, are likely to raise costly business uncertainty, to the detriment of businesses and consumers. They are a recipe for slower economic growth, not for vibrant competition.
An underlying problem with the order is that it is based on the false premise that competition has diminished significantly in recent decades and that “big is bad.” Economic analysis found in the February 2020 Economic Report of the President, and in other economic studies, debunks this flawed assumption.
In short, the order commits the fundamental mistake of proposing intrusive regulatory solutions for a largely nonexistent problem. Competitive issues are best handled through traditional well-accepted antitrust analysis, which centers on promoting consumer welfare and on weighing procompetitive efficiencies against anticompetitive harm on a case-by-case basis. This approach:
Deals effectively with serious competitive problems; while at the same time
Cabins error costs by taking into account all economically relevant considerations on a case-specific basis.
Rather than using an executive order to direct very specific regulatory approaches without a strong economic and factual basis, the Biden administration would have been better served by raising a host of competitive issues that merit possible study and investigation by expert agencies. Such an approach would have avoided imposing the costs of unwarranted regulation that unfortunately are likely to stem from the new order.
Finally, the order’s call for new regulations and for the elimination of various existing legal policies will spawn matter-specific legal challenges and may, in many cases, not succeed in court. This will impose unnecessary business uncertainty, in addition to the public and private resources wasted on litigation.