Archives For Mergers & Acquisitions

Early August is an unpredictable time in the policy world. With Congress about to go on recess, one never knows if there will be a mad rush to get something done, or what that something may be. And it is, for many, a month of vacations and light schedules. Short staffing may delay work or allow mistakes to be made. And then there’s Alex Jones’s lawyer – for whom the best that can be said is that he will forever be better known as “Alex Jones’s lawyer” than by his given name. The Roundup this week is brought to you by the letter unpredictability.

This week’s headline is antitrust labor issues. The week started off with news that a senior Republican staffer is leaving the Senate Judiciary Committee – a staffer who has reportedly been instrumental in drafting the American Innovation and Choice Online Act (AICOA) – to join Amazon as a lobbyist. As Politico suggests, this “move is particularly notable because the legislation he was working on – [AICOA] – is losing steam.” More on that in a moment. 

The next bit of antitrust labor news is word of an FTC Inspector General report stemming from an audit of the FTC’s use of unpaid consultants and experts. As reported by Leah Nylen, the OIG report found that this practice, used in prior administrations but expanded substantially under current FTC Chair Lina Khan, “creat[es] potential legal and compliance risks, including conflicts of interest.” The report expressly notes that the “audit was not designed to determine whether unpaid consultant or experts were involved in activities prohibited by the federal policies … and [makes] no assertions on their involvement in those activities.” It then goes on to lay out various activities they were involved in that clearly violate federal policies. Oh my.

The big antitrust labor news of the week is, of course, that of Tim Wu’s quantum departure from his role as the White House’s competition czar. The story of his pending return to the ivory tower broke on Tuesday and spread fast to all corners. The next morning, the man himself reported that those reports were “greatly exaggerated.” He has not, however, said whether this means he’s sticking around in his current role for days, weeks, or months – though it bears note that the original report was merely that he would be returning to his teaching position “in the coming months.” One wonders whether his suggestion that he is not leaving is itself a great exaggeration.

Uncertainty over the fate of Wu evokes uncertainty over the state of AICOA – indeed, their fates could be intimately linked. Last week, around the time Wu might have made the decision to leave, it would have seemed AICOA was losing steam. With the Inflation Reduction Act taking all the Big Bill energy during the mad-dash to the August recess, even Senator Klobuchar (D-MN) was forced to admit that AICOA would not get a vote before the recess. Then came the report that Klobuchar had offered to amend the bill to address the concerns that Senator Brian Schatz (D-HI) and other Democratic senators have that AICOA could limit platforms’ content moderation practices. (Side note: Ashley Gold is simply killing this beat this week.) This is a remarkable change in stance for Klobuchar, who has steadfastly refused to consider such an amendment – almost certainly because she knows it would cost needed Republican support for the bill. One wonders how many Republicans will be on board after such an amendment is made.

Turning the page, the next day Politico reported that Senate Majority Leader Schumer (D-NY) plans, but also may not plan, to bring AICOA to the floor after the recess. His plans are either more or less clear than Tim Wu’s plans to leave his position. It seems likely that Schumer is supporting Klobuchar’s efforts to get votes for the bill, but his support for bringing it to the floor may yet be contingent. The Politico report suggests that Schumer’s office may have backed off from saying he plans to bring it to the floor – and may even have pushed to have a statement that he would bring it to the floor removed from prior reports.

So what’s going on with AICOA? I stand by my prior assessment that it’s dead. Actually, I think that it’s now worse than dead – it’s now a mere political football. Senator Manchin’s flip on Build Back Better has dimmed the prospects of any bipartisan bills moving forward. The fact that Klobuchar is buckling to Schatz’s demand to address the bill’s threat to content moderation – the only thing that really excited Republicans about the bill – suggests that the current maneuver is to put forth a partisan Big Tech bill that will not pass but that may win some voters’ hearts in November. 

This week’s FTC UMC Roundup ends with an FTC UMC question: Where’s the UMC in the Meta-Within challenge? Last week’s complaint alleges vanilla violations of Section 7 of the Clayton Act. While it mentions the FTC’s Section 5 authority, it does so in boilerplate language. The substantive bases alleged to satisfy the agency’s Section 13(b) burden to get a preliminary injunction against the merger all sound in the traditional language of mergers and Section 7 – that the effect of the merger “may be substantially to lessen competition, or tend to create a monopoly.”

This is interesting for a few reasons. Most notably, as many have noted (Ashley Gold again), the case is a real dog under traditional antitrust law. It’s hard to imagine the FTC not losing – likely at the PI stage and even more so at trial. If the case is so weak under traditional antitrust law, why not argue this case under non-traditional antitrust law? The FTC’s UMC authority is recognized to be broader than traditional antitrust law, precisely to enable the Commission to take action against anticompetitive conduct that falls outside the scope of traditional antitrust law.

Indeed, one of Chair Khan’s stated goals has been to explore the boundaries of the Commission’s UMC authority and to use it to reinvigorate antitrust enforcement. The expectation has been that this would come through the agency’s rulemaking authority – but the agency can develop new law through litigation just as much as through rulemaking. There is even a case, post-West Virginia v. EPA, that the case-by-case approach to expanding its UMC authority is a more viable path forward than to risk raising major questions through a rulemaking.

One wonders what the FTC’s calculation here is. It could simply be a case of boilerplate drafting. Perhaps there was some greater fight within the agency over how to draft the complaint. We know there was dissent from the staff over whether to bring the complaint at all – perhaps this left little time or energy to draft anything more than a standard complaint. Or, perhaps more cynically, the winning move is to lose on traditional antitrust grounds – and to use that as an example to demonstrate the FTC’s need to use its UMC authority in cases such as these.

The FTC UMC Roundup, part of the Truth on the Market FTC UMC Symposium, is a weekly roundup of news relating to the Federal Trade Commission’s antitrust and Unfair Methods of Competition authority. If you would like to receive this and other posts relating to these topics, subscribe to the RSS feed here. If you have news items you would like to suggest for inclusion, please email them to us at ghurwitz@laweconcenter.org and/or kfierro@laweconcenter.org.

Slow wage growth and rising inequality over the past few decades have pushed economists more and more toward the study of monopsony power—particularly firms’ monopsony power over workers. Antitrust policy has taken notice. For example, when the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) initiated the process of updating their merger guidelines, their request for information included questions about how they should respond to monopsony concerns, as distinct from monopoly concerns.

From a pure economic-theory perspective, there is no important distinction between monopsony power and monopoly power. If Armen is trading his apples in exchange for Ben’s bananas, we can call Armen the seller of apples or the buyer of bananas. As a matter of pure theory, the labels (buyer and seller) are arbitrary. Monopsony and monopoly are simply mirror images.

Some infer from this monopoly-monopsony symmetry that extending antitrust to monopsony power will be straightforward. As a practical matter of antitrust enforcement, however, it is not. The moment we get slightly less abstract and use the basic models that economists actually use, monopsony is not simply the mirror image of monopoly. The tools that antitrust economists use to identify market power differ in the two cases.

Monopsony Requires Studying Output

Suppose that the FTC and DOJ are considering a proposed merger. For simplicity, they know that the merger will generate efficiency gains (and they want to allow it) or market power (and they want to stop it) but not both. The challenge is to look at readily available data like prices and quantities to decide which it is. (Let’s ignore the ideal case that involves being able to estimate elasticities of demand and supply.)

In a monopoly case, if there are efficiency gains from a merger, the standard model has a clear prediction: the quantity sold in the output market will increase. An economist at the FTC or DOJ with sufficient data will be able to see (or estimate) the efficiencies directly in the output market. Efficiency gains result in either greater output at lower unit cost or else product-quality improvements that increase consumer demand. Because the merger lowers prices for consumers, the agencies (assume they apply the consumer welfare standard) will let the merger go through: consumers are better off.

In contrast, if the merger simply enhances monopoly power without efficiency gains, the quantity sold will decrease, either because the merging parties raise prices or because quality declines. Again, the empirical implication of the merger is seen directly in the market in question. Because the merger raises prices for consumers, the agencies (assume they apply the consumer welfare standard) will not let the merger go through: consumers are worse off. In both cases, you judge monopoly power by looking directly at the market that may or may not have monopoly power.
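These two predictions can be made concrete with a minimal linear-demand sketch. The functional forms and every number below are invented for illustration; they are not drawn from any actual merger data:

```python
# Stylized monopoly-side illustration: linear inverse demand P(Q) = a - b*Q
# and constant marginal cost mc. All parameters are made up for illustration.

def competitive_output(a, b, mc):
    # Price-taking firms produce where P = mc, so a - b*Q = mc.
    return (a - mc) / b

def monopoly_output(a, b, mc):
    # A monopolist produces where MR = mc, i.e. a - 2*b*Q = mc.
    return (a - mc) / (2 * b)

a, b, mc = 100.0, 1.0, 40.0
q_pre = competitive_output(a, b, mc)           # pre-merger benchmark: 60
q_efficiency = competitive_output(a, b, 30.0)  # merger lowers mc: 70
q_market_power = monopoly_output(a, b, mc)     # merger creates pricing power: 30

assert q_efficiency > q_pre    # efficiencies show up as MORE output
assert q_market_power < q_pre  # market power shows up as LESS output
```

Either way, the evidence appears directly in the market under scrutiny, which is what makes the monopoly case comparatively easy.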

Unfortunately, the monopsony case is more complicated. Ultimately, we can be certain of the effects of monopsony only by looking at the output market, not the input market where the monopsony power is claimed.

To see why, consider again a merger that generates either efficiency gains or market (now monopsony) power. A merger that creates monopsony power will necessarily reduce both the prices paid for and the quantities purchased of inputs like labor and materials. An overly eager FTC may see a lower quantity of input purchased and jump to the conclusion that the merger increased monopsony power. After all, monopsonies purchase fewer inputs than competitive firms.

Not so fast. Fewer input purchases may be because of efficiency gains. For example, if the efficiency gain arises from the elimination of redundancies in a hospital merger, the merged hospital will buy fewer inputs: it will hire fewer technicians and purchase fewer medical supplies. This may even reduce the wages of technicians or the price of medical supplies, even if the newly merged hospital is not exercising any market power to suppress wages.

The key point is that monopsony needs to be treated differently than monopoly. The antitrust agencies cannot simply look at the quantity of inputs purchased in the monopsony case as the flip side of the quantity sold in the monopoly case, because the efficiency-enhancing merger can look like the monopsony merger in terms of the level of inputs purchased.

How can the agencies differentiate efficiency-enhancing mergers from monopsony mergers? The easiest way may be for the agencies to look at the output market: an entirely different market than the one with the possibility of market power. Once we look at the output market, as we would do in a monopoly case, we have clear predictions. If the merger is efficiency-enhancing, there will be an increase in the output-market quantity. If the merger increases monopsony power, the firm perceives its marginal cost as higher than before the merger and will reduce output. 
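The asymmetry can be illustrated with another stylized sketch (again, illustrative linear functional forms and invented parameters): in both scenarios below the merged firm buys less labor, but output moves in opposite directions.

```python
# Case 1: efficiency-enhancing merger. The firm takes the wage w as given but
# faces downward-sloping output demand P(Q) = P0 - b*Q, with Q = A*L.
# Profit max: MR = P0 - 2*b*Q = w/A  =>  Q = (P0 - w/A) / (2*b).
def efficiency_case(A, P0=100.0, b=1.0, w=20.0):
    Q = (P0 - w / A) / (2 * b)
    return Q, Q / A  # (output, labor hired)

# Case 2: monopsony-creating merger. The firm takes the output price p as given
# but faces upward-sloping labor supply w(L) = w0 + s*L, with Q = A*L.
# Competitive hiring: p*A = w0 + s*L.
# Monopsony hiring:   p*A = w0 + 2*s*L (marginal factor cost exceeds the wage).
def labor_case(monopsony, A=1.0, p=60.0, w0=20.0, s=1.0):
    L = (p * A - w0) / ((2 * s) if monopsony else s)
    return A * L, L  # (output, labor hired)

Q0, L0 = efficiency_case(A=1.0)       # pre-merger: Q = 40, L = 40
Q1, L1 = efficiency_case(A=2.0)       # redundancies eliminated: Q = 45, L = 22.5
Qc, Lc = labor_case(monopsony=False)  # pre-merger: Q = 40, L = 40
Qm, Lm = labor_case(monopsony=True)   # monopsony power: Q = 20, L = 20

# The input market cannot distinguish the two mergers ...
assert L1 < L0 and Lm < Lc
# ... but the output market can.
assert Q1 > Q0 and Qm < Qc
```

In the efficiency case, labor falls because productivity (A) rises faster than output; in the monopsony case, labor falls because the firm deliberately restricts hiring to hold wages down. Only the output market separates the two.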

In short, as we look for how to apply antitrust to monopsony-power cases, the agencies and courts cannot look to the input market to differentiate them from efficiency-enhancing mergers; they must look at the output market. It is impossible to discuss monopsony power coherently without considering the output market.

In real-world cases, mergers will not necessarily be either strictly efficiency-enhancing or strictly monopsony-generating, but a blend of the two. Any rigorous consideration of merger effects must account for both and make some tradeoff between them. The question of how guidelines should address monopsony power is inextricably tied to the consideration of merger efficiencies, particularly given the point above that identifying and evaluating monopsony power will often depend on its effects in downstream markets.

This is just one complication that arises when we move from the purest of pure theory to slightly more applied models of monopoly and monopsony power. Geoffrey Manne, Dirk Auer, Eric Fruits, Lazar Radic, and I go through more of the complications in our comments submitted to the FTC and DOJ on updating the merger guidelines.

What Assumptions Make the Difference Between Monopoly and Monopsony?

Now that we have shown that monopsony and monopoly are different, how do we square this with the initial observation that it was arbitrary whether we say Armen has monopsony power over apples or monopoly power over bananas?

There are two differences between the standard monopoly and monopsony models. First, in a vast majority of models of monopsony power, the agent with the monopsony power is buying goods only to use them in production. They have a “derived demand” for some factors of production. That demand ties their buying decision to an output market. For monopoly power, the firm sells the goods, makes some money, and that’s the end of the story.

The second difference is that the standard monopoly model looks at one output good at a time. The standard factor-demand model uses two inputs, which introduces a tradeoff between, say, capital and labor. We could force monopoly to look like monopsony by assuming the merging parties each produce two different outputs, apples and bananas. An efficiency gain could favor apple production and hurt banana consumers. While this sort of substitution among outputs is often realistic, it is not the standard economic way of modeling an output market.

We will learn more in the coming weeks about the fate of the proposed American Innovation and Choice Online Act (AICOA), legislation sponsored by Sens. Amy Klobuchar (D-Minn.) and Chuck Grassley (R-Iowa) that would, among other things, prohibit “self-preferencing” by large digital platforms like Google, Amazon, Facebook, Apple, and Microsoft. But while the bill has already been subject to significant scrutiny, a crucially important topic has been absent from that debate: the measure’s likely effect on startup acquisitions. 

Of course, AICOA doesn’t directly restrict startup acquisitions, but the activities it would restrict most certainly do dramatically affect the incentives that drive many startup acquisitions. If a platform is prohibited from engaging in cross-platform integration of acquired technologies, or if it can’t monetize its purchase by prioritizing its own technology, it may lose the motivation to make a purchase in the first place.

This would be a significant loss. As Dirk Auer, Sam Bowman, and I discuss in a recent article in the Missouri Law Review, acquisitions are arguably the most important component in providing vitality to the overall venture ecosystem:  

Startups generally have two methods for achieving liquidity for their shareholders: IPOs or acquisitions. According to the latest data from Orrick and Crunchbase, between 2010 and 2018 there were 21,844 acquisitions of tech startups for a total deal value of $1.193 trillion. By comparison, according to data compiled by Jay R. Ritter, a professor at the University of Florida, there were 331 tech IPOs for a total market capitalization of $649.6 billion over the same period. As venture capitalist Scott Kupor said in his testimony during the FTC’s hearings on “Competition and Consumer Protection in the 21st Century,” “these large players play a significant role as acquirers of venture-backed startup companies, which is an important part of the overall health of the venture ecosystem.”

Moreover, acquisitions by large incumbents are known to provide a crucial channel for liquidity in the venture capital and startup communities: While at one time the source of the “liquidity events” required to yield sufficient returns to fuel venture capital was evenly divided between IPOs and mergers, “[t]oday that math is closer to about 80 percent M&A and about 20 percent IPOs—[with important implications for any] potential actions that [antitrust enforcers] might be considering with respect to the large platform players in this industry.” As investor and serial entrepreneur Leonard Speiser said recently, “if the DOJ starts going after tech companies for making acquisitions, venture investors will be much less likely to invest in new startups, thereby reducing competition in a far more harmful way.” (emphasis added)

Going after self-preferencing may have exactly the same harmful effect on venture investors and competition. 

It’s unclear exactly how the legislation would be applied in any given context (indeed, this uncertainty is one of the most significant problems with the bill, as the ABA Antitrust Section has argued at length). But AICOA is designed, at least in part, to keep large online platforms in their own lanes—to keep them from “leveraging their dominance” to compete against more politically favored competitors in ancillary markets. Indeed, while covered platforms potentially could defend against application of the law by demonstrating that self-preferencing is necessary to “maintain or substantially enhance the core functionality” of the service, no such defense exists for non-core (whatever that means…) functionality, the enhancement of which through self-preferencing is strictly off limits under AICOA.

As I have written (and so have many, many, many, many others), this is terrible policy on its face. But it is also likely to have significant, adverse, indirect consequences for startup acquisitions, given the enormous number of such acquisitions that are outside the covered platforms’ “core functionality.” 

Just take a quick look at a sample of the largest acquisitions made by Apple, Microsoft, Amazon, and Alphabet, for example. (These are screenshots of the first several acquisitions by size drawn from imperfect lists collected by Wikipedia, but for purposes of casual empiricism they are well-suited to give an idea of the diversity of acquisitions at issue):

Apple:

Microsoft:

Amazon:

Alphabet (Google):

Vanishingly few of these acquisitions go to the “core functionalities” of these platforms. Alphabet’s acquisitions, for example, involve (among many other things) cybersecurity; home automation; cloud computing; wearables, smart glasses, and AR hardware; GPS navigation software; communications security; satellite technology; and social gaming. Microsoft’s acquisitions include companies specializing in video games; social networking; software versioning; drawing software; cable television; cybersecurity; employee engagement; and e-commerce. The technologies and applications involved in acquisitions by Apple and Amazon are similarly varied.

Drilling down a bit, consider the companies Alphabet acquired and put to use in the service of Google Maps:

Which, if any, of these companies would Google have purchased if it knew it would be unable to prioritize Maps in its search results? Would Google have invested more than $1 billion in these companies—and likely significantly more in internal R&D to develop Maps—if it had to speculate whether it would be required (or even be able) to prove someday in the future that prioritizing Google Maps results would enhance its core functionality?

What about Xbox? As noted, AICOA’s terms aren’t perfectly clear, so I’m not certain it would apply to Xbox (is Xbox a “website, online or mobile application, operating system, digital assistant, or online service”?). Here are Microsoft’s video-gaming-related purchases:

The vast majority of these (and all of the acquisitions for which Wikipedia has purchase-price information, totaling some $80 billion of investment) involve video games, not the development of hardware or the functionality of the Xbox platform. Would Microsoft have made these investments if it knew it would be prohibited from prioritizing its own games or exclusively using data gleaned through these games to improve its platform? No one can say for certain, but, at the margin, it is absolutely certain that these self-preferencing bills would make such acquisitions less likely.

Perhaps the most obvious—and concerning—example of the problem arises in the context of Google’s Android platform. Google famously gives Android away for free, of course, and makes its operating system significantly open for bespoke use by all comers. In exchange, Google requires that implementers of the Android OS provide some modicum of favoritism to Google’s revenue-generating products, like Search. For all its uncertainty, there is no question that AICOA’s terms would prohibit this self-preferencing. Intentionally or not, it would thus prohibit the way in which Google monetizes Android and thus hopes to recoup some of the—literally—billions of dollars it has invested in the development and maintenance of Android. 

Here are Google’s Android-related acquisitions:

Would Google have bought Android in the first place (to say nothing of subsequent acquisitions and its massive ongoing investment in Android) if it had been foreclosed from adopting its preferred business model to monetize its investment? In the absence of Google bidding for these companies, would they have earned as much from other potential bidders? Would they even have come into existence at all?

Of course, AICOA wouldn’t preclude Google charging device makers for Android and thus raising the price of mobile devices. But that mechanism may not have been sufficient to support Google’s investment in Android, and it would certainly constrain its ability to compete. Even if rules like those proposed by AICOA didn’t undermine Google’s initial purchase of and investment in Android, it is manifestly unclear how forcing Google to adopt a business model that increases consumer prices and constrains its ability to compete head-to-head with Apple’s iOS ecosystem would benefit consumers. (This excellent series of posts—1, 2, 3, 4—by Dirk Auer on the European Commission’s misguided Android decision discusses in detail the significant costs of prohibiting self-preferencing on Android.)

There are innumerable further examples, as well. In all of these cases, it seems clear not only that an AICOA-like regime would diminish competition and reduce consumer welfare across important dimensions, but also that it would impoverish the startup ecosystem more broadly. 

And that may be an even bigger problem. Even if you think, in the abstract, that it would be better for “Big Tech” not to own these startups, there is a real danger that putting that presumption into force would drive down acquisition prices, kill at least some tech-startup exits, and ultimately imperil the initial financing of tech startups. It should go without saying that this would be a troubling outcome. Yet there is no evidence to suggest that AICOA’s proponents have even considered whether the presumed benefits of the bill would be worth this immense cost.

Federal Trade Commission (FTC) Chair Lina Khan missed the mark once again in her May 6 speech on merger policy, delivered at the annual meeting of the International Competition Network (ICN). At a time when the FTC and U.S. Justice Department (DOJ) are presumably evaluating responses to the agencies’ “request for information” on possible merger-guideline revisions (see here, for example), Khan’s recent remarks suggest a predetermination that merger policy must be “toughened” significantly to disincentivize a larger portion of mergers than under present guidance. A brief discussion of Khan’s substantively flawed remarks follows.

Discussion

Khan’s remarks begin with a favorable reference to the tendentious statement from President Joe Biden’s executive order on competition that “broad government inaction has allowed far too many markets to become uncompetitive, with consolidation and concentration now widespread across our economy, resulting in higher prices, lower wages, declining entrepreneurship, growing inequality, and a less vibrant democracy.” The claim that “government inaction” has enabled increased market concentration and reduced competition has been shown to be inaccurate, and therefore cannot serve as a defensible justification for a substantive change in antitrust policy. Accordingly, Khan’s statement that the executive order “underscores a deep mandate for change and a commitment to creating the enabling environment for reform” rests on foundations of sand.

Khan then shifts her narrative to a consideration of merger policy, stating:

Merger investigations invite us to make a set of predictive assessments, and for decades we have relied on models that generally assumed markets are self-correcting and that erroneous enforcement is more costly than erroneous non-enforcement. Both the experience of the U.S. antitrust agencies and a growing set of empirical research is showing that these assumptions appear to have been at odds with market realities.

Digital Markets

Khan argues, without explanation, that “the guidelines must better account for certain features of digital markets—including zero-price dynamics, the competitive significance of data, and the network externalities that can swiftly lead markets to tip.” She fails to make any showing that consumer welfare has been harmed by mergers involving digital markets, or that the “zero-price” feature is somehow troublesome. Moreover, the reference to “data” as being particularly significant to antitrust analysis appears to ignore research (see here) indicating there is an insufficient basis for having an antitrust presumption involving big data, and that big data (like R&D) may be associated with innovation, which enhances competitive vibrancy.

Khan also fails to note that network externalities are beneficial; when users are added to a digital platform, the platform’s value to other users increases (see here, for example). What’s more (see here), “gateways and multihoming can dissipate any monopoly power enjoyed by large networks[,] … provid[ing] another reason” why network effects may not raise competitive problems. In addition, the implicit notion that “tipping” is a particular problem is belied by the ability of new competitors to “knock off” supposed entrenched digital monopolists (think, for example, of Yahoo being displaced by Google, and Myspace being displaced by Facebook). Finally, a bit of regulatory humility is in order. Given the huge amount of consumer surplus generated by digital platforms (see here, for example), enforcers should be particularly cautious about avoiding more aggressive merger (and antitrust in general) policies that could detract from, rather than enhance, welfare.

Labor Markets

Khan argues that guidelines drafters should “incorporate new learning” embodied in “empirical research [that] has shown that labor markets are highly concentrated” and a “U.S. Treasury [report] recently estimating that a lack of competition may be costing workers up to 20% of their wages.” Unfortunately for Khan’s argument, these claims have been convincingly debunked (see here) in a new study by former FTC economist Julie Carlson (see here). As Carlson carefully explains, labor markets are not highly concentrated and labor-market power is largely due to market frictions (such as occupational licensing), rather than concentration. In a similar vein, a recent article by Richard Epstein stresses that heightened antitrust enforcement in labor markets would involve “high administrative and compliance costs to deal with a largely nonexistent threat.” Epstein points out:

[T]raditional forms of antitrust analysis can perfectly deal with labor markets. … What is truly needed is a close examination of the other impediments to labor, including the full range of anticompetitive laws dealing with minimum wage, overtime, family leave, anti-discrimination, and the panoply of labor union protections, where the gains to deregulation should be both immediate and large.

Nonhorizontal Mergers

Khan notes:

[W]e are looking to sharpen our insights on non-horizontal mergers, including deals that might be described as ecosystem-driven, concentric, or conglomerate. While the U.S. antitrust agencies energetically grappled with some of these dynamics during the era of industrial-era conglomerates in the 1960s and 70s, we must update that thinking for the current economy. We must examine how a range of strategies and effects, including extension strategies and portfolio effects, may warrant enforcement action.

Khan’s statement on non-horizontal mergers once again is fatally flawed.

With regard to vertical mergers (not specifically mentioned by Khan), the FTC abruptly withdrew, without explanation, its approval of the carefully crafted 2020 vertical-merger guidelines. That action offends the rule of law, creating unwarranted and costly business-sector confusion. Khan’s lack of specific reference to vertical mergers does nothing to solve this problem.

With regard to other nonhorizontal mergers, there is no sound economic basis to oppose mergers involving unrelated products. Threatening to do so would have no procompetitive rationale and would threaten to reduce welfare by preventing the potential realization of efficiencies. In a 2020 OECD paper drafted principally by DOJ and FTC economists, the U.S. government meticulously assessed the case for challenging such mergers and rejected it on economic grounds. The OECD paper is noteworthy in its entirely negative assessment of 1960s and 1970s conglomerate cases which Khan implicitly praises in suggesting they merely should be “updated” to deal with the current economy (citations omitted):

Today, the United States is firmly committed to the core values that antitrust law protects: competition, efficiency, and consumer welfare rather than individual competitors. During the ten-year period from 1965 to 1975, however, the Agencies challenged several mergers of unrelated products under theories that were antithetical to those values. The “entrenchment” doctrine, in particular, condemned mergers if they strengthened an already dominant firm through greater efficiencies, or gave the acquired firm access to a broader line of products or greater financial resources, thereby making life harder for smaller rivals. This approach is no longer viewed as valid under U.S. law or economic theory. …

These cases stimulated a critical examination, and ultimate rejection, of the theory by legal and economic scholars and the Agencies. In their Antitrust Law treatise, Phillip Areeda and Donald Turner showed that to condemn conglomerate mergers because they might enable the merged firm to capture cost savings and other efficiencies, thus giving it a competitive advantage over other firms, is contrary to sound antitrust policy, because cost savings are socially desirable. It is now recognized that efficiency and aggressive competition benefit consumers, even if rivals that fail to offer an equally “good deal” suffer loss of sales or market share. Mergers are one means by which firms can improve their ability to compete. It would be illogical, then, to prohibit mergers because they facilitate efficiency or innovation in production. Unless a merger creates or enhances market power or facilitates its exercise through the elimination of competition—in which case it is prohibited under Section 7—it will not harm, and more likely will benefit, consumers.

Given the well-reasoned rejection of conglomerate theories by leading antitrust scholars and modern jurisprudence, it would be highly wasteful for the FTC and DOJ to consider covering purely conglomerate (nonhorizontal and nonvertical) mergers in new guidelines. Absent new legislation, challenges of such mergers could be expected to fail in court. Regrettably, Khan appears oblivious to that reality.

Khan’s speech ends with a hat tip to internationalism and the ICN:

The U.S., of course, is far from alone in seeing the need for a course correction, and in certain regards our reforms may bring us in closer alignment with other jurisdictions. Given that we are here at ICN, it is worth considering how we, as an international community, can or should react to the shifting consensus.

Antitrust laws have been adopted worldwide, in large part at the urging of the United States (see here). They remain, however, national laws. One would hope that the United States, which in the past was the world leader in developing antitrust economics and enforcement policy, would continue to seek to retain this role, rather than merely emulate other jurisdictions to join an “international community” consensus. Regrettably, this does not appear to be the case. (Indeed, European Commissioner for Competition Margrethe Vestager made specific reference to a “coordinated approach” and convergence between U.S. and European antitrust norms in a widely heralded October 2021 speech at the annual Fordham Antitrust Conference in New York. And Vestager specifically touted European ex ante regulation as well as enforcement in a May 5 ICN speech that emphasized multinational antitrust convergence.)

Conclusion

Lina Khan’s recent ICN speech on merger policy sends all the wrong signals on merger guidelines revisions. It strongly hints that new guidelines will embody preconceived interventionist notions at odds with sound economics. By calling for a dramatically new direction in merger policy, it injects uncertainty into merger planning. Due to their interventionist bent, Khan’s remarks, combined with prior statements by U.S. Assistant Attorney General Jonathan Kanter (see here), may further serve to deter potentially welfare-enhancing consolidations. Whether the federal courts will be willing to defer to a drastically different approach to mergers by the agencies (one at odds with several decades of a careful evolutionary approach, rooted in consumer welfare-oriented economics) is, of course, another story. Stay tuned.

A raft of progressive scholars in recent years have argued that antitrust law remains blind to the emergence of so-called “attention markets,” in which firms compete by converting user attention into advertising revenue. This blindness, the scholars argue, has caused antitrust enforcers to clear harmful mergers in these industries.

It certainly appears the argument is gaining increased attention, for lack of a better word, with sympathetic policymakers. In a recent call for comments regarding their joint merger guidelines, the U.S. Justice Department (DOJ) and Federal Trade Commission (FTC) ask:

How should the guidelines analyze mergers involving competition for attention? How should relevant markets be defined? What types of harms should the guidelines consider?

Unfortunately, the recent scholarly inquiries into attention markets remain inadequate for policymaking purposes. For example, while many progressives focus specifically on antitrust authorities’ decisions to clear Facebook’s 2012 acquisition of Instagram and 2014 purchase of WhatsApp, they largely tend to ignore the competitive constraints Facebook now faces from TikTok (here and here).

When firms that compete for attention seek to merge, authorities need to infer whether the deal will lead to an “attention monopoly” (if the merging firms are the only, or primary, market competitors for some consumers’ attention) or whether other “attention goods” sufficiently constrain the merged entity. Put another way, the challenge is not just in determining which firms compete for attention, but in evaluating how strongly each constrains the others.

As this piece explains, recent attention-market scholarship fails to offer objective, let alone quantifiable, criteria that might enable authorities to identify firms that are unique competitors for user attention. These limitations should counsel policymakers to proceed with increased rigor when they analyze anticompetitive effects.

The Shaky Foundations of Attention Markets Theory

Advocates for more vigorous antitrust intervention have raised (at least) three normative arguments that pertain to attention markets and merger enforcement.

  • First, because they compete for attention, firms may be more competitively related than they seem at first sight. It is sometimes said that these firms are nascent competitors.
  • Second, the scholars argue that not all firms competing for attention should automatically be included in the same relevant market.
  • Finally, scholars argue that enforcers should adopt policy tools to measure market power in these attention markets—e.g., by applying a SSNIC test (“small but significant non-transitory increase in cost”), rather than a SSNIP test (“small but significant non-transitory increase in price”).

There are some contradictions among these three claims. On the one hand, proponents advocate adopting a broad notion of competition for attention, which would ensure that firms are seen as competitively related and thus boost the prospects that antitrust interventions targeting them will be successful. When the shoe is on the other foot, however, proponents fail to follow the logic they have sketched out to its natural conclusion; that is to say, they underplay the competitive constraints that are necessarily imposed by wider-ranging targets for consumer attention. In other words, progressive scholars are keen to ensure the concept is not mobilized to draw broader market definitions than is currently the case:

This “massive market” narrative rests on an obvious fallacy. Proponents argue that the relevant market includes “all substitutable sources of attention depletion,” so the market is “enormous.”

Faced with this apparent contradiction, scholars retort that the circle can be squared by deploying new analytical tools that measure competition for attention, such as the so-called SSNIC test. But do these tools actually resolve the contradiction? It would appear, instead, that they merely enable enforcers to selectively mobilize the attention-market concept in ways that fit their preferences. Consider the following description of the SSNIC test, by John Newman:

But if the focus is on the zero-price barter exchange, the SSNIP test requires modification. In such cases, the “SSNIC” (Small but Significant and Non-transitory Increase in Cost) test can replace the SSNIP. Instead of asking whether a hypothetical monopolist would increase prices, the analyst should ask whether the monopolist would likely increase attention costs. The relevant cost increases can take the form of more time or space being devoted to advertisements, or the imposition of more distracting advertisements. Alternatively, one might ask whether the hypothetical monopolist would likely impose an “SSNDQ” (Small but Significant and Non-Transitory Decrease in Quality). The latter framing should generally be avoided, however, for reasons discussed below in the context of anticompetitive effects. Regardless of framing, however, the core question is what would happen if the ratio of desired content to advertising load were to shift.

Tim Wu makes roughly the same argument:

The A-SSNIP would posit a hypothetical monopolist who adds a 5-second advertisement before the mobile map, and leaves it there for a year. If consumers accepted the delay, instead of switching to streaming video or other attentional options, then the market is correctly defined and calculation of market shares would be in order.

The key problem is this: consumer switching among platforms is consistent both with competition and with monopoly power. In fact, consumers are more likely to switch to other goods when they are faced with a monopoly. Perhaps more importantly, consumers can and do switch to a whole range of idiosyncratic goods. Absent some quantifiable metric, it is simply impossible to tell which of these alternatives are significant competitors.

None of this is new, of course. Antitrust scholars have spent decades wrestling with similar issues in connection with the price-related SSNIP test. The upshot of those debates is that the SSNIP test does not measure whether price increases cause users to switch. Instead, it examines whether firms can profitably raise prices above the competitive baseline. Properly understood, this nuance renders proposed SSNIC and SSNDQ tests (“small but significant non-transitory decrease in quality”) unworkable.
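The "profitably raise prices above the competitive baseline" logic can be made concrete with a toy critical-loss calculation. Everything below—prices, margins, switching rates, and the function name—is a hypothetical illustration, not how agencies actually implement the test:

```python
# Toy critical-loss sketch of the hypothetical-monopolist (SSNIP) logic.
# All figures are hypothetical illustrations, not real market data.

def ssnip_profitable(price, marginal_cost, quantity, share_lost):
    """Return True if a 5% price increase raises profit even though
    the firm loses `share_lost` of its unit sales to substitutes."""
    profit_before = (price - marginal_cost) * quantity
    profit_after = (price * 1.05 - marginal_cost) * quantity * (1 - share_lost)
    return profit_after > profit_before

# With a 40% margin, the critical loss for a 5% SSNIP is roughly
# 0.05 / (0.05 + 0.40) ≈ 11%. Losing 9% of sales leaves the increase
# profitable, suggesting the candidate market may be properly drawn:
print(ssnip_profitable(price=10.0, marginal_cost=6.0, quantity=1000, share_lost=0.09))  # True

# Losing 15% defeats the price rise: enough switching constrains the
# hypothetical monopolist, evidence the candidate market is too narrow:
print(ssnip_profitable(price=10.0, marginal_cost=6.0, quantity=1000, share_lost=0.15))  # False
```

The point in the text carries over: the test asks about profitability relative to a baseline, not merely whether any consumer switching occurs.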

First and foremost, proponents wrongly presume to know how firms would choose to exercise their market power, rendering the resulting tests unfit for policymaking purposes. This mistake largely stems from the conflation of price levels and price structures in two-sided markets. In a two-sided market, the price level refers to the cumulative price charged to both sides of a platform. Conversely, the price structure refers to the allocation of prices among users on both sides of a platform (i.e., how much users on each side contribute to the costs of the platform). This is important because, as Jean-Charles Rochet and Jean Tirole show in their seminal work on two-sided markets, changes to the price level and to the price structure each affect economic output in two-sided markets.

This has powerful ramifications for antitrust policy in attention markets. To be analytically useful, SSNIC and SSNDQ tests would have to alter the price level while holding the price structure equal. This is the opposite of what attention-market theory advocates are calling for. Indeed, increasing ad loads or decreasing the quality of services provided by a platform, while holding ad prices constant, evidently alters platforms’ chosen price structure.
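The level/structure distinction can be illustrated with a stylized example. All of the numbers and names below are hypothetical; real platforms set far more complex price schedules:

```python
# Stylized two-sided platform: users "pay" in attention (ad load),
# advertisers pay in cash. All figures are hypothetical.

def price_level(user_side_price, advertiser_side_price):
    """Price level: the cumulative price charged across both sides."""
    return user_side_price + advertiser_side_price

def price_structure(user_side_price, advertiser_side_price):
    """Price structure: each side's share of the total price."""
    total = user_side_price + advertiser_side_price
    return user_side_price / total, advertiser_side_price / total

# Baseline: users bear 2 units of attention cost, advertisers pay 8.
print(price_level(2.0, 8.0))       # level = 10.0
print(price_structure(2.0, 8.0))   # structure = (0.2, 0.8)

# A "SSNIC" raises only the user side by 5%, holding ad prices fixed:
# both the level and the allocation between the sides move together,
# so the test cannot vary the level while holding the structure equal.
print(price_level(2.0 * 1.05, 8.0))      # level ≈ 10.1
print(price_structure(2.0 * 1.05, 8.0))  # user share rises above 20%
```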

This matters. Even if the proposed tests were properly implemented (which would be difficult: it is unclear what a 5% quality degradation would look like), the tests would likely lead to false negatives, as they force firms to depart from their chosen (and, thus, presumably profit-maximizing) price structure/price level combinations.

Consider the following illustration: to a first approximation, increasing the quantity of ads served on YouTube would presumably decrease the price of ads, as doing so would increase output in the ad market (note that the test becomes even more absurd if ad revenues are held constant). In short, scholars fail to recognize that the consumer side of these markets is intrinsically related to the ad side. Each side affects the other in ways that prevent policymakers from using single-sided ad-load increases or quality decreases as an independent variable.

This leads to a second, more fundamental, flaw. To be analytically useful, these increased ad loads and quality deteriorations would have to be applied from the competitive baseline. Unfortunately, it is not obvious what this baseline looks like in two-sided markets.

Economic theory tells us that, in regular markets, goods are sold at marginal cost under perfect competition. However, there is no such shortcut in two-sided markets. As David Evans and Richard Schmalensee aptly summarize:

An increase in marginal cost on one side does not necessarily result in an increase in price on that side relative to price on the other. More generally, the relationship between price and cost is complex, and the simple formulas that have been derived for single-sided markets do not apply.

In other words, while economic theory suggests perfect competition among multi-sided platforms should result in zero economic profits, it does not say what the allocation of prices will look like in this scenario. There is thus no clearly defined competitive baseline upon which to apply increased ad loads or quality degradations. And this makes the SSNIC and SSNDQ tests unsuitable.

In short, the theoretical foundations necessary to apply the equivalent of a SSNIP test on the “free” side of two-sided platforms are largely absent (or exceedingly hard to apply in practice). Calls to implement SSNIC and SSNDQ tests thus greatly overestimate the current state of the art, as well as decision-makers’ ability to solve intractable economic conundrums. The upshot is that, while proposals to apply the SSNIP test to attention markets may have the trappings of economic rigor, the resemblance is superficial. As things stand, these tests fail to ascertain whether given firms are in competition, and in what market.

The Bait and Switch: Qualitative Indicia

These problems with the new quantitative metrics likely explain why proponents of tougher enforcement in attention markets often fall back upon qualitative indicia to resolve market-definition issues. As John Newman writes:

Courts, including the U.S. Supreme Court, have long employed practical indicia as a flexible, workable means of defining relevant markets. This approach considers real-world factors: products’ functional characteristics, the presence or absence of substantial price differences between products, whether companies strategically consider and respond to each other’s competitive conduct, and evidence that industry participants or analysts themselves identify a grouping of activity as a discrete sphere of competition. … The SSNIC test may sometimes be massaged enough to work in attention markets, but practical indicia will often—perhaps usually—be the preferable method.

Unfortunately, far from resolving the problems associated with measuring market power in digital markets (and of defining relevant markets in antitrust proceedings), this proposed solution would merely focus investigations on subjective and discretionary factors.

This can be easily understood by looking at the FTC’s Facebook complaint regarding its purchases of WhatsApp and Instagram. The complaint argues that Facebook—a “social networking service,” in the eyes of the FTC—was not interchangeable with either mobile-messaging services or online-video services. To support this conclusion, it cites a series of superficial differences. For instance, the FTC argues that online-video services “are not used primarily to communicate with friends, family, and other personal connections,” while mobile-messaging services “do not feature a shared social space in which users can interact, and do not rely upon a social graph that supports users in making connections and sharing experiences with friends and family.”

This is a poor way to delineate relevant markets. It wrongly portrays competitive constraints as a binary question, rather than a matter of degree. Pointing to the functional differences that exist among rival services mostly fails to resolve this question of degree. It also likely explains why advocates of tougher enforcement have often decried the use of qualitative indicia when the shoe is on the other foot—e.g., when authorities concluded that Facebook did not, in fact, compete with Instagram because their services were functionally different.

A second, and related, problem with the use of qualitative indicia is that they are, almost by definition, arbitrary. Take two services that may or may not be competitors, such as Instagram and TikTok. The two share some similarities, as well as many differences. For instance, while both services enable users to share and engage with video content, they differ significantly in the way this content is displayed. Unfortunately, absent quantitative evidence, it is simply impossible to tell whether, and to what extent, the similarities outweigh the differences. 

There is significant risk that qualitative indicia may lead to arbitrary enforcement, where markets are artificially narrowed by pointing to superficial differences among firms, and where competitive constraints are overemphasized by pointing to consumer switching. 

The Way Forward

The difficulties discussed above should serve as a good reminder that market definition is but a means to an end.

As William Landes, Richard Posner, and Louis Kaplow have all observed (here and here), market definition is merely a proxy for market power, which in turn enables policymakers to infer whether consumer harm (the underlying question to be answered) is likely in a given case.

Given the difficulties inherent in properly defining markets, policymakers should redouble their efforts to precisely measure both potential barriers to entry (the obstacles that may lead to market power) and anticompetitive effects (the potentially undesirable effects of market power), under a case-by-case analysis that looks at both sides of a platform.

Unfortunately, this is not how the FTC has proceeded in recent cases. The FTC’s Facebook complaint, to cite but one example, merely assumes the existence of network effects (a potential barrier to entry) with no effort to quantify their magnitude. Likewise, the agency’s assessment of consumer harm is just two pages long and includes superficial conclusions that appear plucked from thin air:

The benefits to users of additional competition include some or all of the following: additional innovation … ; quality improvements … ; and/or consumer choice … . In addition, by monopolizing the U.S. market for personal social networking, Facebook also harmed, and continues to harm, competition for the sale of advertising in the United States.

Not one of these assertions is based on anything that could remotely be construed as empirical or even anecdotal evidence. Instead, the FTC’s claims are presented as self-evident. Given the difficulties surrounding market definition in digital markets, this superficial analysis of anticompetitive harm is simply untenable.

In short, discussions around attention markets emphasize the important role of case-by-case analysis underpinned by the consumer welfare standard. Indeed, the fact that some of antitrust enforcement’s usual benchmarks are unreliable in digital markets reinforces the conclusion that an empirically grounded analysis of barriers to entry and actual anticompetitive effects must remain the cornerstones of sound antitrust policy. Or, put differently, uncertainty surrounding certain aspects of a case is no excuse for arbitrary speculation. Instead, authorities must meet such uncertainty with an even more vigilant commitment to thoroughness.

U.S. antitrust policy seeks to promote vigorous marketplace competition in order to enhance consumer welfare. For more than four decades, mainstream antitrust enforcers have taken their cue from the U.S. Supreme Court’s statement in Reiter v. Sonotone (1979) that antitrust is “a consumer welfare prescription.” Recent suggestions (see here and here) by new Biden administration Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) leadership that antitrust should promote goals apart from consumer welfare have yet to be embodied in actual agency actions, and they have not been tested by the courts. (Given Supreme Court case law, judicial abandonment of the consumer welfare standard appears unlikely, unless new legislation that displaces it is enacted.)   

Assuming that the consumer welfare paradigm retains its primacy in U.S. antitrust, how do the goals of antitrust match up with those of national security? Consistent with federal government pronouncements, the “basic objective of U.S. national security policy is to preserve and enhance the security of the United States and its fundamental values and institutions.” Properly applied, antitrust can retain its consumer welfare focus in a manner consistent with national security interests. Indeed, sound antitrust and national-security policies generally go hand-in-hand. The FTC and the DOJ should keep that in mind in formulating their antitrust policies (spoiler alert: they sometimes have failed to do so).

Discussion

At first blush, it would seem odd that enlightened consumer-welfare-oriented antitrust enforcement and national-security policy would be in tension. After all, enlightened antitrust enforcement is concerned with targeting transactions that harmfully reduce output and undermine innovation, such as hard-core collusion and courses of conduct that inefficiently exclude competitors and weaken marketplace competition. U.S. national security would seem to be promoted (or, at least, not harmed) by antitrust enforcement directed at supporting stronger, more vibrant American markets.

This initial instinct is correct, if antitrust-enforcement policy indeed reflects economically sound, consumer-welfare-centric principles. But are there examples where antitrust enforcement falls short and thereby is at odds with national security? An evaluation of three areas of interaction between the two American policy interests is instructive.

The degree of congruence between national security and appropriate consumer welfare-enhancing antitrust enforcement is illustrated by a brief discussion of:

  1. defense-industry mergers;
  2. the intellectual property-antitrust interface, with a focus on patent licensing; and
  3. proposed federal antitrust legislation.

The first topic presents an example of clear consistency between consumer-welfare-centric antitrust and national defense. In contrast, the second topic demonstrates that antitrust prosecutions (and policies) that inappropriately weaken intellectual-property protections are inconsistent with national defense interests. The second topic does not manifest a tension between antitrust and national security; rather, it illustrates a tension between national security and unsound antitrust enforcement. In a related vein, the third topic demonstrates how a change in the antitrust statutes that would undermine the consumer welfare paradigm would also threaten U.S. national security.

Defense-Industry Mergers

The consistency between antitrust goals and national security is relatively strong and straightforward in the field of defense-industry-related mergers and joint ventures. The FTC and DOJ traditionally have worked closely with the U.S. Defense Department (DOD) to promote competition and consumer welfare in evaluating business transactions that affect national defense needs.

The DOD has long supported policies to prevent overreliance on a single supplier for critical industrial-defense needs. Such a posture is consistent with the antitrust goal of preventing mergers to monopoly that reduce competition, raise prices, and diminish quality by creating or entrenching a dominant firm. As then-FTC Commissioner William Kovacic commented about an FTC settlement that permitted the United Launch Alliance (an American spacecraft launch service provider established in 2006 as a joint venture between Lockheed Martin and Boeing), “[i]n reviewing defense industry mergers, competition authorities and the DOD generally should apply a presumption that favors the maintenance of at least two suppliers for every weapon system or subsystem.”

Antitrust enforcers have, however, worked with DOD to allow the only two remaining suppliers of a defense-related product or service to combine their operations, subject to appropriate safeguards, when presented with scale economy and quality rationales that advanced national-security interests (see here).

Antitrust enforcers have also consulted and found common cause with DOD in opposing anticompetitive mergers that have national-security overtones. For example, antitrust enforcement actions targeting vertical defense-sector mergers that threaten anticompetitive input foreclosure or facilitate anticompetitive information exchanges are in line with the national-security goal of preserving vibrant markets that offer the federal government competitive, high-quality, innovative, and reasonably priced purchase options for its defense needs.

The FTC’s recent success in convincing Lockheed Martin to drop its proposed acquisition of Aerojet Rocketdyne Holdings fits into this category. (I express no view on the merits of this matter; I merely cite it as an example of FTC-DOD cooperation in considering a merger challenge.) In its February 2022 press release announcing the abandonment of this merger, the FTC stated that “[t]he acquisition would have eliminated the country’s last independent supplier of key missile propulsion inputs and given Lockheed the ability to cut off its competitors’ access to these critical components.” The FTC also emphasized the full consistency between its enforcement action and national-security interests:

Simply put, the deal would have resulted in higher prices and diminished quality and innovation for programs that are critical to national security. The FTC’s enforcement action in this matter dovetails with the DoD report released this week recommending stronger merger oversight of the highly concentrated defense industrial base.

Intellectual-Property Licensing

Shifts in government IP-antitrust patent-licensing policy perspectives

Intellectual-property (IP) licensing, particularly involving patents, is highly important to the dynamic and efficient dissemination of new technologies throughout the economy, which, in turn, promotes innovation and increased welfare (consumers’ and producers’ surplus). See generally, for example, Daniel Spulber’s The Case for Patents and Jonathan Barnett’s Innovation, Firms, and Markets. Patents are a property right, and they do not necessarily convey market power, as the federal government has recognized (see 2017 DOJ-FTC Antitrust Guidelines for the Licensing of Intellectual Property).

Standard setting through standard setting organizations (SSOs) has been a particularly important means of spawning valuable benchmarks (standards) that have enabled new patent-backed technologies to drive innovation and enable mass distribution of new high-tech products, such as smartphones. The licensing of patents that cover and make possible valuable standards—“standard-essential patents” or SEPs—has played a crucial role in bringing to market these products and encouraging follow-on innovations that have driven fast-paced welfare-enhancing product and process quality improvements.

The DOJ and FTC have recognized specific efficiency benefits of IP licensing in their 2017 Antitrust Guidelines for the Licensing of Intellectual Property, stating (citations deleted):

Licensing, cross-licensing, or otherwise transferring intellectual property (hereinafter “licensing”) can facilitate integration of the licensed property with complementary factors of production. This integration can lead to more efficient exploitation of the intellectual property, benefiting consumers through the reduction of costs and the introduction of new products. Such arrangements increase the value of intellectual property to consumers and owners. Licensing can allow an innovator to capture returns from its investment in making and developing an invention through royalty payments from those that practice its invention, thus providing an incentive to invest in innovative efforts. …

[L]imitations on intellectual property licenses may serve procompetitive ends by allowing the licensor to exploit its property as efficiently and effectively as possible. These various forms of exclusivity can be used to give a licensee an incentive to invest in the commercialization and distribution of products embodying the licensed intellectual property and to develop additional applications for the licensed property. The restrictions may do so, for example, by protecting the licensee against free riding on the licensee’s investments by other licensees or by the licensor. They may also increase the licensor’s incentive to license, for example, by protecting the licensor from competition in the licensor’s own technology in a market niche that it prefers to keep to itself.

Unfortunately, however, FTC and DOJ antitrust policies over the last 15 years have too often belied this generally favorable view of licensing practices with respect to SEPs. (See generally here, here, and here). Notably, the antitrust agencies have at various times taken policy postures and enforcement actions indicating that SEP holders may face antitrust challenges if:

  1. they fail to license all comers, including competitors, on fair, reasonable, and nondiscriminatory (FRAND) terms; and
  2. they seek to obtain injunctions against infringers.

In addition, antitrust policy officials (see 2011 FTC Report) have described FRAND price terms as cabined by the difference between the licensing rates for the first (included in the standard) and second (not included in the standard) best competing patented technologies available prior to the adoption of a standard. This pricing measure—based on the “incremental difference” between first- and second-best technologies—has been described as necessary to prevent SEP holders from deriving artificial “monopoly rents” that reflect the market power conferred by a standard. (But see then-FTC Commissioner Joshua Wright’s 2013 essay to the contrary, based on the economics of incomplete contracts.)
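As a purely arithmetic illustration of the "incremental difference" cap just described, consider the following sketch; both royalty rates are hypothetical numbers chosen for the example:

```python
# Illustrative arithmetic for the "incremental difference" FRAND cap.
# Both ex ante per-unit royalty rates below are hypothetical.
best_technology_rate = 1.00  # first-best technology, later included in the standard
second_best_rate = 0.80      # next-best alternative available before standardization

# Under this approach, the SEP holder's royalty is capped at the
# ex ante gap between the two competing technologies...
frand_cap = best_technology_rate - second_best_rate
print(round(frand_cap, 2))   # 0.2 per unit

# ...which, as the text notes, excludes any share of the additional
# value created by the technology's inclusion in the standard itself.
```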

This approach to SEPs undervalues them, harming the economy. Limitations on seeking injunctions (which are a classic property-right remedy) encourage opportunistic patent infringements and artificially disfavor SEP holders in bargaining over licensing terms with technology implementers, thereby reducing the value of SEPs. SEP holders are further disadvantaged by the presumption that they must license all comers. They also are harmed by the implication that they must be limited to a relatively low hypothetical “ex ante” licensing rate—a rate that totally fails to take into account the substantial economic welfare value that will accrue to the economy due to their contribution to the standard. Considered individually and as a whole, these negative factors discourage innovators from participating in standardization, to the detriment of standards quality. Lower-quality standards translate into inferior standardized products and processes and reduced innovation.

Recognizing this problem, in 2018, DOJ Assistant Attorney General for Antitrust Makan Delrahim announced a “New Madison Approach” (NMA) to SEP licensing, which recognized that:

  1. antitrust remedies are inappropriate for patent-licensing disputes between SEP-holders and implementers of a standard;
  2. SSOs should not allow collective actions by standard-implementers to disfavor patent holders;
  3. SSOs and courts should be hesitant to restrict SEP holders’ right to exclude implementers from access to their patents by seeking injunctions; and
  4. unilateral and unconditional decisions not to license a patent should be per se legal. (See, for example, here and here.)

Acceptance of the NMA would have counteracted the economically harmful degradation of SEPs stemming from prior government policies.

Regrettably, antitrust-enforcement-agency statements during the last year effectively have rejected the NMA. Most recently, in December 2021, the DOJ issued for public comment a Draft Policy Statement on Licensing Negotiations and Remedies for SEPs, which would displace a 2019 statement that had been in line with the NMA. Unless the FTC and Biden DOJ rethink their new position and decide instead to support the NMA, the anti-innovation approach to SEPs will once again prevail, with unfortunate consequences for American innovation.

The “weaker patents” implications of the draft policy statement would also prove detrimental to national security, as explained in a comment on the statement by a group of leading law, economics, and business scholars (including Nobel Laureate Vernon Smith) convened by the International Center for Law & Economics:

China routinely undermines U.S. intellectual property protections through its industrial policy. The government’s stated goal is to promote “fair and reasonable” international rules, but it is clear that China stretches its power over intellectual property around the world by granting “anti-suit injunctions” on behalf of Chinese smartphone makers, designed to curtail enforcement of foreign companies’ patent rights. …

Insufficient protections for intellectual property will hasten China’s objective of dominating collaborative standard development in the medium to long term. Simultaneously, this will engender a switch to greater reliance on proprietary, closed standards rather than collaborative, open standards. These harmful consequences are magnified in the context of the global technology landscape, and in light of China’s strategic effort to shape international technology standards. Chinese companies, directed by their government authorities, will gain significant control of the technologies that will underpin tomorrow’s digital goods and services.

A Center for Strategic and International Studies submission on the draft policy statement (signed by a former deputy secretary of the DOD, as well as former directors of the U.S. Patent and Trademark Office and the National Institute of Standards and Technology) also raised China-related national-security concerns:

[T]he largest short-term and long-term beneficiaries of the 2021 Draft Policy Statement are firms based in China. Currently, China is the world’s largest consumer of SEP-based technology, so weakening protection of American owned patents directly benefits Chinese manufacturers. The unintended effect of the 2021 Draft Policy Statement will be to support Chinese efforts to dominate critical technology standards and other advanced technologies, such as 5G. Put simply, devaluing U.S. patents is akin to a subsidized tech transfer to China.

Furthermore, in a more general vein, leading innovation economist David Teece also noted the negative national-security implications in his submission on the draft policy statement:

The US government, in reviewing competition policy issues that might impact standards, therefore needs to be aware that the issues at hand have tremendous geopolitical consequences and cannot be looked at in isolation. … Success in this regard will promote competition and is our best chance to maintain technological leadership—and, along with it, long-term economic growth and consumer welfare and national security.

That’s not all. In its public comment warning against precipitous finalization of the draft policy statement, the Innovation Alliance noted that, in recent years, major foreign jurisdictions have rejected the notion that SEP holders should be deprived of the opportunity to seek injunctions. The Innovation Alliance opined in detail on the China national-security issues (footnotes omitted):

[T]he proposed shift in policy will undermine the confidence and clarity necessary to incentivize investments in important and risky research and development while simultaneously giving foreign competitors who do not rely on patents to drive investment in key technologies, like China, a distinct advantage. …

The draft policy statement … would devalue SEPs, and undermine the ability of U.S. firms to invest in the research and development needed to maintain global leadership in 5G and other critical technologies.

Without robust American investments, China—which has clear aspirations to control and lead in critical standards and technologies that are essential to our national security—will be left without any competition. Since 2015, President Xi has declared “whoever controls the standards controls the world.” China has rolled out the “China Standards 2035” plan and has outspent the United States by approximately $24 billion in wireless communications infrastructure, while China’s five-year economic plan calls for $400 billion in 5G-related investment.

Simply put, the draft policy statement will give an edge to China in the standards race because, without injunctions, American companies will lose the incentive to invest in the research and development needed to lead in standards setting. Chinese companies, on the other hand, will continue to race forward, funded primarily not by license fees, but by the focused investment of the Chinese government. …

Public hearings are necessary to take into full account the uncertainty of issuing yet another policy on this subject in such a short time period.

A key part of those hearings and further discussions must be the national security implications of a further shift in patent enforceability policy. Our future safety depends on continued U.S. leadership in areas like 5G and artificial intelligence. Policies that undermine the enforceability of patent rights disincentivize the substantial private sector investment necessary for research and development in these areas. Without that investment, development of these key technologies will begin elsewhere—likely China. Before any policy is accepted, key national-security stakeholders in the U.S. government should be asked for their official input.

These are not the only comments that raised the negative national-security ramifications of the draft policy statement (see here and here). For example, current Republican and Democratic senators, former International Trade Commissioners, and former top DOJ and FTC officials also noted concerns. What’s more, the Patent Protection Society of China, which represents leading Chinese corporate implementers, filed a rather nonanalytic submission in favor of the draft statement. As one leading patent-licensing lawyer explains: “UC Berkeley Law Professor Mark Cohen, whose distinguished government service includes serving as the USPTO representative in China, submitted a thoughtful comment explaining how the draft Policy Statement plays into China’s industrial and strategic interests.”

Finally, by weakening patent protection, the draft policy statement is at odds with the 2021 National Security Commission on Artificial Intelligence Report, which called for the United States to “[d]evelop and implement national IP policies to incentivize, expand, and protect emerging technologies[,]” in response to Chinese “leveraging and exploiting intellectual property (IP) policies as a critical tool within its national strategies for emerging technologies.”

In sum, adoption of the draft policy statement would raise antitrust risks, weaken key property rights protections for SEPs, and undercut U.S. technological innovation efforts vis-à-vis China, thereby undermining U.S. national security.

FTC v. Qualcomm: Misguided enforcement and national security

U.S. national-security interests have been threatened by more than just the recent SEP policy pronouncements. In filing a January 2017 antitrust suit (at the very end of the Obama administration) against Qualcomm’s patent-licensing practices, the FTC (by a partisan 2-1 vote) ignored the economic efficiencies that underpinned this highly successful American technology company’s practices. Had the suit succeeded, U.S. innovation in a critically important technology area would have needlessly suffered, with China as a major beneficiary. A recent Federalist Society Regulatory Transparency Project report on the New Madison Approach underscored the broad policy implications of FTC v. Qualcomm (citations deleted):

The FTC’s Qualcomm complaint reflected the anti-SEP bias present during the Obama administration. If it had been successful, the FTC’s prosecution would have seriously undermined the freedom of the company to engage in efficient licensing of its SEPs.

Qualcomm is perhaps the world’s leading wireless technology innovator. It has developed, patented, and licensed key technologies that power smartphones and other wireless devices, and continues to do so. Many of Qualcomm’s key patents are SEPs subject to FRAND, directed to communications standards adopted by wireless devices makers. Qualcomm also makes computer processors and chips embodied in cutting edge wireless devices. Thanks in large part to Qualcomm technology, those devices have improved dramatically over the last decade, offering consumers a vast array of new services at a lower and lower price, when quality is factored in. Qualcomm thus is the epitome of a high tech American success story that has greatly benefited consumers.

Qualcomm: (1) sells its chips to “downstream” original equipment manufacturers (OEMs, such as Samsung and Apple), on the condition that the OEMs obtain licenses to Qualcomm SEPs; and (2) refuses to license its FRAND-encumbered SEPs to rival chip makers, while allowing those rivals to create and sell chips embodying Qualcomm SEP technologies to those OEMs that have entered a licensing agreement with Qualcomm.

The FTC’s 2017 antitrust complaint, filed in federal district court in San Francisco, charged that Qualcomm’s “no license, no chips” policy allegedly “forced” OEM cell phone manufacturers to pay elevated royalties on products that use a competitor’s baseband processors. The FTC deemed this an illegal “anticompetitive tax” on the use of rivals’ processors, since phone manufacturers “could not run the risk” of declining licenses and thus losing all access to Qualcomm’s processors (which would be needed to sell phones on important cellular networks). The FTC also argued that Qualcomm’s refusal to license its rivals despite its SEP FRAND commitment violated the antitrust laws. Finally, the FTC asserted that a 2011-2016 Qualcomm exclusive dealing contract with Apple (in exchange for reduced patent royalties) had excluded business opportunities for Qualcomm competitors.

The federal district court held for the FTC. It ordered that Qualcomm end these supposedly anticompetitive practices and renegotiate its many contracts. [Among the beneficiaries of new pro-implementer contract terms would have been a leading Chinese licensee of Qualcomm’s, Huawei, the huge Chinese telecommunications company that has been accused by the U.S. government of using technological “back doors” to spy on the United States.]

Qualcomm appealed, and in August 2020 a panel of the Ninth Circuit Court of Appeals reversed the district court, holding for Qualcomm. Some of the key points underlying this holding were: (1) Qualcomm had no antitrust duty to deal with competitors, consistent with established Supreme Court precedent (a very narrow exception to this precedent did not apply); (2) Qualcomm’s rates were chip supplier neutral because all OEMs paid royalties, not just rivals’ customers; (3) the lower court failed to show how the “no license, no chips” policy harmed Qualcomm’s competitors; and (4) Qualcomm’s agreements with Apple did not have the effect of substantially foreclosing the market to competitors. The Ninth Circuit as a whole rejected the FTC’s “en banc” appeal for review of the panel decision.

The appellate decision in Qualcomm largely supports pillar four of the NMA: that unilateral and unconditional decisions not to license a patent should be deemed legal under the antitrust laws. More generally, the decision evinces a refusal to find anticompetitive harm in licensing markets without hard empirical support. The FTC’s and the lower court’s findings of “harm” had been essentially speculative and anecdotal at best. They had ignored the “big picture” that the markets in which Qualcomm operates had seen vigorous competition and the conferral of enormous and growing welfare benefits on consumers, year by year. The lower court and the FTC had also turned a deaf ear to a legitimate efficiency-related business rationale that explained Qualcomm’s “no license, no chips” policy – a fully justifiable desire to obtain a fair return on Qualcomm’s patented technology.

Qualcomm is well reasoned, and in line with sound modern antitrust precedent, but it is only one holding. The extent to which this case’s reasoning proves influential in other courts may in part depend on the policies advanced by DOJ and the FTC going forward. Thus, a preliminary examination of the Biden administration’s emerging patent-antitrust policy is warranted. [Subsequent discussion shows that the Biden administration apparently has rejected pro-consumer policies embodied in the 9th U.S. Circuit’s Qualcomm decision and in the NMA.]

Although the 9th Circuit did not comment on them, national-security-policy concerns weighed powerfully against the FTC v. Qualcomm suit. In a July 2019 Statement of Interest (SOI) filed with the circuit court, DOJ cogently set forth the antitrust flaws in the district court’s decision favoring the FTC. Furthermore, the SOI also explained that “the public interest” favored a stay of the district court holding, due to national-security concerns (described in some detail in statements by the departments of Defense and Energy, appended to the SOI):

[T]he public interest also takes account of national security concerns. Winter v. NRDC, 555 U.S. 7, 23-24 (2008). This case presents such concerns. In the view of the Executive Branch, diminishment of Qualcomm’s competitiveness in 5G innovation and standard-setting would significantly impact U.S. national security. A251-54 (CFIUS); LD ¶¶10-16 (Department of Defense); ED ¶¶9-10 (Department of Energy). Qualcomm is a trusted supplier of mission-critical products and services to the Department of Defense and the Department of Energy. LD ¶¶5-8; ED ¶¶8-9. Accordingly, the Department of Defense “is seriously concerned that any detrimental impact on Qualcomm’s position as global leader would adversely affect its ability to support national security.” LD ¶16.

The [district] court’s remedy [requiring the renegotiation of Qualcomm’s licensing contracts] is intended to deprive, and risks depriving, Qualcomm of substantial licensing revenue that could otherwise fund time-sensitive R&D and that Qualcomm cannot recover later if it prevails. See, e.g., Op. 227-28. To be sure, if Qualcomm ultimately prevails, vacatur of the injunction will limit the severity of Qualcomm’s revenue loss and the consequent impairment of its ability to perform functions critical to national security. The Department of Defense “firmly believes,” however, “that any measure that inappropriately limits Qualcomm’s technological leadership, ability to invest in [R&D], and market competitiveness, even in the short term, could harm national security. The risks to national security include the disruption of [the Department’s] supply chain and unsure U.S. leadership in 5G.” LD ¶3. Consequently, the public interest necessitates a stay pending this Court’s resolution of the merits. In these rare circumstances, the interest in preventing even a risk to national security—“an urgent objective of the highest order”—presents reason enough not to enforce the remedy immediately. Int’l Refugee Assistance Project, 137 S. Ct. at 2088 (internal quotations omitted).

Not all national-security arguments against antitrust enforcement may be well-grounded, of course. The key point is that the interests of national security and consumer-welfare-centric antitrust are fully aligned when antitrust suits would inefficiently undermine the competitive vigor of a firm or firms that play a major role in supporting U.S. national-security interests. Such was the case in FTC v. Qualcomm. More generally, heightened antitrust scrutiny of efficient patent-licensing practices (as threatened by the Biden administration) would tend to diminish innovation by U.S. patentees, particularly in areas covered by standards that are key to leading global technologies. Such a diminution in innovation will tend to weaken American advantages in important industry sectors that are vital to U.S. national-security interests.

Proposed Federal Antitrust Legislation

Proposed federal antitrust legislation being considered by Congress (see here, here, and here for informed critiques) would prescriptively restrict certain large technology companies’ business transactions. If enacted, such legislation would preclude case-specific analysis of potential transaction-specific efficiencies, thereby undermining the consumer-welfare standard at the heart of current sound and principled antitrust enforcement. The legislation would also be at odds with our national-security interests, as a recent U.S. Chamber of Commerce paper explains:

Congress is considering new antitrust legislation which, perversely, would weaken leading U.S. technology companies by crafting special purpose regulations under the guise of antitrust to prohibit those firms from engaging in business conduct that is widely acceptable when engaged in by rival competitors.

A series of legislative proposals – some of which already have been approved by relevant Congressional committees – would, among other things: dismantle these companies; prohibit them from engaging in significant new acquisitions or investments; require them to disclose sensitive user data and sensitive IP and trade secrets to competitors, including those that are foreign-owned and controlled; facilitate foreign influence in the United States; and compromise cybersecurity.  These bills would fundamentally undermine American security interests while exempting from scrutiny Chinese and other foreign firms that do not meet arbitrary user and market capitalization thresholds specified in the legislation. …

The United States has never used legislation to punish success. In many industries, scale is important and has resulted in significant gains for the American economy, including small businesses.  U.S. competition law promotes the interests of consumers, not competitors. It should not be used to pick winners and losers in the market or to manage competitive outcomes to benefit select competitors.  Aggressive competition benefits consumers and society, for example by pushing down prices, disrupting existing business models, and introducing innovative products and services.

If enacted, the legislative proposals would drag the United States down in an unfolding global technological competition.  Companies captured by the legislation would be required to compete against integrated foreign rivals with one hand tied behind their backs.  Those firms that are the strongest drivers of U.S. innovation in AI, quantum computing, and other strategic technologies would be hamstrung or even broken apart, while foreign and state-backed producers of these same technologies would remain unscathed and seize the opportunity to increase market share, both in the U.S. and globally. …

Instead of warping antitrust law to punish a discrete group of American companies, the U.S. government should focus instead on vigorous enforcement of current law and on vocally opposing and effectively countering foreign regimes that deploy competition law and other legal and regulatory methods as industrial policy tools to unfairly target U.S. companies.  The U.S. should avoid self-inflicted wounds to our competitiveness and national security that would result from turning antitrust into a weapon against dynamic and successful U.S. firms.      

Consistent with this analysis, former Obama administration Defense Secretary Leon Panetta and former Trump administration Director of National Intelligence Dan Coats argued in a letter to U.S. House leadership (see here) that “imposing severe restrictions solely on U.S. giants will pave the way for a tech landscape dominated by China — echoing a position voiced by the Big Tech companies themselves.”

The national-security arguments against current antitrust legislative proposals, like the critiques of the unfounded FTC v. Qualcomm case, represent an alignment between sound antitrust policy and national-security analysis. Unfounded antitrust attacks on efficient business practices by large firms that help maintain U.S. technological leadership in key areas undermine both principled antitrust and national security.

Conclusion

Enlightened antitrust enforcement, centered on consumer welfare, can and should be read in a manner that is harmonious with national-security interests.

The cooperation between U.S. federal antitrust enforcers and the DOD in assessing defense-industry mergers and joint ventures is, generally speaking, an example of successful harmonization. This success reflects the fact that antitrust enforcers carry out their reviews of those transactions with an eye toward accommodating efficiencies that advance defense goals without sacrificing consumer welfare. Close antitrust-agency consultation with DOD is key to that approach.

Unfortunately, federal enforcement directed toward efficient intellectual-property licensing, as manifested in the Qualcomm case, reflects a disharmony between antitrust and national security. This disharmony could be eliminated if DOJ and the FTC adopted a dynamic view of intellectual property and the substantial economic-welfare benefits that flow from restrictive patent-licensing transactions.

In sum, a dynamic analysis reveals that consumer welfare is enhanced, not harmed, by not subjecting such licensing arrangements to antitrust threat. A more permissive approach to licensing is thus consistent with principled antitrust and with the national security interest of protecting and promoting strong American intellectual property (and, in particular, patent) protection. The DOJ and the FTC should keep this in mind and make appropriate changes to their IP-antitrust policies forthwith.

Finally, proposed federal antitrust legislation would bring about statutory changes that would simultaneously displace consumer welfare considerations and undercut national security interests. As such, national security is supported by rejecting unsound legislation, in order to keep in place consumer-welfare-based antitrust enforcement.

The acceptance and implementation of due-process standards confer a variety of welfare benefits on society. As Christopher Yoo, Thomas Fetzer, Shan Jiang, and Yong Huang explain, strong procedural due-process protections promote: (1) compliance with basic norms of impartiality; (2) greater accuracy of decisions; (3) stronger economic growth; (4) increased respect for government; (5) better compliance with the law; (6) better control of the bureaucracy; (7) restraints on the influence of special-interest groups; and (8) reduced corruption.  

Recognizing these benefits (and consistent with the long Anglo-American tradition of recognizing due-process rights that dates back to Magna Carta), the U.S. government (USG) has long been active in advancing the adoption of due-process principles by competition-law authorities around the world, working particularly through the Organisation for Economic Co-operation and Development (OECD) and the International Competition Network (ICN). More generally, due process may be seen as an aspect of the rule of law, which is as important in antitrust as in other legal areas.

The USG has supported OECD Competition Committee work on due-process safeguards which began in 2010, and which culminated in the OECD ministers’ October 2021 adoption of a “Recommendation on Transparency and Procedural Fairness in Competition Law Enforcement.” This recommendation calls for: (1) transparency and predictability in competition-law enforcement; (2) independence, impartiality, and professionalism of competition authorities; (3) non-discrimination, proportionality, and consistency in the treatment of parties subject to scrutiny; (4) timeliness in handling cases; (5) meaningful engagement with parties (including parties’ right to respond and be heard); (6) protection of confidential and privileged information; (7) impartial judicial review of enforcement decisions; and (8) periodic review of policies, rules, procedures, and guidelines, to ensure that they are aligned with the preceding seven principles.

The USG has also worked through the ICN to generate support for the acceptance of due-process principles by ICN member competition agencies and their governments. In describing ICN due-process initiatives, James Rill and Jana Seidl have explained that “[t]he current challenge is to determine the extent to which the ICN, as a voluntary organization, can or should establish mechanisms to evaluate implementation of … [due process] norms by its members and even non-members.”

In 2019, the ICN announced creation of a Framework for Competition Agency Procedures (CAP), open to both ICN and non-ICN national and multinational (most prominently, the EU’s Directorate General for Competition) competition agencies. The CAP essentially embodied the principles of a June 2018 U.S. Justice Department (DOJ) framework proposal. A September 2021 CAP Report (footnotes omitted) issued at an ICN steering-group meeting noted that the CAP had 73 members, and summarized the history and goals of the CAP as follows:

The ICN CAP is a non-binding, opt-in framework. It makes use of the ICN infrastructure to maximize visibility and impact while minimizing the administrative burden for participants that operate in different legal regimes and enforcement systems with different resource constraints. The ICN CAP promotes agreement among competition agencies worldwide on fundamental procedural norms. The Multilateral Framework for Procedures project, launched by the US Department of Justice in 2018, was the starting point for what is now the ICN CAP.

The ICN CAP rests on two pillars: the first pillar is a catalogue of fundamental, consensus principles for fair and effective agency procedures that reflect the broad consensus within the global competition community. The principles address: non-discrimination, transparency, notice of investigations, timely resolution, confidentiality protections, conflicts of interest, opportunity to defend, representation, written decisions, and judicial review.

The second pillar of the ICN CAP consists of two processes: the “CAP Cooperation Process,” which facilitates a dialogue between participating agencies, and the “CAP Review Process,” which enhances transparency about the rules governing participants’ investigation and enforcement procedures.

The ICN CAP template is the practical implementation tool for the CAP. Participants each submit CAP templates, outlining how their agencies adhere to each of the CAP principles. The templates allow participants to share and explain important features of their systems, including links and other references to related materials such as legislation, rules, regulations, and guidelines. The CAP templates are a useful resource for agencies to consult when they would like to gain a quick overview of other agencies’ procedures, benchmark with peer agencies, and develop new processes and procedures.

Through the two pillars and the template, the CAP provides a framework for agencies to affirm the importance of the CAP principles, to confer with other jurisdictions, and to illustrate how their regulations and guidelines adhere to those principles.

In short, the overarching goal of the ICN CAP is to give agencies a “nudge” to implement due-process principles by encouraging consultation with peer CAP members and exposing to public view agencies’ actual due-process record. The extent to which agencies will prove willing to strengthen their commitment to due process because of the CAP, or even join the CAP, remains to be seen. (China’s competition agency, the State Administration for Market Regulation (SAMR), has not joined the ICN CAP.)

Antitrust, Due Process, and the Rule of Law at the DOJ and the FTC  

Now that the ICN CAP and OECD recommendation are in place, it is important that the DOJ and Federal Trade Commission (FTC), as long-time international promoters of due process, lead by example in adhering to all of those multinational instruments’ principles. A failure to do so would, in addition to having negative welfare consequences for affected parties (and U.S. economic welfare), undermine USG international due-process advocacy. Less effective advocacy efforts could, of course, impose additional costs on American businesses operating overseas, by subjecting them to more procedurally defective foreign antitrust prosecutions than otherwise.

With those considerations in mind, let us briefly examine the current status of due-process protections afforded by the FTC and DOJ. Although traditionally robust procedural safeguards remain strong overall, some worrisome developments during the first year of the Biden administration merit highlighting. Those developments implicate classic procedural issues and some broader rule-of-law concerns. (This commentary does not examine due-process and rule-of-law issues associated with U.S. antitrust enforcement at the state level, a topic that warrants scrutiny as well.)

The FTC

  • New FTC leadership has taken several actions that have unfortunate due-process and rule-of-law implications (many of them through highly partisan 3-2 commission votes featuring strong dissents).

Consider the HSR Act, a Congressional compromise that gave enforcers advance notice of deals and parties the benefit of repose. HSR review [at the FTC] now faces death by a thousand cuts. We have hit month nine of a “temporary” and “brief” suspension of early termination. Letters are sent to parties when their waiting periods expire, warning them to close at their own risk. Is the investigation ongoing? Is there a set amount of time the parties should wait? No one knows! The new prior approval policy will flip the burden of proof and capture many deals below statutory thresholds. And sprawling investigations covering non-competition concerns exceed our Clayton Act authority.

These policy changes impose a gratuitous tax on merger activity – anticompetitive and procompetitive alike. There are costs to interfering with the market for corporate control, especially as we attempt to rebound from the pandemic. If new leadership wants the HSR Act rewritten, they should persuade Congress to amend it rather than taking matters into their own hands.

Uncertainty and delay surrounding merger proposals, and new merger-review processes that appear to flout statutory commands, are FTC “innovations” in obvious tension with due-process guarantees.

  • FTC rulemaking initiatives have due-process and rule-of-law problems. As Commissioner Wilson noted (footnotes omitted), “[t]he [FTC] majority changed our rules of practice to limit stakeholder input and consolidate rulemaking power in the chair’s office. In Commissioner [Noah] Phillips’ words, these changes facilitate more rules, but not better ones.” Lack of stakeholder input offends due process. Even more serious, however, is the fact that far-reaching FTC competition rules are being planned (see the December 2021 FTC Statement of Regulatory Priorities). FTC competition rulemaking is likely beyond its statutory authority and would fail a cost-benefit analysis (see here). Moreover, even if competition rules survived, they would offend the rule of law (see here) by “lead[ing] to disparate legal treatment of a firm’s business practices, depending upon whether the FTC or the U.S. Justice Department was the investigating agency.”
  • The FTC’s July 2021 withdrawal of its 2015 “Statement of Enforcement Principles Regarding ‘Unfair Methods of Competition’ [UMC] Under Section 5 of the FTC Act” likewise undercuts the rule of law (see here). The 2015 Statement had tended to increase predictability in enforcement by tying the FTC’s exercise of its UMC authority to well-understood antitrust rule-of-reason principles and the generally accepted consumer welfare standard. By withdrawing the statement (over the dissents of Commissioners Wilson and Phillips) without promulgating a new policy, the FTC majority reduced enforcement guidance and generated greater legal uncertainty. The notion that the FTC may apply the UMC concept in an unbounded fashion lacks legal principle and threatens to chill innovative and welfare-enhancing business conduct.
  • Finally, the FTC’s abrupt September 2021 withdrawal of its approval of the jointly issued 2020 DOJ-FTC Vertical Merger Guidelines (again over dissents by Commissioners Wilson and Phillips) offends the rule of law in three ways. As Commissioner Wilson explains, it engenders confusion as to the FTC’s policies regarding vertical-merger analysis going forward; it appears to reflect flawed economic thinking regarding vertical integration (which may in turn lead to enforcement error); and it creates a potential tension between DOJ and FTC approaches to vertical acquisitions (the third concern may disappear if and when DOJ and the FTC agree to new merger guidelines).

The DOJ

To date, the Biden administration’s DOJ has taken fewer actions implicating rule-of-law and due-process concerns than has the FTC. Two recent initiatives with significant rule-of-law implications, however, deserve mention.

  • First, on Dec. 6, 2021, DOJ suddenly withdrew a 2019 policy statement on “Licensing Negotiations and Remedies for Standards-Essential Patents Subject to Voluntary F/RAND Commitments.” In so doing, DOJ simultaneously released a new draft policy statement on the same topic, and requested public comments. The timing of the withdrawal was peculiar, since the U.S. Patent and Trademark Office (PTO) and the National Institute of Standards and Technology (NIST)—who had joined with DOJ in the 2019 policy statement (which itself had replaced a 2013 policy statement)—did not yet have new Senate-confirmed leadership and were apparently not involved in the withdrawal. What’s more, DOJ originally requested that public comments be filed by the beginning of January, a ridiculously short amount of time for such a complex topic. (It later relented and established an early February deadline.) More serious than these procedural irregularities, however, are two new features of the Draft Policy Statement: (1) its delineation of a suggested private-negotiation framework for patent licensing; and (2) its assertion that standard essential patent (SEP) holders essentially forfeit the right to seek an injunction. These provisions, though not binding, may have a coercive effect on some private negotiators, and they problematically insert the government into matters that are appropriately the province of private businesses and the courts. Such an involvement by government enforcers in private negotiations, which treats one category of patents (SEPs) less favorably than others, raises rule-of-law questions.
  • Second, in January 2022, DOJ and the FTC jointly issued a “Request for Information on Merger Enforcement” (RFI) that contemplated the issuance of new merger guidelines (see my recent analysis, here). The RFI was chock full of queries to prospective commentators that generally reflected a merger-skeptical tone. This suggests a predisposition to challenge mergers that, if embodied in guidelines language, could discourage some (or perhaps many) non-problematic consolidations from being proposed. New merger guidelines that were impliedly anti-merger would be a departure from previous guidelines, which stated in neutral fashion that they would consider both the anticompetitive risks and procompetitive benefits of mergers under review. A second major concern is that the enforcement agencies might produce long and detailed guidelines containing all or most of the many theories of competitive harm found in the RFI. Overly complex guidelines would provide no true guidance to private parties, inconsistent with the principle that individuals should be informed of what the law is. Such guidelines would also give enforcers greater flexibility to selectively pick and choose the theories best suited to block particular mergers. As such, the guidelines might be viewed by judges as justifications for arbitrary, rather than principled, enforcement, at odds with the rule of law.

Conclusion

“No man is an island, entire of itself.” In today’s world of multinational antitrust cooperation, the same holds true for competition agencies. Efforts to export due process in competition law, which have been a USG priority for many years, will inevitably falter if other jurisdictions perceive the FTC and DOJ as not practicing what they preach.

It is to be hoped that the FTC and DOJ will take this international dimension into account in assessing the merits of antitrust “reforms” now under consideration. New enforcement policies that sow delay and uncertainty undermine the rule of law and are inconsistent with due-process principles. The consumer welfare harm that may flow from such deficient policies may be substantial. The agency missteps identified above should be rectified, and new policies that would weaken due-process protections and undermine the rule of law should be avoided.

President Joe Biden’s July 2021 executive order set forth a commitment to reinvigorate U.S. innovation and competitiveness. The administration’s efforts to pass the America COMPETES Act would appear to further demonstrate a serious intent to pursue these objectives.

Yet several actions taken by federal agencies threaten to undermine the intellectual-property rights and transactional structures that have driven the exceptional performance of U.S. firms in key areas of the global innovation economy. These regulatory missteps together represent a policy “lose-lose” that lacks any sound basis in innovation economics and threatens U.S. leadership in mission-critical technology sectors.

Life Sciences: USTR Campaigns Against Intellectual-Property Rights

In the pharmaceutical sector, the administration’s signature action has been an unprecedented campaign by the Office of the U.S. Trade Representative (USTR) to block enforcement of patents and other intellectual-property rights held by companies that have broken records in the speed with which they developed and manufactured COVID-19 vaccines on a mass scale.

Patents were not an impediment in this process. To the contrary: they were necessary predicates to induce venture-capital investment in a small firm like BioNTech, which undertook drug development and then partnered with the much larger Pfizer to execute testing, production, and distribution. If success in vaccine development is rewarded with expropriation, this vital public-health sector is unlikely to attract investors in the future. 

Contrary to increasingly common assertions that the Bayh-Dole Act (which enables universities to seek patents arising from research funded by the federal government) “robs” taxpayers of intellectual property they funded, the development of Covid-19 vaccines by scientist-founded firms illustrates how the combination of patents and private capital is essential to convert academic research into life-saving medical solutions. The biotech ecosystem has long relied on patents to structure partnerships among universities, startups, and large firms. The costly path from lab to market relies on a secure property-rights infrastructure to ensure exclusivity, without which no investor would put capital at stake in what is already a high-risk, high-cost enterprise.  

This is not mere speculation. During the decades prior to the Bayh-Dole Act, the federal government placed strict limitations on the ability to patent or exclusively license innovations arising from federally funded research projects. The result: the market showed little interest in making the investment needed to convert those innovations into commercially viable products that might benefit consumers. This history casts great doubt on the wisdom of the USTR’s campaign to limit the ability of biopharmaceutical firms to maintain legal exclusivity over certain life sciences innovations.

Genomics: FTC Attempts to Block the Illumina/GRAIL Acquisition

In the genomics industry, the Federal Trade Commission (FTC) has devoted extensive resources to oppose the acquisition by Illumina—the market leader in next-generation DNA-sequencing equipment—of a medical-diagnostics startup, GRAIL (an Illumina spinoff), that has developed an early-stage cancer screening test.

It is hard to see the competitive threat. GRAIL is a pre-revenue company that operates in a novel market segment and its diagnostic test has not yet received approval from the Food and Drug Administration (FDA). To address concerns over barriers to potential competitors in this nascent market, Illumina has committed to 12-year supply contracts that would bar price increases or differential treatment for firms that develop oncology-detection tests requiring use of the Illumina platform.

One of Illumina’s few competitors in the global market is the BGI Group, a China-based company that, in 2013, acquired Complete Genomics, a U.S. target that Illumina pursued but relinquished due to anticipated resistance from the FTC in the merger-review process.  The transaction was then cleared by the Committee on Foreign Investment in the United States (CFIUS).

The FTC’s case against Illumina’s re-acquisition of GRAIL relies on theoretical predictions of consumer harm in a market that is not yet operational. Hypothetical market failure scenarios may suit an academic seminar but fall well below the probative threshold for antitrust intervention. 

Most critically, the Illumina enforcement action places at risk a key element of well-functioning innovation ecosystems. Economies of scale and network effects lead technology markets to converge on a handful of leading platforms, which then often outsource research and development by funding and sometimes acquiring smaller firms that develop complementary technologies. This symbiotic relationship encourages entry and benefits consumers by bringing new products to market as efficiently as possible.

If antitrust interventions based on regulatory fiat, rather than empirical analysis, disrupt settled expectations in the M&A market that innovations can be monetized through acquisition transactions by larger firms, venture capital may be unwilling to fund such startups in the first place. Independent development or an initial public offering are often not feasible exit options. It is likely that innovation will then retreat to the confines of large incumbents that can fund research internally but often execute it less effectively. 

Wireless Communications: DOJ Takes Aim at Standard-Essential Patents

Wireless communications stand at the heart of the global transition to a 5G-enabled “Internet of Things” that will transform business models and unlock efficiencies in myriad industries.  It is therefore of paramount importance that policy actions in this sector rest on a rigorous economic basis. Unfortunately, a recent policy shift proposed by the U.S. Department of Justice’s (DOJ) Antitrust Division does not meet this standard.

In December 2021, the Antitrust Division released a draft policy statement that would largely bar owners of standard-essential patents from seeking injunctions against infringers, which are usually large device manufacturers. These patents cover wireless functionalities that enable transformative solutions in myriad industries, ranging from communications to transportation to health care. A handful of U.S. and European firms lead in wireless chip design and rely on patent licensing to disseminate technology to device manufacturers and to fund billions of dollars in research and development. The result is a technology ecosystem that has enjoyed continuous innovation, widespread user adoption, and declining quality-adjusted prices.

The inability to block infringers disrupts this equilibrium by signaling to potential licensees that wireless technologies developed by others can be used at will, with the terms of use to be negotiated through costly and protracted litigation. A no-injunction rule would discourage innovation while encouraging delaying tactics favored by well-resourced device manufacturers (including some of the world’s largest companies by market capitalization) that occupy bottleneck pathways to lucrative retail markets in the United States, China, and elsewhere.

Rather than promoting competition or innovation, the proposed policy would simply transfer wealth from firms that develop new technologies at great cost and risk to firms that prefer to use those technologies at no cost at all. This does not benefit anyone other than device manufacturers that already capture the largest portion of economic value in the smartphone supply chain.

Conclusion

From international trade to antitrust to patent policy, the administration’s actions imply little appreciation for the property rights and contractual infrastructure that support real-world innovation markets. In particular, the administration’s policies endanger the intellectual-property rights and monetization pathways that support market incentives to invest in the development and commercialization of transformative technologies.

This creates an inviting vacuum for strategic rivals that are vigorously pursuing leadership positions in global technology markets. In industries that stand at the heart of the knowledge economy—life sciences, genomics, and wireless communications—the administration is on a counterproductive trajectory that overlooks the business realities of technology markets and threatens to push capital away from the entrepreneurs that drive a robust innovation ecosystem. It is time to reverse course.

The Federal Trade Commission (FTC) on Dec. 2 filed an administrative complaint to block the vertical merger between Nvidia Corp., a graphics chip supplier, and Arm Ltd., a computing-processor designer. The press release accompanying the complaint stresses the allegation that “the combined firm would have the means and incentive to stifle innovative next-generation technologies, including those used to run datacenters and driver-assistance systems in cars.” According to the FTC:

Because Arm’s technology is a critical input that enables competition between Nvidia and its competitors in several markets, the complaint alleges that the proposed merger would give Nvidia the ability and incentive to use its control of this technology to undermine its competitors, reducing competition and ultimately resulting in reduced product quality, reduced innovation, higher prices, and less choice, harming the millions of Americans who benefit from Arm-based products[.]

Assuming the merger proposal is not dropped (it also faces tough sledding in the European Union and the United Kingdom), findings of fact developed at the FTC administrative trial scheduled to begin next August will shed light on the robustness of the complaint’s allegations. Without waiting that long, however, and without commenting on the FTC’s theory of competitive harm, it is useful to take stock of the substantial efficiencies that may be associated with the merger, which can be gleaned from the public record. (The following discussion draws primarily on four sources, see here, here, here, and here.)

The Proposed Merger and Its Efficiencies

Arm has been a key player in the development of next-generation processors for the better part of the last 30 years. Arm-based processors can be found in most mobile devices, from mobile phones and tablets to some computers. Their ubiquity stems from their power, efficiency, high speed, and low cost. Part of this low cost comes from Arm’s licensing scheme, whereby Arm itself does not produce or deliver any semiconductors. Rather, it licenses its intellectual property to other businesses, allowing those businesses great freedom of implementation in return for zero manufacturing risk for Arm. This means that neither consumers nor businesses can buy an Arm processor to put into their computers, and there is no such thing as an Arm-branded processor. Companies use Arm’s technology to develop, refine, and manufacture their own processors.

Nvidia, also a long-time player in the microprocessor space, takes a decidedly different approach to the semiconductor market, manufacturing and selling its devices to end users and businesses alike. Nvidia graphics cards (GPUs) are integrated into various computing machines, from consumer laptops to data-center servers, and all carry Nvidia branding. This approach places significantly greater manufacturing risk on Nvidia but allows for significantly greater control over the integration and operation of its products. Since Nvidia undertakes development of optimization and compatibility in-house, it can ensure that its GPU technology works similarly across devices, a step that Arm does not take.

Additionally, there are two ways in which outside companies may interact with Arm’s IP. The first involves buying the right to produce a stock processor and modifying it to suit the business’ needs. This is the less expensive option and allows businesses to undertake the bare minimum of research and development to make their product work. Arm supplies them with the specifications to manufacture the processor, but the optimization and compatibility testing is the responsibility of the end business.

The second avenue is by purchasing what is known as an “architectural license,” giving the business rights to the underlying processor technology and coding language, but not a processor design. In those cases, the end business designs a processor from scratch, optimizing and integrating as it goes along to make sure the processor is a perfect fit for its device. While this integration generally leads to better results for the consumer, this method requires significantly higher research and development costs, leading to higher prices for the device. 

The second avenue enables businesses to significantly advance the capabilities of their processors beyond what is achievable through an Arm-specific design. Since Arm generally focuses on CPU technology, the integration of the additional components needed to make the computer work—like the motherboard, hard drives, and GPU—is left to the business. In many cases, these components are pieced together from various sources and may be poorly integrated, leading to lower-powered machines with inferior battery life.

However, businesses like Apple and Samsung have taken advantage of architectural licenses to use Arm processor technology and fully integrate all necessary components to work together seamlessly. This can improve battery life, speed, and efficiency in ways that off-the-shelf components are not capable of achieving. This fully integrated system, called a system on chip (SoC) design, advances computing far beyond Arm’s current offerings and presents a significant competitive threat to the processor market.

Given these circumstances, vertical integration of Arm with Nvidia may present both significant efficiencies and new competition in processor markets. Nvidia, with its expertise in manufacturing and designing integrated systems, may benefit from bringing Arm’s processor design in-house. It could save on licensing costs and use the extra capital to bring fully integrated off-the-shelf SoC designs to the mass market. This could reduce the cost of SoC implementation for computer manufacturers, reduce the time spent designing new computers, and bring the price of computers and mobile devices down for consumers.

Additionally, integration with Nvidia would allow Arm to keep pace with the wave of innovation from Apple and Samsung, among others. Those companies are making significant strides in the mobile-computing market, designing smaller, faster, and more energy-efficient processors that can be put into just about any form factor. Arm is significantly behind the curve when looking toward the next generation of processing technology. Integrating with Nvidia may be what the company needs to become competitive in the years to come.

One argument against allowing the merger to be completed is that Arm is a critical trading partner with nearly every processor manufacturer in the market, including Nvidia. Up to this point, Arm has not been owned by a single manufacturer and has not had an incentive to prioritize working with one manufacturer over another. Should the merger go through, Nvidia would own Arm, including the IP used by other companies, leading to concern from the FTC and other international regulators that Nvidia would be able to foreclose rivals from critical IP.

There is a strong counterargument, however, that Nvidia would be going against its own interest if it seeks to foreclose the market. Arm-based processors have become a dominant processor technology in recent years, integrated into 90% of the mobile-device market and nearly 34% of the overall computing market. This guaranteed revenue stream is a gold mine for the company, amounting to nearly $2 billion annually.

Closing the door to this revenue stream by revoking access to Arm’s IP would surely come back to bite the newly merged company. Foreclosing IP would have the effect of raising prices and reducing the quantity of processors in the market, but it would also likely force the market to shift away from Arm-based processors over time. Arm already has been forced to reduce the cost to license its technology in recent years in order to stave off competition from open-source chip designs that are available without a license. Any action that damaged the overall computing market would harm consumers, businesses, and the newly merged company alike. Denying IP to the broader market would likely not pass an internal cost-benefit analysis for the merged entity.

Conclusion

We do not express an opinion on the ultimate antitrust merits of the Arm-Nvidia vertical merger. We note, however, that vertical mergers are typically procompetitive. Furthermore, information in the public record about the proposed consolidation strongly suggests that it could generate substantial efficiencies that would enhance competition in markets for next-generation computers and mobile devices, in turn benefiting consumers. FTC theories of merger-related anticompetitive foreclosure (which at first blush appear somewhat counterintuitive) need to be scrutinized carefully in light of specific facts, and should be assessed with a jaundiced eye in light of the powerful efficiency arguments in favor of the Arm-Nvidia merger.

Recent antitrust forays on both sides of the Atlantic have unfortunate echoes of the oldie-but-baddie “efficiencies offense” that once plagued American and European merger analysis (and, more broadly, reflected a “big is bad” theory of antitrust). After a very short overview of the history of merger efficiencies analysis under American and European competition law, we briefly examine two current enforcement matters “on both sides of the pond” that impliedly give rise to such a concern. Those cases may regrettably foreshadow a move by enforcers to downplay the importance of efficiencies, if not openly reject them.

Background: The Grudging Acceptance of Merger Efficiencies

Not long ago, economically literate antitrust teachers in the United States enjoyed poking fun at such benighted 1960s Supreme Court decisions as Procter & Gamble (following in the wake of Brown Shoe and Philadelphia National Bank). Those holdings—which not only rejected efficiencies justifications for mergers, but indeed “treated efficiencies more as an offense”—seemed a thing of the past, put to rest by the rise of an economic approach to antitrust. Several early European Commission merger-control decisions also arguably embraced an “efficiencies offense.”

Starting in the 1980s, the promulgation of increasingly economically sophisticated merger guidelines in the United States led to the acceptance of efficiencies (albeit less than perfectly) as an important aspect of integrated merger analysis. Several practitioners have claimed, nevertheless, that “efficiencies are seldom credited and almost never influence the outcome of mergers that are otherwise deemed anticompetitive.” Commissioner Christine Wilson has argued that the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) still have work to do in “establish[ing] clear and reasonable expectations for what types of efficiency analysis will and will not pass muster.”

In its first few years of merger review, which was authorized in 1989, the European Commission was hostile to merger-efficiency arguments.  In 2004, however, the EC promulgated horizontal merger guidelines that allow for the consideration of efficiencies, but only if three cumulative conditions (consumer benefit, merger specificity, and verifiability) are satisfied. A leading European competition practitioner has characterized several key European Commission merger decisions in the last decade as giving rather short shrift to efficiencies. In light of that observation, the practitioner has advocated that “the efficiency offence theory should, once again, be repudiated by the Commission, in order to avoid deterring notifying parties from bringing forward perfectly valid efficiency claims.”

In short, although the actual weight enforcers accord to efficiency claims is a matter of debate, efficiency justifications are cognizable, subject to constraints, as a matter of U.S. and European Union merger-enforcement policy. Whether that will remain the case is, unfortunately, uncertain, given DOJ and FTC plans to revise merger guidelines, as well as EU talk of convergence with U.S. competition law.

Two Enforcement Matters with ‘Efficiencies Offense’ Overtones

Two Facebook-related matters currently before competition enforcers—one in the United States and one in the United Kingdom—have implications for the possible revival of an antitrust “efficiencies offense” as a “respectable” element of antitrust policy. (I use the term Facebook to reference both the platform company and its corporate parent, Meta.)

FTC v. Facebook

The FTC’s 2020 federal district court monopolization complaint against Facebook, still at the motion-to-dismiss stage with respect to the amended complaint (see here for an overview of the initial complaint and the judge’s dismissal of it), rests substantially on claims that Facebook’s acquisitions of Instagram and WhatsApp harmed competition. As Facebook points out in its recent reply brief supporting its motion to dismiss the FTC’s amended complaint, the FTC appears to be treating merger-related efficiencies as anticompetitive in critiquing those acquisitions. Specifically:

[The amended complaint] depends on the allegation that Facebook’s expansion of both Instagram and WhatsApp created a “protective ‘moat’” that made it harder for rivals to compete because Facebook operated these services at “scale” and made them attractive to consumers post-acquisition. . . . The FTC does not allege facts that, left on their own, Instagram and WhatsApp would be less expensive (both are free; Facebook made WhatsApp free); or that output would have been greater (their dramatic expansion at “scale” is the linchpin of the FTC’s “moat” theory); or that the products would be better in any specific way.

The FTC’s concerns about a scale-based, merger-related output expansion that benefited consumers and thereby allegedly enhanced Facebook’s market position eerily echo the commission’s concerns in Procter & Gamble that merger-related cost-reducing joint efficiencies in advertising had an anticompetitive “entrenchment” effect. Both positions, in essence, characterize output-increasing efficiencies as harmful to competition: in other words, as “efficiencies offenses.”

UK Competition and Markets Authority (CMA) v. Facebook

The CMA announced Dec. 1 that it had decided to block retrospectively Facebook’s 2020 acquisition of Giphy, which is “a company that provides social media and messaging platforms with animated GIF images that users can embed in posts and messages. . . .  These platforms license the use of Giphy for its users.”

The CMA theorized that Facebook could harm competition by (1) restricting access to Giphy’s digital libraries to Facebook’s competitors; and (2) preventing Giphy from developing into a potential competitor to Facebook’s display-advertising business.

As a CapX analysis explains, the CMA’s theory of harm to competition, based on theoretical speculation, is problematic. First, a behavioral remedy short of divestiture, such as requiring Facebook to maintain open access to its gif libraries, would deal with the threat of restricted access. Indeed, Facebook promised at the time of the acquisition that Giphy would maintain its library and make it widely available. Second, “loss of a single, relatively small, potential competitor out of many cannot be counted as a significant loss for competition, since so many other potential and actual competitors remain.” Third, given the purely theoretical and questionable danger to future competition, the CMA “has blocked this deal on relatively speculative potential competition grounds.”

Apart from the weakness of the CMA’s case for harm to competition, the CMA appears to ignore a substantial potential dynamic integrative efficiency flowing from Facebook’s acquisition of Giphy. As David Teece explains:

Facebook’s acquisition of Giphy maintained Giphy’s assets and furthered its innovation in Facebook’s ecosystem, strengthening that ecosystem in competition with others; and via Giphy’s APIs, strengthening the ecosystems of other service providers as well.

There is no evidence that the CMA seriously took account of this integrative efficiency, which benefits consumers by offering them a richer experience from Facebook and its subsidiary Instagram, and which spurs competing ecosystems to enhance their own offerings to consumers as well. This is a failure to properly account for an efficiency. Moreover, to the extent the CMA viewed these integrative benefits as somehow anticompetitive (because they enhanced Facebook’s competitive position), the improvement of Facebook’s ecosystem could itself have been deemed a type of “efficiencies offense.”

Are the Facebook Cases Merely Random Straws in the Wind?

At first blush, it might seem that too much is being read into the apparent slighting of efficiencies in the two current Facebook cases. Nevertheless, recent policy rhetoric suggests that economic-efficiency arguments (whose status at the enforcement agencies was tenuous to begin with) may actually be viewed as “offensive” by the new breed of enforcers.

In her Sept. 22 policy statement on “Vision and Priorities for the FTC,” Chair Lina Khan advocated focusing on the possible competitive harm flowing from actions of “gatekeepers and dominant middlemen,” and from “one-sided [vertical] contract provisions” that are “imposed by dominant firms.” No suggestion can be found in the statement that such vertical relationships often confer substantial benefits on consumers. This hints at a new campaign by the FTC against vertical restraints (as opposed to an emphasis on clearly welfare-inimical conduct) that could discourage a wide range of efficiency-producing contracts.

Chair Khan also sponsored the FTC’s July 2021 rescission of its Section 5 Policy Statement on Unfair Methods of Competition, which had emphasized the primacy of consumer welfare as the guiding principle underlying FTC antitrust enforcement. A willingness to set aside (or place a lower priority on) consumer welfare considerations suggests a readiness to ignore efficiency justifications that benefit consumers.

Even more troubling, a direct attack on the consideration of efficiencies is found in the statement accompanying the FTC’s September 2021 withdrawal of the 2020 Vertical Merger Guidelines:

The statement by the FTC majority . . . notes that the 2020 Vertical Merger Guidelines had improperly contravened the Clayton Act’s language with its approach to efficiencies, which are not recognized by the statute as a defense to an unlawful merger. The majority statement explains that the guidelines adopted a particularly flawed economic theory regarding purported pro-competitive benefits of mergers, despite having no basis of support in the law or market reality.

Also noteworthy is Khan’s seeming interest (found in her writings here, here, and here) in reviving Robinson-Patman Act enforcement. What’s worse, President Joe Biden’s July 2021 Executive Order on Competition explicitly endorses FTC investigation of “retailers’ practices on the conditions of competition in the food industries, including any practices that may violate [the] Robinson-Patman Act” (emphasis added). Those troubling statements from the administration ignore the widespread scholarly disdain for Robinson-Patman, which is almost unanimously viewed as an attack on efficiencies in distribution. For example, in recommending the act’s repeal in 2007, the congressionally established Antitrust Modernization Commission stressed that the act “protects competitors against competition and punishes the very price discounting and innovation in distribution methods that the antitrust laws otherwise encourage.”

Finally, newly confirmed Assistant Attorney General for Antitrust Jonathan Kanter (who is widely known as a Big Tech critic) has expressed his concerns about the consumer welfare standard and the emphasis on economics in antitrust analysis. Such concerns also suggest, at least by implication, that the Antitrust Division under Kanter’s leadership may manifest a heightened skepticism toward efficiencies justifications.

Conclusion

Recent straws in the wind suggest that an anti-efficiencies hay pile is in the works. Although antitrust agencies have not yet officially rejected the consideration of efficiencies, nor endorsed an “efficiencies offense,” the signs are troubling. Newly minted agency leaders’ skepticism toward antitrust economics, combined with their de-emphasis of the consumer welfare standard and efficiencies (at least in the merger context), suggest that even strongly grounded efficiency explanations may be summarily rejected at the agency level. In foreign jurisdictions, where efficiencies are even less well-established, and enforcement based on mere theory (as opposed to empiricism) is more widely accepted, the outlook for efficiencies stories appears to be no better.     

One powerful factor, however, should continue to constrain the anti-efficiencies movement, at least in the United States: the federal courts. As demonstrated most recently in the 9th U.S. Circuit Court of Appeals’ FTC v. Qualcomm decision, American courts remain committed to insisting on empirical support for theories of harm and on seriously considering business justifications for allegedly suspect contractual provisions. (The role of foreign courts in curbing prosecutorial excesses not grounded in economics, and in weighing efficiencies, depends upon the jurisdiction, but in general such courts are far less of a constraint on enforcers than American tribunals.)

While the DOJ and FTC (and, perhaps to a lesser extent, foreign enforcers) will have to keep the judiciary in mind in deciding to bring enforcement actions, the denigration of efficiencies by the agencies still will have an unfortunate demonstration effect on the private sector. Given the cost (both in resources and in reputational capital) associated with antitrust investigations, and the inevitable discounting for the risk of projects caught up in such inquiries, a publicly proclaimed anti-efficiencies enforcement philosophy will do damage. On the margin, it will lead businesses to introduce fewer efficiency-seeking improvements that could be (wrongly) characterized as “strengthening” or “entrenching” market dominance. Such business decisions, in turn, will be welfare-inimical; they will deny consumers the benefit of efficiencies-driven product and service enhancements, and slow the rate of business innovation.

As such, it is to be hoped that, upon further reflection, U.S. and foreign competition enforcers will see the light and publicly proclaim that they will fully weigh efficiencies in analyzing business conduct. The “efficiencies offense” was a lousy tune. That “oldie-but-baddie” should not be replayed.

[Judge Douglas Ginsburg was invited to respond to the Beesley Lecture given by Andrea Coscelli, chief executive of the U.K. Competition and Markets Authority (CMA). Both the lecture and Judge Ginsburg’s response were broadcast by the BBC on Oct. 28, 2021. The text of Mr. Coscelli’s Beesley lecture is available on the CMA’s website. Judge Ginsburg’s response follows below.]

Thank you, Victoria, for the invitation to respond to Mr. Coscelli and his proposal for a legislatively founded Digital Markets Unit. Mr. Coscelli is one of the most talented, successful, and creative heads a competition agency has ever had. In the case of the DMU [ed., Digital Markets Unit], however, I think he has let hope triumph over experience and prudence. This is often the case with proposals for governmental reform: Indeed, it has a name, the Nirvana Fallacy, which comes from comparing the imperfectly functioning marketplace with the perfectly functioning government agency. Everything we know about the regulation of competition tells us the unintended consequences may dwarf the intended benefits and the result may be a less, not more, competitive economy. The precautionary principle counsels skepticism about such a major and inherently risky intervention.

Mr. Coscelli made a point in passing that highlights the difference in our perspectives: He said the SMS [ed., strategic market status] merger regime would entail “a more cautious standard of proof.” In our shared Anglo-American legal culture, a more cautious standard of proof means the government would intervene in fewer, not more, market activities; proof beyond a reasonable doubt in criminal cases is a more cautious standard than a mere preponderance of the evidence. I, too, urge caution, but of the traditional kind.

I will highlight five areas of concern with the DMU proposal.

I. Chilling Effects

The DMU’s ability to designate a firm as being of strategic market status—or SMS—will place a potential cloud over innovative activity in far more sectors than Mr. Coscelli could mention in his lecture. He views the DMU’s reach as limited to a small number of SMS-designated firms; and that may prove true, but there is nothing in the proposal limiting the DMU’s reach.

Indeed, the DMU’s authority to regulate digital markets is surely going to be difficult to confine. Almost every major retail activity or consumer-facing firm involves an increasingly significant digital component, particularly after the pandemic forced many more firms online. Deciding which firms the DMU should cover seems easy in theory, but will prove ever more difficult and cumbersome in practice as digital technology continues to evolve. For instance, now that money has gone digital, a bank is little more than a digital platform bringing together lenders (called depositors) and borrowers, much as Amazon brings together buyers and sellers; so, is every bank with market power and an entrenched position to be subject to rules and remedies laid down by the DMU as well as supervision by the bank regulators? Is Aldi in the crosshairs now that it has developed an online retail platform? Match.com, too? In short, the number of SMS firms will likely grow apace in the next few years.

II. SMS Designations Should Not Apply to the Whole Firm

The CMA’s proposal would apply each SMS designation firm-wide, even if the firm has market power in only a single line of business. This will inhibit investment in further diversification and put an SMS firm at a competitive disadvantage across all its businesses.

Perhaps company-wide SMS designations could be justified if the unintended costs were balanced by expected benefits to consumers, but this will not likely be the case. First, there is little evidence linking consumer harm to lines of business in which large digital firms do not have market power. On the contrary, despite the discussion of Amazon’s supposed threat to competition, consumers enjoy lower prices from many more retailers because of the competitive pressure Amazon brings to bear upon them.

Second, the benefits Mr. Coscelli expects the economy to reap from faster government enforcement are, at best, a mixed blessing. The proposal, you see, reverses the usual legal norm, instead making interim relief the rule rather than the exception. If a firm appeals its SMS designation, then under the CMA’s proposal, the DMU’s SMS designations and pro-competition interventions, or PCIs, will not be stayed pending appeal, raising the prospect that a firm’s activities could be regulated for a significant period even though it was improperly designated. Even prevailing in the courts may be a Pyrrhic victory because opportunities will have slipped away. Making matters worse, the DMU’s designation of a firm as SMS will likely receive a high degree of judicial deference, so that errors may never be corrected.

III. The DMU Cannot Be Evidence-based Given its Goals and Objectives

The DMU’s stated goal is to “further the interests of consumers and citizens in digital markets by promoting competition and innovation.”[1] DMU’s objectives for developing codes of conduct are: fair trading, open choices, and trust and transparency.[2] Fairness, openness, trust, and transparency are all concepts that are difficult to define and probably impossible to quantify. Therefore, I fear Mr. Coscelli’s aspiration that the DMU will be an evidence-based, tailored, and predictable regime seems unrealistic. The CMA’s idea of “an evidence-based regime” seems destined to rely mostly upon qualitative conjecture about the potential for the code of conduct to set “rules of the game” that encourage fair trading, open choices, trust, and transparency. Even if the DMU commits to considering empirical evidence at every step of its process, these fuzzy, qualitative objectives will allow it to come to virtually any conclusion about how a firm should be regulated.

Implementing those broad goals also throws into relief the inevitable tensions among them. Some potential conflicts between DMU’s objectives for developing codes of conduct are clear from the EU’s experience. For example, one of the things DMU has considered already is stronger protection for personal data. The EU’s experience with the GDPR shows that data protection is costly and, like any costly requirement, tends to advantage incumbents and thereby discourage new entry. In other words, greater data protections may come at the expense of start-ups or other new entrants and the contribution they would otherwise have made to competition, undermining open choices in the name of data transparency.

Another example of tension is clear from the distinction between Apple’s iOS and Google’s Android ecosystems. They take different approaches to the trade-off between data privacy and flexibility in app development. Apple emphasizes consumer privacy at the expense of allowing developers flexibility in their design choices and offers its products at higher prices. Android devices have fewer consumer-data protections but allow app developers greater freedom to design their apps to satisfy users and are offered at lower prices. The case of Epic Games v. Apple put on display the purportedly pro-competitive arguments the DMU could use to justify shutting down Apple’s “walled garden,” whereas the EU’s GDPR would cut against Google’s open ecosystem with limited consumer protections. Apple’s model encourages consumer trust and adoption of a single, transparent model for app development, but Google’s model encourages app developers to choose from a broader array of design and payment options and allows consumers to choose between the options; no matter how the DMU designs its code of conduct, it will be creating winners and losers at the cost of either “open choices” or “trust and transparency.” As experience teaches, it is simply not possible for an agency with multiple goals to serve them all at the same time. The result is an unreviewable discretion to choose among them ad hoc.

Finally, notice that none of the DMU’s objectives—fair trading, open choices, and trust and transparency—revolves around quantitative evidence; at bottom, these goals are not amenable to the kind of rigor Mr. Coscelli hopes for.

IV. Speed of Proposals

Mr. Coscelli has emphasized the slow pace of competition law matters; while I empathize, surely forcing merging parties to prove a negative and truncating their due process rights is not the answer.

As I mentioned earlier, it seems a more cautious standard of proof to Mr. Coscelli is one in which an SMS firm’s proposal to acquire another firm is presumed, or all but presumed, to be anticompetitive and unlawful. That is, the DMU would block the transaction unless the firms can prove their deal would not be anticompetitive—an extremely difficult task. The most self-serving version of the CMA’s proposal would require it to prove only that the merger poses a “realistic prospect” of lessening competition, which is vague, but may in practice be well below a 50% chance. Proving that the merged entity does not harm competition will still require a predictive forward-looking assessment with inherent uncertainty, but the CMA wants the costs of uncertainty placed upon firms, rather than upon itself. Given the inherent uncertainty in merger analysis, the CMA’s proposal would impose an unprecedented burden of proof on merging parties.

But it is not only merging parties the CMA would deprive of due process; the DMU’s so-called pro-competitive interventions, or PCI, SMS designations, and code-of-conduct requirements generally would not be stayed pending appeal. Further, an SMS firm could overturn the CMA’s designation only if it could overcome substantial deference to the DMU’s fact-finding. It is difficult to discern, then, the difference between agency decisions and final orders.

The DMU would not have to show or even assert an extraordinary need for immediate relief. This is the opposite of current practice in every jurisdiction with which I am familiar.  Interim orders should take immediate effect only in exceptional circumstances, when there would otherwise be significant and irreversible harm to consumers, not in the ordinary course of agency decision making.

V. Antitrust Is Not Always the Answer

Although one can hardly disagree with Mr. Coscelli’s premise that the digital economy raises new legal questions and practical challenges, it is far from clear that competition law is the answer to them all. Some commentators of late are proposing to use competition law to solve consumer protection and even labor market problems. Unfortunately, this theme also recurs in Mr. Coscelli’s lecture. He discusses concerns with data privacy and fair and reasonable contract terms, but those have long been the province of consumer protection and contract law; a government does not need to step in and regulate all realms of activity by digital firms and call it competition law. Nor is there reason to confine needed protections of data privacy or fair terms of use to SMS firms.

Competition law remedies are sometimes poorly matched to the problems a government is trying to correct. Mr. Coscelli discusses the possibility of strong interventions, such as forcing the separation of a platform from its participation in retail markets; for example, the DMU could order Amazon to spin off its online business selling and shipping its own brand of products. Such powerful remedies can be a sledgehammer; consider forced data sharing or interoperability to make it easier for new competitors to enter. For example, if Apple’s App Store is required to host all apps submitted to it in the interest of consumer choice, then Apple loses its ability to screen for security, privacy, and other consumer benefits, as its refusal to deal is its only way to prevent participation in its store. Further, it is not clear consumers want Apple’s store to change; indeed, many prefer Apple products because of their enhanced security.

Forced data sharing would also be problematic; the hiQ v. LinkedIn case in the United States should serve as a cautionary tale. The trial court granted a preliminary injunction forcing LinkedIn to allow hiQ to scrape its users’ profiles while the suit was ongoing. LinkedIn ultimately won the suit because it did not have market power, much less a monopoly, in any relevant market. The court concluded each theory of anticompetitive conduct was implausible, but meanwhile LinkedIn had been forced to allow hiQ to scrape its data for an extended period before the final decision. There is no simple mechanism to “unshare” the data now that LinkedIn has prevailed. This type of case could be common under the CMA proposal because the DMU’s orders will go into immediate effect.

There is potentially much redeeming power in the Digital Regulation Co-operation Forum as Mr. Coscelli described it, but I take a different lesson from this admirable attempt to coordinate across agencies: Perhaps it is time to look beyond antitrust to solve problems that are not based upon market power. As the DRCF highlights, there are multiple agencies with overlapping authority in the digital market space. ICO and Ofcom each have authority to take action against a firm that disseminates fake news or false advertisements. Mr. Coscelli says it would be too cumbersome to take down individual bad actors, but, if so, then the solution is to adopt broader consumer protection rules, not apply an ill-fitting set of competition law rules. For example, the U.K. could change its notice-and-takedown rules to subject platforms to strict liability if they host fake news, even without knowledge that they are doing so, or perhaps only if they are negligent in discharging their obligation to police against it.

Alternatively, the government could shrink the amount of time platforms have to take down information; France gives platforms only about an hour to remove harmful information. That sort of solution does not raise the same prospect of broadly chilling market activity, but still addresses one of the concerns Mr. Coscelli raises with digital markets.

In sum, although Mr. Coscelli is of course correct that competition authorities and governments worldwide are considering whether to adopt broad reforms to their competition laws, the case against broadening remains strong. Instead of relying upon the self-corrective potential of markets, which is admittedly sometimes slower than anyone would like, the CMA assumes markets need regulation until firms prove otherwise. Although clearly well-intentioned, the DMU proposal is in too many respects not up to the task of protecting competition in digital markets; at worst, it will inhibit innovation in digital markets to the point of driving startups and other innovators out of the U.K.


[1] See Digital markets Taskforce, A new pro-competition regime for digital markets, at 22, Dec. 2020, available at: https://assets.publishing.service.gov.uk/media/5fce7567e90e07562f98286c/Digital_Taskforce_-_Advice.pdf; Oliver Dowden & Kwasi Kwarteng, A New Pro-competition Regime for Digital Markets, July 2021, available from: https://www.gov.uk/government/consultations/a-new-pro-competition-regime-for-digital-markets, at ¶ 27.

[2] Sam Bowman, Sam Dumitriu & Aria Babu, Conflicting Missions: The Risks of the Digital Markets Unit to Competition and Innovation, Int’l Center for L. & Econ., June 2021, at 13.

[This post adapts elements of “Technology Mergers and the Market for Corporate Control,” forthcoming in the Missouri Law Review.]

In recent years, a growing chorus of voices has argued that existing merger rules fail to apprehend competitively significant mergers, either because they fall below existing merger-filing thresholds or because they affect innovation in ways that are purportedly ignored.

These fears are particularly acute in the pharmaceutical and tech industries, where several high-profile academic articles and reports claim to have identified important gaps in current merger-enforcement rules, particularly with respect to acquisitions involving nascent and potential competitors (here, here, and here, among many others).

Such fears have led activists, lawmakers, and enforcers to call for tougher rules, including the introduction of more stringent merger-filing thresholds and other substantive changes, such as the inversion of the burden of proof when authorities review mergers and acquisitions involving digital platforms.

However, as we discuss in a recent working paper—forthcoming in the Missouri Law Review and available on SSRN—these proposals tend to overlook the important tradeoffs that would ensue from attempts to decrease the number of false positives under existing merger rules and thresholds.

The paper draws from two key strands of economic literature that are routinely overlooked (or summarily dismissed) by critics of the status quo.

For a start, antitrust enforcement is not costless. In the case of merger enforcement, not only is it expensive for agencies to detect anticompetitive deals but, more importantly, overbearing rules may deter beneficial merger activity that creates value for consumers.

Second, critics tend to overlook the possibility that incumbents’ superior managerial or other capabilities (i.e., what made them successful in the first place) makes them the ideal acquisition partners for entrepreneurs and startup investors looking to sell.

The result is a body of economic literature that focuses almost entirely on hypothetical social costs, while ignoring the redeeming benefits of corporate acquisitions, as well as the social cost of enforcement.

Kill Zones

One of the most significant allegations leveled against large tech firms is that their very presence in a market may hinder investments, entry, and innovation, creating what some have called a “kill zone.” The strongest expression in the economic literature of this idea of a kill zone stems from a working paper by Sai Krishna Kamepalli, Raghuram Rajan, and Luigi Zingales.

The paper makes two important claims, one theoretical and one empirical. From a theoretical standpoint, the authors argue that the prospect of an acquisition by a dominant platform deters consumers from joining rival platforms, and that this, in turn, hampers the growth of these rivals. The authors then test a similar hypothesis empirically. They find that acquisitions by a dominant platform—such as Google or Facebook—decrease investment levels and venture capital deals in markets that are “similar” to that of the target firm.

But both findings are problematic. For a start, Zingales and his co-authors’ theoretical model is premised on questionable assumptions about the way in which competition develops in the digital space. The first is that early adopters of new platforms—called “techies” in the authors’ parlance—face high switching costs because of their desire to learn these platforms in detail. As an initial matter, it would appear facially contradictory that “techies” are both the group with the highest switching costs and the group that switches the most. The authors further assume that “techies” would incur lower adoption costs if they remained on the incumbent platform and waited for the rival platform to be acquired.

Unfortunately, while these key behavioral assumptions drive the results of the theoretical model, the paper presents no evidence to support their presence in real-world settings. In that sense, the authors commit the same error as previous theoretical work concerning externalities, which has tended to overestimate their frequency.

Second, the empirical analysis put forward in the paper is unreliable for policymaking purposes. The authors notably find that:

[N]ormalized VC investments in start-ups in the same space as the company acquired by Google and Facebook drop by over 40% and the number of deals falls by over 20% in the three years following an acquisition.

However, the results of this study are derived from the analysis of only nine transactions. The study also fails to clearly show that firms in the treatment and control groups are qualitatively similar. In a nutshell, the study compares industry acquisitions exceeding $500 million to Facebook and Google’s acquisitions that exceed that amount. This does not tell us whether the mergers in both groups involved target companies with similar valuations or similar levels of maturity. This does not necessarily invalidate the results, but it does suggest that policymakers should be circumspect in interpreting those results.

Finally, the paper fails to demonstrate evidence that existing antitrust regimes fail to achieve an optimal error-cost balance. The central problem is that the paper has indeterminate welfare implications. For instance, as the authors note, the declines in investment in spaces adjacent to the incumbent platforms occurred during a time of rapidly rising venture capital investment, both in terms of the number of deals and dollars invested. It is entirely plausible that venture capital merely shifted to other sectors.

Put differently, on its own terms, the evidence merely suggests that acquisitions by Google and Facebook affected the direction of innovation, not its overall rate. And there is little to suggest that this shift was suboptimal, from a welfare standpoint.

In short, as the authors themselves conclude: “[i]t would be premature to draw any policy conclusion on antitrust enforcement based solely on our model and our limited evidence.”

Mergers and Potential Competition

Scholars have also posited more direct effects from acquisitions of startups or nascent companies by incumbent technology market firms.

Some scholars argue that incumbents might acquire rivals that do not yet compete with them directly, in order to reduce the competitive pressure they will face in the future. In his paper “Potential Competition and Antitrust Analysis: Monopoly Profits Exceed Duopoly Profits,” Steven Salop argues:

Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.

However, these antitrust theories of harm suffer from several important flaws. They rest upon several restrictive assumptions that are not certain to occur in real-world settings. Most are premised on the notion that, in a given market, monopoly profits generally exceed joint duopoly profits. This allegedly makes it profitable, and mutually advantageous, for an incumbent to protect its monopoly position by preemptively acquiring potential rivals.

Accordingly, under these theories, anticompetitive mergers are only possible when the acquired rival could effectively challenge the incumbent. But these are, of course, only potential challengers; there is no guarantee that any one of them could or would mount a viable competitive threat.

Less obviously, it must be the case that the rival can hope to share only duopoly profits, as opposed to completely overthrowing the incumbent or surpassing it with a significantly larger share of the market. Where competition is “for the market” itself, monopoly maintenance would fail to explain a rival’s decision to sell. Because there would be no asymmetry between the expected profits of the incumbent and the rival, monopoly maintenance alone would not give rise to mutually advantageous deals.
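The profit condition underlying these theories can be illustrated with a quick numerical sketch. The figures below are hypothetical, chosen only to show why a preemptive acquisition is mutually advantageous when (and only when) monopoly profits exceed joint duopoly profits:

```python
# Hypothetical payoffs (illustrative only; not drawn from Salop's paper).
monopoly_profit = 100   # incumbent's profit if entry is preempted
duopoly_profit = 40     # each firm's profit if the entrant competes

# The most the incumbent would pay to preempt entry is what entry costs it:
incumbent_willingness = monopoly_profit - duopoly_profit   # 100 - 40 = 60

# The least the entrant would accept is its payoff from entering and competing:
entrant_reservation = duopoly_profit                       # 40

# A mutually advantageous deal exists because 60 > 40 -- equivalently,
# because monopoly profits (100) exceed JOINT duopoly profits (40 + 40).
deal_is_feasible = incumbent_willingness > entrant_reservation
print(deal_is_feasible)  # True
```

If instead the entrant expected to displace the incumbent rather than split duopoly profits, the asymmetry vanishes and the monopoly-maintenance motive alone no longer supports a sale price both sides would accept.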

Second, potential competition does not always increase consumer welfare.  Indeed, while the presence of potential competitors might increase price competition, it can also have supply-side effects that cut in the opposite direction.

For example, as Nobel laureate Joseph Stiglitz observed, a monopolist threatened by potential competition may invest in socially wasteful R&D efforts or entry-deterrence mechanisms, and it may operate at below-optimal scale in anticipation of future competitive entry.

There are also pragmatic objections. Analyzing a merger’s effect on potential competition would compel antitrust authorities and courts to make increasingly speculative assessments concerning the counterfactual setting of proposed acquisitions.

In simple terms, it is far easier to determine whether a merger between McDonald’s and Burger King would lead to increased hamburger prices in the short run than it is to determine whether a gaming platform like Steam or the Epic Games Store might someday compete with video-streaming or music-subscription platforms like Netflix or Spotify. It is not that the above models are necessarily wrong, but rather that applying them to practical cases would require antitrust enforcers to estimate mostly unknowable factors.

Finally, the real test for regulators is not just whether they can identify possibly anticompetitive mergers, but whether they can do so in a cost-effective manner. Whether it is desirable to implement a given legal test is not simply a function of its accuracy, the cost to administer it, and the respective costs of false positives and false negatives. It also critically depends on how prevalent the conduct is that adjudicators would be seeking to foreclose.

Consider two hypothetical settings. Imagine there are 10,000 tech mergers in a given year, of which either 1,000 or 2,500 are anticompetitive (the remainder are procompetitive or competitively neutral). Suppose that authorities can either attempt to identify anticompetitive mergers with 75% accuracy, or perform no test at all—i.e., letting all mergers go through unchallenged.

If there are 1,000 anticompetitive mergers, applying the test would result in 7,500 correct decisions and 2,500 incorrect ones (2,250 false positives and 250 false negatives). Doing nothing would lead to 9,000 correct decisions and 1,000 false negatives. If the number of anticompetitive deals were 2,500, applying the test would lead to the same number of incorrect decisions as not applying it (1,875 false positives and 625 false negatives, versus 2,500 false negatives). The advantage would tilt toward applying the test if anticompetitive mergers were even more widespread.
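The error counts in the hypothetical above can be reproduced with a few lines of arithmetic (the function name and structure here are mine, introduced only to make the computation explicit):

```python
# Error-cost arithmetic for the hypothetical merger-screening example above:
# 10,000 mergers per year; a test that is 75% accurate on both anticompetitive
# and benign deals; and either 1,000 or 2,500 truly anticompetitive mergers.

def error_counts(total, bad, accuracy):
    """Return (correct decisions, false positives, false negatives)."""
    good = total - bad
    true_pos = bad * accuracy      # anticompetitive deals correctly blocked
    false_neg = bad - true_pos     # anticompetitive deals wrongly cleared
    true_neg = good * accuracy     # benign deals correctly cleared
    false_pos = good - true_neg    # benign deals wrongly blocked
    return true_pos + true_neg, false_pos, false_neg

for bad in (1000, 2500):
    correct, fp, fn = error_counts(10_000, bad, 0.75)
    # "Do nothing" clears every deal, so its only errors are the bad deals.
    print(f"bad={bad}: test -> {correct:.0f} correct "
          f"({fp:.0f} FP, {fn:.0f} FN); no test -> {bad} FN")
```

With 1,000 bad deals the test yields 7,500 correct decisions against 9,000 for doing nothing; at 2,500 bad deals the two approaches produce the same number of errors (2,500 each), which is the break-even point the text describes.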

This hypothetical example holds a simple lesson for policymakers: the rarer the conduct that they are attempting to identify, the more accurate their identification method must be, and the more costly false negatives must be relative to false positives.

As discussed below, current empirical evidence does not suggest that anticompetitive mergers of this sort are particularly widespread, nor does it offer accurate heuristics to detect the ones that are. Finally, there is little sense that the cost of false negatives significantly outweighs that of false positives. In short, there is currently little evidence to suggest that tougher enforcement would benefit consumers.

Killer Acquisitions

Killer acquisitions are, effectively, a subset of the “potential competitor” mergers discussed in the previous section. As defined by Colleen Cunningham, Florian Ederer, and Song Ma, they are those deals where “an incumbent firm may acquire an innovative target and terminate the development of the target’s innovations to preempt future competition.”

Cunningham, Ederer, and Ma’s highly influential paper on killer acquisitions has been responsible for much of the recent renewed interest in the effect that mergers exert on innovation. The authors studied thousands of pharmaceutical mergers and concluded that between 5.3% and 7.4% of them were killer acquisitions. As they write:

[W]e empirically compare development probabilities of overlapping acquisitions, which are, in our theory, motivated by a mix of killer and development intentions, and non-overlapping acquisitions, which are motivated only by development intentions. We find an increase in acquisition probability and a decrease in post-acquisition development for overlapping acquisitions and interpret that as evidence for killer acquisitions. […]

[W]e find that projects acquired by an incumbent with an overlapping drug are 23.4% less likely to have continued development activity compared to drugs acquired by non-overlapping incumbents.

From a policy standpoint, the question is what weight antitrust authorities, courts, and legislators should give to these findings. Stated differently, does the paper provide sufficient evidence to warrant reform of existing merger-filing thresholds and review standards? There are several factors counseling that policymakers should proceed with caution.

To start, the study's industry-specific methodology means it may not be a useful guide to understanding acquisitions in other industries, such as the tech sector.

Second, even if one assumes that the findings of Cunningham, et al., are correct and apply with equal force in the tech sector (as some official reports have), it remains unclear whether the 5.3–7.4% of mergers they describe warrant a departure from the status quo.

Antitrust enforcers operate under uncertainty. The critical policy question is thus whether this subset of anticompetitive deals can be identified ex-ante. If not, is there a heuristic that would enable enforcers to identify more of these anticompetitive deals without producing excessive false positives?

The authors focus on the effect that overlapping R&D pipelines have on project discontinuations. Acquired projects are continued 17.5% of the time in non-overlapping mergers, compared to 13.4% when pipelines overlap. The authors argue that this gap is evidence of killer acquisitions. But this argument misses the bigger picture: under the authors' own numbers and definition of a "killer acquisition," the vast majority of overlapping acquisitions are perfectly benign; prohibiting them would thus impose significant social costs.
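A quick back-of-the-envelope calculation, using only the continuation rates reported by Cunningham, Ederer, and Ma, shows how the paper's headline 23.4% figure relates to these numbers:

```python
# Continuation rates reported by Cunningham, Ederer, and Ma.
non_overlapping, overlapping = 0.175, 0.134

# The paper's 23.4% figure is the *relative* decline in continued
# development for overlapping acquisitions, not an absolute share of deals.
relative_decline = (non_overlapping - overlapping) / non_overlapping
assert round(relative_decline, 3) == 0.234

# In absolute terms, the gap is only about 4 percentage points of all
# overlapping acquisitions, consistent with the text's point that the
# vast majority of such deals proceed as benign acquisitions.
assert round(non_overlapping - overlapping, 3) == 0.041
```

Framed this way, the relative and absolute readings of the same data point lead to very different policy intuitions.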

Third, there are several problems with describing this kind of behavior as harmful. Indeed, Cunningham, et al., themselves acknowledge that such acquisitions could increase innovation by boosting the returns to innovation.

And even if one ignores incentives to innovate, product discontinuations can improve consumer welfare. This question ultimately boils down to identifying the counterfactual to a merger. As John Yun writes:

For instance, an acquisition that results in a discontinued product is not per se evidence of either consumer harm or benefit. The answer involves comparing the counterfactual world without the acquisition with the world with the acquisition. The comparison includes potential efficiencies that were gained from the acquisition, including integration of intellectual property, the reduction of transaction costs, economies of scope, and better allocation of skilled labor.

One of the reasons R&D project discontinuation may be beneficial is simply cost savings. R&D is expensive. Pharmaceutical firms spend up to 27.8% of their annual revenue on R&D. Developing a new drug has an estimated median cost of $985.3 million. Cost-cutting—notably as it concerns R&D—is thus a critical part of pharmaceutical (as well as tech) companies’ businesses. As a report by McKinsey concludes:

The recent boom in M&A in the pharma industry is partly the result of attempts to address short-term productivity challenges. An acquiring or merging company typically designs organization-wide integration programs to capture synergies, especially in costs. Such programs usually take up to three years to complete and deliver results.

Another report finds that:

Maximizing the efficiency of production labor and equipment is one important way top-quartile drugmakers break out of the pack. Their rates of operational-equipment effectiveness are more than twice those of bottom-quartile companies (Exhibit 1), and when we looked closely we found that processes account for two-thirds of the difference.

In short, pharmaceutical companies do not just compete along innovation-related parameters, though these are obviously important, but also on more traditional grounds such as cost-rationalization. Accordingly, as the above reports suggest, pharmaceutical mergers are often about applying an incumbent’s superior managerial efficiency to the acquired firm’s assets through operation of the market for corporate control.

This cost-cutting (and superior project selection) ultimately enables companies to offer lower prices, thereby benefiting consumers and increasing their incentives to invest in R&D in the first place by making successfully developed drugs more profitable.

In that sense, Henry Manne’s seminal work relating to mergers and the market for corporate control sheds at least as much light on pharmaceutical (and tech) mergers as the killer acquisitions literature. And yet, it is hardly ever mentioned in modern economic literature on this topic.

While Colleen Cunningham and her co-authors do not entirely ignore these considerations, as we discuss in our paper, their arguments for dismissing them are far from watertight.

A natural extension of the killer acquisitions work is to question whether mergers of this sort also take place in the tech industry. Interest in this question is notably driven by the central role that digital markets currently occupy in competition-policy discussion, but also by the significant number of startup acquisitions that take place in the tech industry. However, existing studies provide scant evidence that killer acquisitions are a common occurrence in these markets.

This is not surprising. Unlike in the pharmaceutical industry—where drugs need to go through a lengthy and visible regulatory pipeline before they can be sold—incumbents in digital industries will likely struggle to identify their closest rivals and prevent firms from rapidly pivoting to seize new commercial opportunities. As a result, the basic conditions for killer acquisitions to take place (i.e., firms knowing they are in a position to share monopoly profits) are less likely to be present; it also would be harder to design research methods to detect these mergers.

The empirical literature on killer acquisitions in the tech sector is still in its infancy. But, as things stand, no study directly examines whether killer acquisitions actually take place in digital industries (i.e., whether post-merger project discontinuations are more common in overlapping than non-overlapping tech mergers). This is notably the case for studies by Axel Gautier & Joe Lamesch, and Elena Argentesi and her co-authors. Instead, these studies merely show that product discontinuations are common after an acquisition by a big tech company.

To summarize, while studies of this sort might suggest that the clearance of certain mergers was not optimal, they are hardly a sufficient basis on which to argue that enforcement should be tightened.

The reason for this is simple. The fact that some anticompetitive mergers may have escaped scrutiny and/or condemnation is never, by itself, a sufficient basis to tighten rules. It is also necessary to factor in the administrative costs of increased enforcement, as well as the false positives to which it might give rise. As things stand, economic research on killer acquisitions in the tech sector does not warrant tougher antitrust enforcement, though it does show the need for further empirical research on the topic.

Conclusion

Many proposed merger-enforcement reforms risk throwing the baby out with the bathwater. Mergers are largely beneficial to society (here, here and here); anticompetitive ones are rare; and there is little way, at the margin, to tell good from bad. To put it mildly, there is a precious baby that needs to be preserved and relatively little bathwater to throw out.

Consider the pharmaceutical industry, the fulcrum of current policy debates. It is not hard to point to pharmaceutical mergers (or long-term agreements) that have revolutionized patient outcomes. Most recently, Pfizer and BioNTech's successful effort to market an mRNA vaccine against COVID-19 offers a case in point.

The deal struck by both firms could naïvely be construed as bearing hallmarks of a killer acquisition or an anticompetitive agreement (long-term agreements can easily fall into either of these categories). Pfizer was a powerful incumbent in the vaccine industry; BioNTech threatened to disrupt the industry with new technology; and the deal likely caused Pfizer to forgo some independent R&D efforts. And yet, it also led to the first approved COVID-19 vaccine and groundbreaking advances in vaccine technology.

Of course, the counterfactual is unclear, and the market might be more competitive absent the deal, just as there might be only one approved mRNA vaccine today instead of two—we simply do not know. More importantly, this counterfactual was even less knowable at the time of the deal. And much the same could be said about countless other pharmaceutical mergers.

The key policy question is how authorities should handle this uncertainty. Critics of the status quo argue that current rules and thresholds leave certain anticompetitive deals unchallenged. But these calls for tougher enforcement fail to satisfy the requirements of the error-cost framework. Critics have so far failed to show that, on balance, mergers harm social welfare—even overlapping ones or mergers between potential competitors—just as they have yet to suggest alternative institutional arrangements that would improve social welfare.

In other words, they mistakenly analyze purported false negatives of merger-enforcement regimes in isolation. In doing so, they ignore how measures that aim to reduce such judicial errors may lead to other errors, as well as higher enforcement costs. In short, they paint a world where policy decisions involve facile tradeoffs, and this undermines their policy recommendations.

Given these significant limitations, this body of academic research should be met with an appropriate degree of caution. For all the criticism it has faced, the current merger-review system is mostly a resounding success. It is administrable, predictable, and timely. Yet it also eliminates the vast majority of judicial errors: even its critics concede that false negatives make up only a tiny fraction of decisions. Policymakers must decide whether the benefits of catching the very few arguably anticompetitive mergers that currently escape challenge outweigh the significant costs required to achieve this goal. There is currently little evidence to suggest that this is, indeed, the case.