
Biden administration enforcers at the U.S. Justice Department (DOJ) and the Federal Trade Commission (FTC) have prioritized labor-market monopsony issues for antitrust scrutiny (see, for example, here and here). This heightened interest comes in light of claims that labor markets are highly concentrated and are rife with largely neglected competitive problems that depress workers’ income. Such concerns are reflected in a March 2022 U.S. Treasury Department report on “The State of Labor Market Competition.”

Monopsony is the “flip side” of monopoly, and U.S. antitrust law clearly condemns agreements designed to undermine the “buyer side” competitive process (see, for example, this U.S. government submission to the OECD). But is a special new emphasis on labor markets warranted, given that antitrust enforcers ideally should seek to allocate their scarce resources to the most pressing (highest-valued) areas of competitive concern?

A May 2022 Information Technology and Innovation Foundation (ITIF) study from ITIF Associate Director (and former FTC economist) Julie Carlson indicates that the degree of emphasis the administration’s antitrust enforcers are placing on labor issues may be misplaced. In particular, the ITIF study debunks the Treasury report’s findings of high levels of labor-market concentration and the claim that workers face a “decrease in wages [due to labor market power] at roughly 20 percent relative to the level in a fully competitive market.” Furthermore, while noting the importance of DOJ antitrust prosecutions of hard-core anticompetitive agreements among employers (wage-fixing and no-poach agreements), the ITIF report emphasizes policy reforms unrelated to antitrust as key to improving workers’ lot.

Key takeaways from the ITIF report include:

  • Labor markets are not highly concentrated. Local labor-market concentration has been declining for decades, with the most concentrated markets seeing the largest declines.
  • Labor-market power is largely due to labor-market frictions, such as worker preferences, search costs, bargaining, and occupational licensing, rather than concentration.
  • As a case study, changes in concentration in the labor market for nurses have little to no effect on wages, whereas nurses’ preferences over job location are estimated to lead to wage markdowns of 50%.
  • Firms are not profiting at the expense of workers. The decline in the labor share of national income is primarily due to rising home values, not increased labor-market concentration.
  • Policy reform should focus on reducing labor-market frictions and strengthening workers’ ability to collectively bargain. Policies targeting concentration are misguided and will be ineffective at improving outcomes for workers.

The ITIF report also throws cold water on the notion of emphasizing labor-market issues in merger reviews, which was teed up in the January 2022 joint DOJ/FTC request for information (RFI) on merger enforcement. The ITIF report explains:

Introducing the evaluation of labor market effects unnecessarily complicates merger review and needlessly ties up agency resources at a time when the agencies are facing severe resource constraints. As discussed previously, labor markets are not highly concentrated, nor is labor market concentration a key factor driving down wages.

A proposed merger that is reportable to the agencies under the Hart-Scott-Rodino Act and likely to have an anticompetitive effect in a relevant labor market is also likely to have an anticompetitive effect in a relevant product market. … Evaluating mergers for labor market effects is unnecessary and costly for both firms and the agencies. The current merger guidelines adequately address competition concerns in input markets, so any contemplated revision to the guidelines should not incorporate a “framework to analyze mergers that may lessen competition in labor markets.” [Citation to Request for Information on Merger Enforcement omitted.]

In sum, the administration’s recent pronouncements about highly anticompetitive labor markets that have resulted in severely underpaid workers—used as the basis to justify heightened antitrust emphasis on labor issues—appear to be based on false premises. As such, they are a species of government misinformation, which, if acted upon, threatens to misallocate scarce enforcement resources and thereby undermine efficient government antitrust enforcement. What’s more, an unnecessary overemphasis on labor-market antitrust questions could impose unwarranted investigative costs on companies and chill potentially efficient business transactions. (Think of a proposed merger that would reduce production costs and benefit consumers but result in a workforce reduction by the merged firm.)

Perhaps the administration will take heed of the ITIF report and rethink its plans to ramp up labor-market antitrust-enforcement initiatives. Promoting pro-market regulatory reforms that benefit both labor and consumers (for instance, paring back excessive occupational-licensing restrictions) would be a welfare-superior and cheaper alternative to misbegotten antitrust actions.

A raft of progressive scholars in recent years have argued that antitrust law remains blind to the emergence of so-called “attention markets,” in which firms compete by converting user attention into advertising revenue. This blindness, the scholars argue, has caused antitrust enforcers to clear harmful mergers in these industries.

It certainly appears the argument is gaining increased attention, for lack of a better word, with sympathetic policymakers. In a recent call for comments regarding their joint merger guidelines, the U.S. Justice Department (DOJ) and Federal Trade Commission (FTC) ask:

How should the guidelines analyze mergers involving competition for attention? How should relevant markets be defined? What types of harms should the guidelines consider?

Unfortunately, the recent scholarly inquiries into attention markets remain inadequate for policymaking purposes. For example, while many progressives focus specifically on antitrust authorities’ decisions to clear Facebook’s 2012 acquisition of Instagram and 2014 purchase of WhatsApp, they largely tend to ignore the competitive constraints Facebook now faces from TikTok (here and here).

When firms that compete for attention seek to merge, authorities need to infer whether the deal will lead to an “attention monopoly” (if the merging firms are the only, or primary, market competitors for some consumers’ attention) or whether other “attention goods” sufficiently constrain the merged entity. Put another way, the challenge is not just in determining which firms compete for attention, but in evaluating how strongly each constrains the others.
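How might the strength of these constraints be evaluated? In conventional merger analysis, the question is often made tractable with diversion ratios. The formalization below is a standard textbook construct offered purely by way of illustration; it is not drawn from the attention-market literature under discussion, and substituting "time spent" for unit sales is our assumption:

$$D_{ij} = \frac{\Delta q_j}{\left|\Delta q_i\right|},$$

where $\Delta q_i < 0$ is the attention (e.g., time spent) lost by platform $i$ after it slightly degrades its offering, and $\Delta q_j$ is the portion of that attention recaptured by rival $j$. A high $D_{ij}$ would mark $j$ as a close competitive constraint on $i$; a low value would suggest only weak substitution. The difficulty, as discussed below, is that no accepted method exists for measuring these quantities on the "free" side of attention platforms.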

As this piece explains, recent attention-market scholarship fails to offer objective, let alone quantifiable, criteria that might enable authorities to identify firms that are unique competitors for user attention. These limitations should counsel policymakers to proceed with increased rigor when they analyze anticompetitive effects.

The Shaky Foundations of Attention Markets Theory

Advocates for more vigorous antitrust intervention have raised (at least) three normative arguments that pertain to attention markets and merger enforcement.

  • First, because they compete for attention, firms may be more competitively related than they seem at first sight. It is sometimes said that these firms are nascent competitors.
  • Second, the scholars argue that all firms competing for attention should not automatically be included in the same relevant market.
  • Finally, scholars argue that enforcers should adopt policy tools to measure market power in these attention markets—e.g., by applying a SSNIC test (“small but significant non-transitory increase in cost”), rather than a SSNIP test (“small but significant non-transitory increase in price”).

There are some contradictions among these three claims. On the one hand, proponents advocate adopting a broad notion of competition for attention, which would ensure that firms are seen as competitively related and thus boost the prospects that antitrust interventions targeting them will be successful. When the shoe is on the other foot, however, proponents fail to follow the logic they have sketched out to its natural conclusion; that is to say, they underplay the competitive constraints that are necessarily imposed by wider-ranging targets for consumer attention. In other words, progressive scholars are keen to ensure the concept is not mobilized to draw broader market definitions than is currently the case:

This “massive market” narrative rests on an obvious fallacy. Proponents argue that the relevant market includes “all substitutable sources of attention depletion,” so the market is “enormous.”

Faced with this apparent contradiction, scholars retort that the circle can be squared by deploying new analytical tools that measure competition for attention, such as the so-called SSNIC test. But do these tools actually resolve the contradiction? It would appear, instead, that they merely enable enforcers to selectively mobilize the attention-market concept in ways that fit their preferences. Consider the following description of the SSNIC test, by John Newman:

But if the focus is on the zero-price barter exchange, the SSNIP test requires modification. In such cases, the “SSNIC” (Small but Significant and Non-transitory Increase in Cost) test can replace the SSNIP. Instead of asking whether a hypothetical monopolist would increase prices, the analyst should ask whether the monopolist would likely increase attention costs. The relevant cost increases can take the form of more time or space being devoted to advertisements, or the imposition of more distracting advertisements. Alternatively, one might ask whether the hypothetical monopolist would likely impose an “SSNDQ” (Small but Significant and Non-Transitory Decrease in Quality). The latter framing should generally be avoided, however, for reasons discussed below in the context of anticompetitive effects. Regardless of framing, however, the core question is what would happen if the ratio between desired content to advertising load were to shift.

Tim Wu makes roughly the same argument:

The A-SSNIP would posit a hypothetical monopolist who adds a 5-second advertisement before the mobile map, and leaves it there for a year. If consumers accepted the delay, instead of switching to streaming video or other attentional options, then the market is correctly defined and calculation of market shares would be in order.

The key problem is this: consumer switching among platforms is consistent both with competition and with monopoly power. In fact, consumers are more likely to switch to other goods when they are faced with a monopoly. Perhaps more importantly, consumers can and do switch to a whole range of idiosyncratic goods. Absent some quantifiable metric, it is simply impossible to tell which of these alternatives are significant competitors.

None of this is new, of course. Antitrust scholars have spent decades wrestling with similar issues in connection with the price-related SSNIP test. The upshot of those debates is that the SSNIP test does not measure whether price increases cause users to switch. Instead, it examines whether firms can profitably raise prices above the competitive baseline. Properly understood, this nuance renders proposed SSNIC and SSNDQ tests (“small but significant non-transitory decrease in quality”) unworkable.
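To make the profitability point concrete, consider critical-loss analysis, a standard way of operationalizing the SSNIP test (a textbook illustration on our part, not a construct taken from the attention-market proposals discussed here). A hypothetical monopolist's price increase of $X$ percent above the baseline is profitable only if the fraction of sales actually lost falls short of the critical loss:

$$CL = \frac{X}{X + M},$$

where $M$ is the percentage margin earned at the baseline price (both $X$ and $M$ expressed relative to that price). Some consumer switching is thus entirely consistent with a properly defined market; the test turns on whether the higher margin earned on retained sales outweighs the sales lost, measured against a competitive baseline.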

First and foremost, proponents wrongly presume to know how firms would choose to exercise their market power, rendering the resulting tests unfit for policymaking purposes. This mistake largely stems from the conflation of price levels and price structures in two-sided markets. In a two-sided market, the price level refers to the cumulative price charged to both sides of a platform. Conversely, the price structure refers to the allocation of prices among users on both sides of a platform (i.e., how much users on each side contribute to the costs of the platform). This is important because, as Jean-Charles Rochet and Jean Tirole show in their path-breaking work (for which Tirole was later awarded the Nobel Prize), changes to both the price level and the price structure affect economic output in two-sided markets.
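A stylized rendering of the canonical two-sided model helps fix ideas (our simplification for exposition, not a formula advanced by either side of this debate). Let $p^U$ and $p^A$ denote the prices charged to users and advertisers. The price level is the sum $P = p^U + p^A$, while the price structure is how $P$ is split between the two sides. In one canonical Rochet-Tirole formulation, platform volume takes the form

$$Q = D^U(p^U)\, D^A(p^A),$$

so even holding the level $P$ constant, reallocating the burden between $p^U$ and $p^A$ changes output. This is why a test that perturbs only one side of the platform (say, by raising ad loads on the user side) cannot be read as a clean analogue of a single-sided price increase.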

This has powerful ramifications for antitrust policy in attention markets. To be analytically useful, SSNIC and SSNDQ tests would have to alter the price level while holding the price structure equal. This is the opposite of what attention-market theory advocates are calling for. Indeed, increasing ad loads or decreasing the quality of services provided by a platform, while holding ad prices constant, evidently alters platforms’ chosen price structure.

This matters. Even if the proposed tests were properly implemented (which would be difficult: it is unclear what a 5% quality degradation would look like), the tests would likely lead to false negatives, as they force firms to depart from their chosen (and, thus, presumably profit-maximizing) price structure/price level combinations.

Consider the following illustration: to a first approximation, increasing the quantity of ads served on YouTube would presumably decrease Google’s revenues, as doing so would simultaneously expand output in the ad market, thereby depressing ad prices, and degrade the user experience, driving viewers away (note that the test becomes even more absurd if ad revenues are held constant). In short, scholars fail to recognize that the consumer side of these markets is intrinsically related to the ad side. Each side affects the other in ways that prevent policymakers from using single-sided ad-load increases or quality decreases as an independent variable.

This leads to a second, more fundamental, flaw. To be analytically useful, these increased ad loads and quality deteriorations would have to be applied from the competitive baseline. Unfortunately, it is not obvious what this baseline looks like in two-sided markets.

Economic theory tells us that, in regular markets, goods are sold at marginal cost under perfect competition. However, there is no such shortcut in two-sided markets. As David Evans and Richard Schmalensee aptly summarize:

An increase in marginal cost on one side does not necessarily result in an increase in price on that side relative to price on the other. More generally, the relationship between price and cost is complex, and the simple formulas that have been derived for single-sided markets do not apply.

In other words, while economic theory suggests perfect competition among multi-sided platforms should result in zero economic profits, it does not say what the allocation of prices will look like in this scenario. There is thus no clearly defined competitive baseline upon which to apply increased ad loads or quality degradations. And this makes the SSNIC and SSNDQ tests unsuitable.

In short, the theoretical foundations necessary to apply the equivalent of a SSNIP test on the “free” side of two-sided platforms are largely absent (or exceedingly hard to apply in practice). Calls to implement SSNIC and SSNDQ tests thus greatly overestimate the current state of the art, as well as decision-makers’ ability to solve intractable economic conundrums. The upshot is that, while proposals to apply the SSNIP test to attention markets may have the trappings of economic rigor, the resemblance is superficial. As things stand, these tests fail to ascertain whether given firms are in competition, and in what market.

The Bait and Switch: Qualitative Indicia

These problems with the new quantitative metrics likely explain why proponents of tougher enforcement in attention markets often fall back upon qualitative indicia to resolve market-definition issues. As John Newman writes:

Courts, including the U.S. Supreme Court, have long employed practical indicia as a flexible, workable means of defining relevant markets. This approach considers real-world factors: products’ functional characteristics, the presence or absence of substantial price differences between products, whether companies strategically consider and respond to each other’s competitive conduct, and evidence that industry participants or analysts themselves identify a grouping of activity as a discrete sphere of competition. … The SSNIC test may sometimes be massaged enough to work in attention markets, but practical indicia will often—perhaps usually—be the preferable method.

Unfortunately, far from resolving the problems associated with measuring market power in digital markets (and of defining relevant markets in antitrust proceedings), this proposed solution would merely focus investigations on subjective and discretionary factors.

This can be easily understood by looking at the FTC’s Facebook complaint regarding its purchases of WhatsApp and Instagram. The complaint argues that Facebook—a “social networking service,” in the eyes of the FTC—was not interchangeable with either mobile-messaging services or online-video services. To support this conclusion, it cites a series of superficial differences. For instance, the FTC argues that online-video services “are not used primarily to communicate with friends, family, and other personal connections,” while mobile-messaging services “do not feature a shared social space in which users can interact, and do not rely upon a social graph that supports users in making connections and sharing experiences with friends and family.”

This is a poor way to delineate relevant markets. It wrongly portrays competitive constraints as a binary question, rather than a matter of degree. Pointing to the functional differences that exist among rival services mostly fails to resolve this question of degree. This shortcoming also likely explains why advocates of tougher enforcement have often decried the use of qualitative indicia when the shoe is on the other foot—e.g., when authorities concluded that Facebook did not, in fact, compete with Instagram because their services were functionally different.

A second, and related, problem with the use of qualitative indicia is that they are, almost by definition, arbitrary. Take two services that may or may not be competitors, such as Instagram and TikTok. The two share some similarities, as well as many differences. For instance, while both services enable users to share and engage with video content, they differ significantly in the way this content is displayed. Unfortunately, absent quantitative evidence, it is simply impossible to tell whether, and to what extent, the similarities outweigh the differences. 

There is significant risk that qualitative indicia may lead to arbitrary enforcement, where markets are artificially narrowed by pointing to superficial differences among firms, and where competitive constraints are overemphasized by pointing to consumer switching. 

The Way Forward

The difficulties discussed above should serve as a good reminder that market definition is but a means to an end.

As William Landes, Richard Posner, and Louis Kaplow have all observed (here and here), market definition is merely a proxy for market power, which in turn enables policymakers to infer whether consumer harm (the underlying question to be answered) is likely in a given case.

Given the difficulties inherent in properly defining markets, policymakers should redouble their efforts to precisely measure both potential barriers to entry (the obstacles that may lead to market power) and anticompetitive effects (the potentially undesirable effects of market power), under a case-by-case analysis that looks at both sides of a platform.

Unfortunately, this is not how the FTC has proceeded in recent cases. The FTC’s Facebook complaint, to cite but one example, merely assumes the existence of network effects (a potential barrier to entry) with no effort to quantify their magnitude. Likewise, the agency’s assessment of consumer harm is just two pages long and includes superficial conclusions that appear plucked from thin air:

The benefits to users of additional competition include some or all of the following: additional innovation … ; quality improvements … ; and/or consumer choice … . In addition, by monopolizing the U.S. market for personal social networking, Facebook also harmed, and continues to harm, competition for the sale of advertising in the United States.

Not one of these assertions is based on anything that could remotely be construed as empirical or even anecdotal evidence. Instead, the FTC’s claims are presented as self-evident. Given the difficulties surrounding market definition in digital markets, this superficial analysis of anticompetitive harm is simply untenable.

In short, discussions around attention markets emphasize the important role of case-by-case analysis underpinned by the consumer welfare standard. Indeed, the fact that some of antitrust enforcement’s usual benchmarks are unreliable in digital markets reinforces the conclusion that an empirically grounded analysis of barriers to entry and actual anticompetitive effects must remain the cornerstones of sound antitrust policy. Or, put differently, uncertainty surrounding certain aspects of a case is no excuse for arbitrary speculation. Instead, authorities must meet such uncertainty with an even more vigilant commitment to thoroughness.

Responding to a new draft policy statement from the U.S. Patent & Trademark Office (USPTO), the National Institute of Standards and Technology (NIST), and the U.S. Department of Justice, Antitrust Division (DOJ) regarding remedies for infringement of standard-essential patents (SEPs), a group of 19 distinguished law, economics, and business scholars convened by the International Center for Law & Economics (ICLE) submitted comments arguing that the guidance would improperly tilt the balance of power between implementers and inventors, and could undermine incentives for innovation.

As explained in the scholars’ comments, the draft policy statement misunderstands many aspects of patent and antitrust policy. The draft notably underestimates the value of injunctions and the circumstances in which they are a necessary remedy. It also overlooks important features of the standardization process that make opportunistic behavior much less likely than policymakers typically recognize. These points are discussed in even more detail in previous work by ICLE scholars, including here and here.

These first-order considerations are only the tip of the iceberg, however. Patent policy has a huge range of second-order effects that the draft policy statement and policymakers more generally tend to overlook. Indeed, reducing patent protection has more detrimental effects on economic welfare than the conventional wisdom typically assumes. 

The comments highlight three important areas affected by SEP policy that would be undermined by the draft statement. 

  1. First, SEPs are established through an industry-wide, collaborative process that develops and protects innovations considered essential to an industry’s core functioning. This process enables firms to specialize in various functions throughout an industry, rather than vertically integrate to ensure compatibility. 
  2. Second, strong patent protection, especially of SEPs, boosts startup creation via a broader set of mechanisms than is typically recognized. 
  3. Finally, strong SEP protection is essential to safeguard U.S. technology leadership and sovereignty. 

As explained in the scholars’ comments, the draft policy statement would be detrimental on all three of these dimensions. 

To be clear, the comments do not argue that addressing these secondary effects should be a central focus of patent and antitrust policy. Instead, the point is that policymakers must deal with a far more complex set of issues than is commonly recognized; the effects of SEP policy aren’t limited to the allocation of rents among inventors and implementers (as they are sometimes framed in policy debates). Accordingly, policymakers should proceed with caution and resist the temptation to alter by fiat terms that have emerged through careful negotiation among inventors and implementers, and which have been governed for centuries by the common law of contract. 

Collaborative Standard-Setting and Specialization as Substitutes for Proprietary Standards and Vertical Integration

Intellectual property in general—and patents, more specifically—is often described as a means to increase the monetary returns from the creation and distribution of innovations. While this is undeniably the case, this framing overlooks the essential role that IP also plays in promoting specialization throughout the economy.

As Ronald Coase famously showed in his Nobel-winning work, firms must constantly decide whether to perform functions in-house (by vertically integrating), or contract them out to third parties (via the market mechanism). Coase concluded that these decisions hinge on whether the transaction costs associated with the market mechanism outweigh the cost of organizing production internally. Decades later, Oliver Williamson added a key finding to this insight. He found that among the most important transaction costs that firms encounter are those that stem from incomplete contracts and the scope for opportunistic behavior they entail.

This leads to a simple rule of thumb: as the scope for opportunistic behavior increases, firms are less likely to use the market mechanism and will instead perform tasks in-house, leading to increased vertical integration.

IP plays a key role in this process. Patents drastically reduce the transaction costs associated with the transfer of knowledge. This gives firms the opportunity to develop innovations collaboratively and without fear that trading partners might opportunistically appropriate their inventions. In turn, this leads to increased specialization. As Robert Merges observes:

Patents facilitate arms-length trade of a technology-intensive input, leading to entry and specialization.

More specifically, it is worth noting that the development and commercialization of inventions can lead to two important sources of opportunistic behavior: patent holdup and patent holdout. As the assembled scholars explain in their comments, while patent holdup has drawn the lion’s share of policymaker attention, empirical and anecdotal evidence suggest that holdout is the more salient problem.

Policies that reduce these costs—especially patent holdout—in a cost-effective manner are worthwhile, with the immediate result that technologies are more widely distributed than would otherwise be the case. Inventors also see more intense and extensive incentives to produce those technologies in the first place.

The Importance of Intellectual Property Rights for Startup Activity

Strong patent rights are essential to monetize innovation, thus enabling new firms to gain a foothold in the marketplace. As the scholars’ comments explain, this is even more true for startup companies. There are three main reasons for this: 

  1. Patent rights protected by injunctions prevent established companies from simply copying innovative startups, with the expectation that they will be able to afford court-set royalties; 
  2. Patent rights can be the basis for securitization, facilitating access to startup funding; and
  3. Patent rights drive venture capital (VC) investment.

While point (1) is widely acknowledged, many fail to recognize that it is particularly important for startup companies. There is abundant literature on firms’ appropriability mechanisms (these are essentially the strategies firms employ to prevent rivals from copying their inventions). The literature tells us that patent protection is far from the only strategy firms use to protect their inventions (see, e.g., here, here, and here).

The alternative appropriability mechanisms identified by these studies tend to be easier to implement for well-established firms. For instance, many firms earn returns on their inventions by incorporating them into physical products that cannot be reverse engineered. This is much easier for firms that already have a large industry presence and advanced manufacturing capabilities.  In contrast, startup companies—almost by definition—must outsource production.

Second, property rights could drive startup activity through the collateralization of IP. By offering security interests in patents, trademarks, and copyrights, startups with few or no tangible assets can obtain funding without surrendering significant equity. As Gaétan de Rassenfosse puts it:

SMEs can leverage their IP to facilitate R&D financing…. [P]atents materialize the value of knowledge stock: they codify the knowledge and make it tradable, such that they can be used as collaterals. Recent theoretical evidence by Amable et al. (2010) suggests that a systematic use of patents as collateral would allow a high growth rate of innovations despite financial constraints.

Finally, there is reason to believe intellectual-property protection is an important driver of venture capital activity. Beyond simply enabling firms to earn returns on their investments, patents might signal to potential investors that a company is successful and/or valuable. Empirical research by Hsu and Ziedonis, for instance, supports this hypothesis:

[W]e find a statistically significant and economically large effect of patent filings on investor estimates of start-up value…. A doubling in the patent application stock of a new venture [in] this sector is associated with a 28 percent increase in valuation, representing an upward funding-round adjustment of approximately $16.8 million for the average start-up in our sample.

In short, intellectual property can stimulate startup activity through various mechanisms. There is thus good reason to believe that, at the margin, weakening patent protection will make it harder for entrepreneurs to embark on new business ventures.

The Role of Strong SEP Rights in Guarding Against China’s ‘Cyber Great Power’ Ambitions 

The United States, due in large measure to its strong intellectual-property protections, is a nation of innovators, and its production of IP is one of its most important comparative advantages. 

IP and its legal protections become even more important, however, when dealing with international jurisdictions, like China, that don’t offer similar levels of legal protection. When it becomes harder for patent holders to obtain injunctions, licensees and implementers gain the advantage in the short term, because they are able to use patented technology without having to engage in negotiations to pay the full market price.

In the case of many SEPs—particularly those in the telecommunications sector—a great many patent holders are U.S.-based, while the lion’s share of implementers are Chinese. The anti-injunction policy espoused in the draft policy statement thus amounts to a subsidy to Chinese infringers of U.S. technology.

At the same time, China routinely undermines U.S. intellectual property protections through its industrial policy. The government’s stated goal is to promote “fair and reasonable” international rules, but it is clear that China stretches its power over intellectual property around the world by granting “anti-suit injunctions” on behalf of Chinese smartphone makers, designed to curtail enforcement of foreign companies’ patent rights.

This is part of the Chinese government’s larger approach to industrial policy, which seeks to expand Chinese power in international trade negotiations and in global standards bodies. As one Chinese Communist Party official put it:

Standards are the commanding heights, the right to speak, and the right to control. Therefore, the one who obtains the standards gains the world.

Insufficient protections for intellectual property will hasten China’s objective of dominating collaborative standard development in the medium to long term. Simultaneously, this will engender a switch to greater reliance on proprietary, closed standards rather than collaborative, open standards. These harmful consequences are magnified in the context of the global technology landscape, and in light of China’s strategic effort to shape international technology standards. Chinese companies, directed by their government authorities, will gain significant control of the technologies that will underpin tomorrow’s digital goods and services.

The scholars convened by ICLE were not alone in voicing these fears. David Teece (also a signatory to the ICLE-convened comments), for example, surmises in his comments that: 

The US government, in reviewing competition policy issues that might impact standards, therefore needs to be aware that the issues at hand have tremendous geopolitical consequences and cannot be looked at in isolation…. Success in this regard will promote competition and is our best chance to maintain technological leadership—and, along with it, long-term economic growth and consumer welfare and national security.

Similarly, comments from the Center for Strategic and International Studies (signed by, among others, former USPTO Director Andrei Iancu, former NIST Director Walter Copan, and former Deputy Secretary of Defense John Hamre) argue that the draft policy statement would benefit Chinese firms at U.S. firms’ expense:

What is more, the largest short-term and long-term beneficiaries of the 2021 Draft Policy Statement are firms based in China. Currently, China is the world’s largest consumer of SEP-based technology, so weakening protection of American owned patents directly benefits Chinese manufacturers. The unintended effect of the 2021 Draft Policy Statement will be to support Chinese efforts to dominate critical technology standards and other advanced technologies, such as 5G. Put simply, devaluing U.S. patents is akin to a subsidized tech transfer to China.

With Chinese authorities joining standardization bodies and increasingly claiming jurisdiction over F/RAND disputes, there should be careful reevaluation of the ways the draft policy statement would further weaken the United States’ comparative advantage in IP-dependent technological innovation. 

Conclusion

In short, weakening patent protection could have detrimental ramifications that are routinely overlooked by policymakers. These include increasing inventors’ incentives to vertically integrate rather than develop innovations collaboratively; reducing startup activity (especially when combined with antitrust enforcers’ newfound proclivity to challenge startup acquisitions); and eroding America’s global technology leadership, particularly with respect to China.

For these reasons (and others), the text of the draft policy statement should be reconsidered and either revised substantially to better reflect these concerns or withdrawn entirely. 

The signatories to the comments are:

  • Alden F. Abbott, Senior Research Fellow, Mercatus Center, George Mason University; former General Counsel, U.S. Federal Trade Commission
  • Jonathan Barnett, Torrey H. Webb Professor of Law, University of Southern California
  • Ronald A. Cass, Dean Emeritus, School of Law, Boston University; former Commissioner and Vice-Chairman, U.S. International Trade Commission
  • Giuseppe Colangelo, Jean Monnet Chair in European Innovation Policy and Associate Professor of Competition Law & Economics, University of Basilicata and LUISS (Italy)
  • Richard A. Epstein, Laurence A. Tisch Professor of Law, New York University
  • Bowman Heiden, Executive Director, Tusher Initiative at the Haas School of Business, University of California, Berkeley
  • Justin (Gus) Hurwitz, Professor of Law, University of Nebraska
  • Thomas A. Lambert, Wall Chair in Corporate Law and Governance, University of Missouri
  • Stan J. Liebowitz, Ashbel Smith Professor of Economics, University of Texas at Dallas
  • John E. Lopatka, A. Robert Noll Distinguished Professor of Law, Penn State University
  • Keith Mallinson, Founder and Managing Partner, WiseHarbor
  • Geoffrey A. Manne, President and Founder, International Center for Law & Economics
  • Adam Mossoff, Professor of Law, George Mason University
  • Kristen Osenga, Austin E. Owen Research Scholar and Professor of Law, University of Richmond
  • Vernon L. Smith, George L. Argyros Endowed Chair in Finance and Economics, Chapman University; Nobel Laureate in Economics (2002)
  • Daniel F. Spulber, Elinor Hobbs Distinguished Professor of International Business, Northwestern University
  • David J. Teece, Thomas W. Tusher Professor in Global Business, University of California, Berkeley
  • Joshua D. Wright, University Professor of Law, George Mason University; former Commissioner, U.S. Federal Trade Commission
  • John M. Yun, Associate Professor of Law, George Mason University; former Acting Deputy Assistant Director, Bureau of Economics, U.S. Federal Trade Commission

The Jan. 18 Request for Information on Merger Enforcement (RFI)—issued jointly by the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ)—sets forth 91 sets of questions (subsumed under 15 headings) that provide ample opportunity for public comment on a large range of topics.

Before chasing down individual analytic rabbit holes related to specific questions, it would be useful to reflect on the “big picture” policy concerns raised by this exercise (but not hinted at in the questions). Viewed from a broad policy perspective, the RFI initiative risks undermining the general respect that courts have accorded merger guidelines over the years, as well as disincentivizing economically beneficial business consolidations.

Policy concerns that flow from various features of the RFI, which could undermine effective merger enforcement, are highlighted below. These concerns counsel against producing overly detailed guidelines that adopt a merger-skeptical orientation.

The RFI Reflects the False Premise that Competition is Declining in the United States

The FTC press release that accompanied the RFI’s release made clear that a supposed weakening of competition under the current merger-guidelines regime is a key driver of the FTC and DOJ interest in new guidelines:

Today, the Federal Trade Commission (FTC) and the Justice Department’s Antitrust Division launched a joint public inquiry aimed at strengthening enforcement against illegal mergers. Recent evidence indicates that many industries across the economy are becoming more concentrated and less competitive – imperiling choice and economic gains for consumers, workers, entrepreneurs, and small businesses.

This premise is not supported by the facts. Based on a detailed literature review, Chapter 6 of the 2020 Economic Report of the President concluded that “the argument that the U.S. economy is suffering from insufficient competition is built on a weak empirical foundation and questionable assumptions.” More specifically, the 2020 Economic Report explained:

Research purporting to document a pattern of increasing concentration and increasing markups uses data on segments of the economy that are far too broad to offer any insights about competition, either in specific markets or in the economy at large. Where data do accurately identify issues of concentration or supercompetitive profits, additional analysis is needed to distinguish between alternative explanations, rather than equating these market indicators with harmful market power.

Soon-to-be-published quantitative research by Robert Kulick of NERA Economic Consulting and the American Enterprise Institute, presented at the Jan. 26 Mercatus Antitrust Forum, is consistent with the 2020 Economic Report’s findings. Kulick stressed that there was no general trend toward increasing industrial concentration in the U.S. economy from 2002 to 2017. In particular, industrial concentration has been declining since 2007; the Herfindahl–Hirschman index (HHI) for manufacturing has declined significantly since 2002; and the economywide four-firm concentration ratio (CR4) in 2017 was approximately the same as in 2002.

Even in industries where concentration may have risen, “the evidence does not support claims that concentration is persistent or harmful.” In that regard, Kulick’s research finds that higher-concentration industries tend to become less concentrated, while lower-concentration industries tend to become more concentrated over time; increases in industrial concentration are associated with economic growth and job creation, particularly for high-growth industries; and rising industrial concentration may be driven by increasing market competition.
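For reference, the two measures Kulick relies on, the HHI and the CR4, are simple functions of market shares. The sketch below, with made-up shares chosen purely for illustration, shows how each is computed; the threshold labels in the comments follow the conventional cutoffs in the 2010 Horizontal Merger Guidelines.

```python
# Illustrative computation of the concentration measures discussed above.
# The market shares are hypothetical and chosen only for demonstration.

def hhi(shares_pct):
    """Herfindahl-Hirschman index: the sum of squared market shares
    (expressed in percent), yielding a 0-10,000 scale."""
    return sum(s ** 2 for s in shares_pct)

def cr4(shares_pct):
    """Four-firm concentration ratio: the combined share of the
    four largest firms."""
    return sum(sorted(shares_pct, reverse=True)[:4])

shares = [30, 25, 15, 10, 10, 5, 5]  # hypothetical market, in percent

print(f"HHI = {hhi(shares)}")    # 900 + 625 + 225 + ... = 2000
print(f"CR4 = {cr4(shares)}%")   # 30 + 25 + 15 + 10 = 80%

# Under the 2010 Horizontal Merger Guidelines, an HHI below 1,500 is
# "unconcentrated," 1,500-2,500 is "moderately concentrated," and above
# 2,500 is "highly concentrated"; this hypothetical market (HHI = 2000)
# would fall in the moderate band.
```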

In short, the strongest justification for issuing new merger guidelines is based on false premises: an alleged decline in competition within the United States. Given this reality, the adoption of revised guidelines designed to “ratchet up” merger enforcement would appear highly questionable.

The RFI Strikes a Merger-Skeptical Tone Out of Touch with Modern Mainstream Antitrust Scholarship

The overall tone of the RFI reflects a skeptical view of the potential benefits of mergers. It ignores overarching beneficial aspects of mergers, which include reallocating scarce resources to higher-valued uses (through the market for corporate control) and realizing standard efficiencies of various sorts (including cost-based efficiencies and incentive effects, such as the elimination of double marginalization through vertical integration). Mergers also generate benefits by bringing together complementary assets and by generating synergies of various sorts, including the promotion of innovation and scaling up the fruits of research and development. (See here, for example.)

What’s more, as the Organisation for Economic Co-operation and Development (OECD) has explained, “[e]vidence suggests that vertical mergers are generally pro-competitive, as they are driven by efficiency-enhancing motives such as improving vertical co-ordination and realizing economies of scope.”

Given the manifold benefits of mergers in general, the negative and merger-skeptical tone of the RFI is regrettable. It not only ignores sound economics, but it is at odds with recent pronouncements by the FTC and DOJ. Notably, the 2010 DOJ-FTC Horizontal Merger Guidelines (issued by Obama administration enforcers) struck a neutral tone. Those guidelines recognized the duty to challenge anticompetitive mergers while noting the public interest in avoiding unnecessary interference with non-anticompetitive mergers (“[t]he Agencies seek to identify and challenge competitively harmful mergers while avoiding unnecessary interference with mergers that are either competitively beneficial or neutral”). The same neutral approach is found in the 2020 DOJ-FTC Vertical Merger Guidelines (“the Agencies use a consistent set of facts and assumptions to evaluate both the potential competitive harm from a vertical merger and the potential benefits to competition”).

The RFI, however, expresses no concern about unnecessary government interference, and strongly emphasizes the potential shortcomings of the existing guidelines in questioning whether they “adequately equip enforcers to identify and proscribe unlawful, anticompetitive mergers.” Merger-skepticism is also reflected throughout the RFI’s 91 sets of questions. A close reading reveals that they are generally phrased in ways that implicitly assume competitive problems or reject potential merger justifications.

For example, the questions addressing efficiencies, under RFI heading 14, cast efficiencies in a generally negative light. Thus, the RFI asks whether “the [existing] guidelines’ approach to efficiencies [is] consistent with the prevailing legal framework as enacted by Congress and interpreted by the courts,” citing the statement in FTC v. Procter & Gamble (1967) that “[p]ossible economies cannot be used as a defense to illegality.”

The view that antitrust disfavors mergers that enhance efficiencies (the “efficiencies offense”) has been roundly rejected by mainstream antitrust scholarship (see, for example, here, here, and here). It may be assumed that today’s Supreme Court (which has deemed consumer welfare to be the lodestone of antitrust enforcement since Reiter v. Sonotone (1979)) would give short shrift to an “efficiencies offense” justification for a merger challenge.

Another efficiencies-related question, under RFI heading 14.d, may in application fly in the face of sound market-oriented economics: “Where a merger is expected to generate cost savings via the elimination of ‘excess’ or ‘redundant’ capacity or workers, should the guidelines treat these savings as cognizable ‘efficiencies’?”

Consider a merger that generates synergies and thereby expands and/or raises the quality of goods and services produced with reduced capacity and fewer workers. This merger would allow these resources to be allocated to higher-valued uses elsewhere in the economy, yielding greater economic surplus for consumers and producers. But there is the risk that such a merger could be viewed unfavorably under new merger guidelines that were revised in light of this question. (Although heading 14.d includes a separate question regarding capacity reductions that have the potential to reduce supply resilience or product or service quality, it is not stated that this provision should be viewed as a limitation on the first sentence.)

The RFI’s discussion of topics other than efficiencies similarly sends the message that existing guidelines are too “pro-merger.” Thus, for example, under RFI heading 5 (“presumptions”), one finds the rhetorical question: “[d]o the [existing] guidelines adequately identify mergers that are presumptively unlawful under controlling case law?”

The RFI effectively answers its own question by citing the Philadelphia National Bank (1963) statement that “[w]ithout attempting to specify the smallest market share which would still be considered to threaten undue concentration, we are clear that 30% presents that threat.” This statement predates all of the merger guidelines and is out of step with the modern economic analysis of mergers, which the existing guidelines embody. It would, if taken seriously, threaten a huge number of proposed mergers that, until now, have not been subject to second-request review by the DOJ and FTC. As Judge Douglas Ginsburg and former Commissioner Joshua Wright have explained:

The practical effect of the PNB presumption is to shift the burden of proof from the plaintiff, where it rightfully resides, to the defendant, without requiring evidence – other than market shares – that the proposed merger is likely to harm competition. . . . The presumption ought to go the way of the agencies’ policy decision to drop reliance upon the discredited antitrust theories approved by the courts in such cases as Brown Shoe, Von’s Grocery, and Utah Pie. Otherwise, the agencies will ultimately have to deal with the tension between taking advantage of a favorable presumption in litigation and exerting a reformative influence on the direction of merger law.

By inviting support for PNB-style thinking, RFI heading 5’s lead question effectively rejects the economic effects-based analysis that has been central to agency merger analysis for decades. Guideline revisions that downplay effects in favor of mere concentration would likely be viewed askance by reviewing courts (and almost certainly would be rejected by the Supreme Court, as currently constituted, if the occasion arose).

These particularly striking examples are illustrative of the questioning tone regarding existing merger analysis that permeates the RFI.

New Merger Guidelines, if Issued, Should Not Incorporate the Multiplicity of Issues Embodied in the RFI

The 91 sets of questions in the RFI read, in large part, like a compendium of theoretical harms to the working of markets that might be associated with mergers. While these questions may be of general academic interest, and may shed some light on particular merger investigations, most of them should not be incorporated into guidelines.

As Justice Stephen Breyer has pointed out, antitrust is a legal regime that must account for administrative practicalities. Then-Judge Breyer described the nature of the problem in his 1983 Barry Wright opinion (affirming the dismissal of a Sherman Act Section 2 complaint based on “unreasonably low” prices):

[W]hile technical economic discussion helps to inform the antitrust laws, those laws cannot precisely replicate the economists’ (sometimes conflicting) views. For, unlike economics, law is an administrative system the effects of which depend upon the content of rules and precedents only as they are applied by judges and juries in courts and by lawyers advising their clients. Rules that seek to embody every economic complexity and qualification may well, through the vagaries of administration, prove counter-productive, undercutting the very economic ends they seek to serve.

It follows that any effort to include every theoretical merger-related concern in new merger guidelines would undercut their (presumed) overarching purpose, which is providing useful guidance to the private sector. All-inclusive “guidelines” in reality provide no guidance at all. Faced with a laundry list of possible problems that might prompt the FTC or DOJ to oppose a merger, private parties would face enormous uncertainty, which could deter them from proposing a large number of procompetitive, welfare-enhancing or welfare-neutral consolidations. This would “undercut the very economic ends” of promoting competition that is served by Section 7 enforcement.

Furthermore, all-inclusive merger guidelines could be seen by judges as undermining the rule of law (see here, for example). If DOJ and FTC were able to “pick and choose” at will from an enormously wide array of considerations to justify opposing a proposed merger, they could be seen as engaged in arbitrary enforcement, rather than in a careful weighing of evidence aimed at condemning only anticompetitive transactions. This would be at odds with the promise of fair and dispassionate enforcement found in the 2010 Horizontal Merger Guidelines, namely, to “seek to identify and challenge competitively harmful mergers while avoiding unnecessary interference with mergers that are either competitively beneficial or neutral.”

Up until now, federal courts have virtually always implicitly deferred to (and not questioned) the application of merger-guideline principles by the DOJ and FTC. The agencies have won or lost cases based on courts’ weighing of particular factual and economic evidence, not on whether guideline principles should have been applied by the enforcers.

One would expect courts to react very differently, however, to cases brought in light of ridiculously detailed “guidelines” that did not provide true guidance (particularly if they were heavy on competitive harm possibilities and discounted efficiencies). The agencies’ selective reliance on particular anticompetitive theories could be seen as exercises in arbitrary “pre-cooked” condemnations, not dispassionate enforcement. As such, the courts would tend to be far more inclined to reject (or accord far less deference to) the new guidelines in evaluating agency merger challenges. Even transactions that would have been particularly compelling candidates for condemnation under prior guidelines could be harder to challenge successfully, due to the taint of the new guidelines.

In short, the adoption of highly detailed guidelines that emphasize numerous theories of harm would likely undermine the effectiveness of DOJ and FTC merger enforcement, the precise opposite of what the agencies would have intended.

New Merger Guidelines, if Issued, Should Avoid Relying on Outdated Case Law and Novel Section 7 Theories, and Should Give Due Credit to Economic Efficiencies

The DOJ and FTC could, of course, acknowledge the problem of administrability and issue more straightforward guideline revisions, of comparable length and detail to prior guidelines. If they choose to do so, they would be well-advised to eschew relying on dated precedents and novel Section 7 theories. They should also give due credit to efficiencies. Seemingly biased guidelines would undermine merger enforcement, not strengthen it.

As discussed above, the RFI’s implicitly favorable references to Philadelphia National Bank and Procter & Gamble are at odds with contemporary economics-based antitrust thinking, which has been accepted by the federal courts. The favorable treatment of those antediluvian holdings, and Brown Shoe Co. v. United States (1962) (another horribly dated case cited multiple times in the RFI), would do much to discredit new guidelines.

In that regard, the suggestion in RFI heading 1 that existing merger guidelines may not “faithfully track the statutory text, legislative history, and established case law around merger enforcement” touts the Brown Shoe and PNB concerns with a “trend toward concentration” and “the danger of subverting congressional intent by permitting a too-broad economic investigation.”

New guidelines that focus on (or even give lip service to) a “trend” toward concentration and eschew overly detailed economic analyses (as opposed, perhaps, to purely concentration-based negative rules of thumb?) would predictably come in for judicial scorn as economically unfounded. Such references would do as much (if not more) to ensure judicial rejection of enforcement-agency guidelines as endless lists of theoretically possible sources of competitive harm, discussed previously.

Of particular concern are those references that implicitly reject the need to consider efficiencies, which is key to modern enlightened merger evaluations. It is ludicrous to believe that a majority of the current Supreme Court would have a merger-analysis epiphany and decide that the RFI’s preferred interventionist reading of Section 7 statutory language and legislative history trumps decades of economically centered consumer-welfare scholarship and agency guidelines.

Herbert Hovenkamp, author of the leading American antitrust treatise and a scholar who has been cited countless times by the Supreme Court, recently put it well (in an article coauthored with Carl Shapiro):

When the FTC investigates vertical and horizontal mergers will it now take the position that efficiencies are irrelevant, even if they are proven? If so, the FTC will face embarrassing losses in court.

Reviewing courts would no doubt take heed of this statement in assessing any future merger guidelines that rely on dated and discredited cases or that minimize efficiencies.

New Guidelines, if Issued, Should Give Due Credit to Efficiencies

Heading 14 of the RFI—listing seven sets of questions that deal with efficiencies—is in line with the document’s implicitly negative portrayal of mergers. The heading begins inauspiciously, with a question that cites Procter & Gamble in suggesting that the current guidelines’ approach to efficiencies is “[in]consistent with the prevailing legal framework as enacted by Congress and interpreted by the courts.” As explained above, such an anti-efficiencies reference would be viewed askance by most, if not all, reviewing judges.

Other queries in heading 14 also view efficiencies as problematic. They suggest that efficiency claims should be treated negatively because such claims are not always realized after the fact. But merger activity is a private-sector search process, and the inability to predict ex post effects with perfect accuracy is an inevitable part of market activity. Using such a natural aspect of markets as an excuse to ignore efficiencies would prevent many economically desirable consolidations from being achieved.

Furthermore, the suggestion under heading 14 that parties should have to show with certainty that cognizable efficiencies could not have been achieved through alternative means asks the impossible. Theoreticians may be able to dream up alternative means by which efficiencies might have been achieved (say, through convoluted contracts), but such constructs may not be practical in real-world settings. Requiring businesses to follow dubious theoretical approaches to achieve legitimate business ends, rather than allowing them to enter into arrangements they favor that appear efficient, would manifest inappropriate government interference in markets. (It would be just another example of the “pretense of knowledge” that Friedrich Hayek brilliantly described in his 1974 Nobel Prize lecture.)

Other questions under heading 14 raise concerns about the lack of discussion of possible “inefficiencies” in current guidelines, and speculate about possible losses of “product or service quality” due to otherwise efficient reductions in physical capacity and employment. Such theoretical musings offer little guidance to the private sector, and further cast in a negative light potential real resource savings.

Rather than incorporate the unhelpful theoretical efficiencies critiques under heading 14, the agencies should consider a more constructive approach to clarifying the evaluation of efficiencies in new guidelines. Such a clarification could be based on Commissioner Christine Wilson’s helpful discussion of merger efficiencies in recent writings (see, for example, here and here). Wilson has appropriately called for the symmetric treatment of both the potential harms and benefits arising from mergers, explaining that “the agencies readily credit harms but consistently approach potential benefits with extreme skepticism.”

She and Joshua Wright have also explained (see here, here, and here) that overly narrow product-market definitions may sometimes preclude consideration of substantial “out-of-market” efficiencies that arise from certain mergers. The consideration of offsetting “out-of-market” efficiencies that greatly outweigh competitive harms might warrant inclusion in new guidelines.

The FTC and DOJ could be heading for a merger-enforcement train wreck if they adopt new guidelines that incorporate the merger-skeptical tone and excruciating level of detail found in the RFI. This approach would yield a lengthy and uninformative laundry list of potential competitive problems that would allow the agencies to selectively pick competitive harm “stories” best adapted to oppose particular mergers, in tension with the rule of law.

Far from “strengthening” merger enforcement, such new guidelines would lead to economically harmful business uncertainty and would severely undermine judicial respect for the federal merger-enforcement process. The end result would be a “lose-lose” for businesses, for enforcers, and for the American economy.

Conclusion

If the agencies enact new guidelines, they should be relatively short and straightforward, designed to give private parties the clearest possible picture of general agency enforcement intentions. In particular, new guidelines should:

  1. Eschew references to dated and discredited case law;
  2. Adopt a neutral tone that acknowledges the beneficial aspects of mergers;
  3. Recognize the duty to challenge anticompetitive mergers, while at the same time noting the public interest in avoiding unnecessary interference with non-anticompetitive mergers (consistent with the 2010 Horizontal Merger Guidelines); and
  4. Acknowledge the importance of efficiencies, treating them symmetrically with competitive harm and according appropriate weight to countervailing out-of-market efficiencies (a distinct improvement over existing enforcement policy).

Merger enforcement should continue to be based on fact-based case-specific evaluations, informed by sound economics. Populist nostrums that treat mergers with suspicion and that ignore their beneficial aspects should be rejected. Such ideas are at odds with current scholarly thinking and judicial analysis, and should be relegated to the scrap heap of outmoded and bad public policies.

The leading contribution to sound competition policy made by former Assistant U.S. Attorney General Makan Delrahim was his enunciation of the “New Madison Approach” to patent-antitrust enforcement—and, in particular, to the antitrust treatment of standard essential patent licensing (see, for example, here, here, and here). In short (citations omitted):

The New Madison Approach (“NMA”) advanced by former Assistant Attorney General for Antitrust Makan Delrahim is a simple analytical framework for understanding the interplay between patents and antitrust law arising out of standard setting. A key aspect of the NMA is its rejection of the application of antitrust law to the “hold-up” problem, whereby patent holders demand supposedly supra-competitive licensing fees to grant access to their patents that “read on” a standard – standard essential patents (“SEPs”). This scenario is associated with an SEP holder’s prior commitment to a standard setting organization (“SSO”), that is: if its patented technology is included in a proposed new standard, it will license its patents on fair, reasonable, and non-discriminatory (“FRAND”) terms. “Hold-up” is said to arise subsequently, when the SEP holder reneges on its FRAND commitment and demands that a technology implementer pay higher-than-FRAND licensing fees to access its SEPs.

The NMA has four basic premises that are aimed at ensuring that patent holders have adequate incentives to innovate and create welfare-enhancing new technologies, and that licensees have appropriate incentives to implement those technologies:

1. Hold-up is not an antitrust problem. Accordingly, an antitrust remedy is not the correct tool to resolve patent licensing disputes between SEP-holders and implementers of a standard.

2. SSOs should not allow collective actions by standard-implementers to disfavor patent holders in setting the terms of access to patents that cover a new standard.

3. A fundamental element of patent rights is the right to exclude. As such, SSOs and courts should be hesitant to restrict SEP holders’ right to exclude implementers from access to their patents, by, for example, seeking injunctions.

4. Unilateral and unconditional decisions not to license a patent should be per se legal.

Delrahim emphasizes that the threat of antitrust liability, specifically treble damages, distorts the incentives associated with good faith negotiations with SSOs over patent inclusion. Contract law, he goes on to note, is perfectly capable of providing an ex post solution to licensing disputes between SEP holders and implementers of a standard. Unlike antitrust law, a contract law framework allows all parties equal leverage in licensing negotiations.

As I have explained elsewhere, the NMA is best seen as a set of policies designed to spark dynamic economic growth:

[P]atented technology serves as a catalyst for the wealth-creating diffusion of innovation. This occurs through numerous commercialization methods; in the context of standardized technologies, the development of standards is a process of discovery. At each [SSO], the process of discussion and negotiation between engineers, businesspersons, and all other relevant stakeholders reveals the relative value of alternative technologies and tends to result in the best patents being integrated into a standard.

The NMA supports this process of discovery and implementation of the best patented technology born of the labors of the innovators who created it. As a result, the NMA ensures SEP valuations that allow SEP holders to obtain an appropriate return for the new economic surplus that results from the commercialization of standard-engendered innovations. It recognizes that dynamic economic growth is fostered through the incentivization of innovative activities backed by patents.

In sum, the NMA seeks to promote innovation by offering incentives for SEP-driven technological improvements. As such, it rejects as ill-founded prior Federal Trade Commission (FTC) litigation settlements and Obama-era U.S. Justice Department (DOJ) Antitrust Division policy statements that artificially favored implementer licensees’ interests over those of SEP licensors (see here).

In light of the NMA, DOJ cooperated with the U.S. Patent and Trademark Office and the National Institute of Standards and Technology (NIST) in issuing a 2019 SEP Policy Statement clarifying that an SEP holder’s promise to license a patent on fair, reasonable, and non-discriminatory (FRAND) terms does not bar it from seeking any available remedy for patent infringement, including an injunction. This signaled that SEPs and non-SEP patents enjoy equivalent legal status.

DOJ also issued a 2020 supplement to its 2015 Institute of Electrical and Electronics Engineers (IEEE) business review letter. The 2015 letter had found no legal fault with revised IEEE standard-setting policies that implicitly favored implementers of standardized technology over SEP holders. The 2020 supplement characterized key elements of the 2015 letter as “outdated,” and noted that the anti-SEP bias of that document could “harm competition and chill innovation.”   

Furthermore, DOJ issued a July 2019 Statement of Interest before the 9th U.S. Circuit Court of Appeals in FTC v. Qualcomm, explaining that unilateral and unconditional decisions not to license a patent are legal under the antitrust laws. In October 2020, the 9th Circuit reversed a district court decision and rejected the FTC’s monopolization suit against Qualcomm. The circuit court, among other findings, held that Qualcomm had no antitrust duty to license its SEPs to competitors.

Regrettably, the Biden administration appears to be close to rejecting the NMA and reinstituting the SEP-skeptical, anti-strong-patent views of the Obama administration (see here and here). DOJ already has effectively repudiated the 2020 supplement to the 2015 IEEE letter and the 2019 SEP Policy Statement. Furthermore, written responses to Senate Judiciary Committee questions by assistant attorney general nominee Jonathan Kanter suggest support for renewed antitrust scrutiny of SEP licensing. These developments are highly problematic for anyone who supports dynamic economic growth.

Conclusion

The NMA represents a pro-American, pro-growth innovation policy prescription. Its abandonment would reduce incentives to invest in patents and standard-setting activities, to the detriment of the U.S. economy. Such a development would be particularly unfortunate at a time when U.S. Supreme Court decisions have weakened American patent rights (see here); China is taking steps to strengthen Chinese patents and raise incentives to obtain Chinese patents (see here); and China is engaging in litigation to weaken key U.S. patents and undermine American technological leadership (see here).

The rejection of the NMA would also be in tension with the logic of the 5th U.S. Circuit Court of Appeals’ 2021 HTC v. Ericsson decision, which held that the non-discrimination portion of the FRAND commitment did not require Ericsson to give HTC the same licensing terms as those given to larger mobile-device manufacturers. Furthermore, recent important European court decisions are generally consistent with NMA principles (see here).

Given the importance of dynamic competition in an increasingly globalized world economy, Biden administration officials may wish to take a closer look at the economic arguments supporting the NMA before taking final action to condemn it. Among other things, the administration might take note that major U.S. digital platforms, which are the subject of multiple U.S. and foreign antitrust enforcement investigations, tend to firmly oppose strong patent rights. As one major innovation economist recently pointed out:

If policymakers and antitrust gurus are so concerned about stemming the rising power of Big Tech platforms, they should start by first stopping the relentless attack on IP. Without the IP system, only the big and powerful have the privilege to innovate[.]

The American Choice and Innovation Online Act (previously called the Platform Anti-Monopoly Act), introduced earlier this summer by U.S. Rep. David Cicilline (D-R.I.), would significantly change the nature of digital platforms and, with them, the Internet itself. Taken together, the bill’s provisions would turn platforms into passive intermediaries, undermining many of the features that make them valuable to consumers. This seems likely to remain the case even after potential revisions intended to minimize the bill’s unintended consequences.

In its current form, the bill is split into two parts, each of which is dangerous in its own right. The first, Section 2(a), would prohibit almost any kind of “discrimination” by platforms. Because it is so open-ended, lawmakers might end up removing it in favor of the nominally more focused provisions of Section 2(b), which prohibit certain named conduct. But despite being more specific, this section of the bill is incredibly far-reaching and would effectively ban swaths of essential services.

I will address the potential effects of these sections point-by-point, but both elements of the bill suffer from the same problem: a misguided assumption that “discrimination” by platforms is necessarily bad from a competition and consumer welfare point of view. On the contrary, this conduct is often exactly what consumers want from platforms, since it helps to bring order and legibility to otherwise-unwieldy parts of the Internet. Prohibiting it, as both main parts of the bill do, would make the Internet harder to use and less competitive.

Section 2(a)

Section 2(a) essentially prohibits any behavior by a covered platform that would advantage the platform’s own services over those of any other business that also uses the platform; it characterizes this preferencing as “discrimination.”

As we wrote when the House Judiciary Committee’s antitrust bills were first announced, this prohibition on “discrimination” is so broad that, if it made it into law, it would prevent platforms from excluding or disadvantaging any product of another business that uses the platform or advantaging their own products over those of their competitors.

The underlying assumption here is that platforms should be like telephone networks: providing a way for different sides of a market to communicate with each other, but doing little more than that. When platforms do do more—for example, manipulating search results to favor certain businesses or to give their own products prominence—it is seen as exploitative “leveraging.”

But consumers often want platforms to be more than just a telephone network or directory, because digital markets would be very difficult to navigate without some degree of “discrimination” between sellers. The Internet is so vast, and sellers are often so anonymous, that any assistance that helps you choose among options can serve to make it more navigable. As John Gruber put it:

From what I’ve seen over the last few decades, the quality of the user experience of every computing platform is directly correlated to the amount of control exerted by its platform owner. The current state of the ownerless world wide web speaks for itself.

Sometimes, this manifests itself as “self-preferencing” of another service, to reduce additional time spent searching for the information you want. When you search for a restaurant on Google, it can be very useful to get information like user reviews, the restaurant’s phone number, a button on mobile to phone them directly, estimates of how busy it is, and a link to a Maps page to see how to actually get there.

This is, undoubtedly, frustrating for competitors like Yelp, who would like this information not to be there and for users to have to click on either a link to Yelp or a link to Google Maps. But whether it is good or bad for Yelp isn’t relevant to whether it is good for users—and it is at least arguable that it is, which makes a blanket prohibition on this kind of behavior almost inevitably harmful.

If it isn’t obvious why removing this kind of feature would be harmful for users, ask yourself why some users search in Yelp’s app directly for this kind of result. The answer, I think, is that Yelp gives you all the information above that Google does (and sometimes is better, although I tend to trust Google Maps’ reviews over Yelp’s), and it’s really convenient to have all that on the same page. If Google could not provide this kind of “rich” result, many users would probably stop using Google Search to look for restaurant information in the first place, because a new friction would have been added that made the experience meaningfully worse. Removing that option would be good for Yelp, but mainly because it removes a competitor.

If all this feels like stating the obvious, then it should highlight a significant problem with Section 2(a) in the Cicilline bill: it prohibits conduct that is directly value-adding for consumers, and that creates competition for dedicated services like Yelp that object to having to compete with this kind of conduct.

This is true across all the platforms the legislation proposes to regulate. Amazon prioritizes some third-party products over others on the basis of user reviews, rates of returns and complaints, and so on; Amazon provides private label products to fill gaps in certain product lines where existing offerings are expensive or unreliable; Apple pre-installs a Camera app on the iPhone that, obviously, enjoys an advantage over rival apps like Halide.

Some or all of this behavior would be prohibited under Section 2(a) of the Cicilline bill. Combined with the bill’s presumption that conduct must be defended affirmatively—that is, that the platform is presumed guilty unless it can prove the challenged conduct is procompetitive, which may be very difficult to do—the bill could prospectively eliminate a huge range of socially valuable behavior.

Supporters of the bill have already been left arguing that the law simply wouldn’t be enforced in these cases of benign discrimination. But this would hardly be an improvement. It would mean the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) have tremendous control over how these platforms are built, since they could challenge conduct in virtually any case. The regulatory uncertainty alone would complicate the calculus for these firms as they refine, develop, and deploy new products and capabilities. 

So one potential compromise might be to do away with this broad-based rule and proscribe specific kinds of “discriminatory” conduct instead. This approach would involve removing Section 2(a) from the bill but retaining Section 2(b), which enumerates 10 practices it deems to be “other discriminatory conduct.” This may seem appealing, as it would potentially avoid the worst abuses of the broad-based prohibition. In practice, however, it would carry many of the same problems. In fact, many of 2(b)’s provisions appear to go even further than 2(a), and would proscribe even more procompetitive conduct that consumers want.

Sections 2(b)(1) and 2(b)(9)

The wording of these provisions is extremely broad and, as drafted, would seem to challenge even the existence of vertically integrated products. As such, these prohibitions are potentially even more extensive and invasive than Section 2(a) would have been. Even a narrower reading here would seem to preclude safety and privacy features that are valuable to many users. iOS’s sandboxing of apps, for example, serves to limit the damage that a malware app can do on a user’s device precisely because of the limitations it imposes on what other features and hardware the app can access.

Section 2(b)(2)

This provision would preclude a firm from conditioning preferred status on use of another service from that firm. This would likely undermine the purpose of platforms, which is to absorb and counter some of the risks involved in doing business online. An example is Amazon’s tying of sellers’ eligibility for its Prime program to their use of Amazon’s delivery service (FBA, or Fulfilled By Amazon). The bill seems to presume in an example like this that Amazon is leveraging its power in the market—in the form of the value of the Prime label—to profit from delivery. But Amazon could, and already does, charge directly for listing positions; it’s unclear why it would benefit from charging via FBA when it could just charge for the Prime label.

An alternate, simpler explanation is that FBA improves the quality of the service, by granting customers greater assurance that a Prime product will arrive when Amazon says it will. Platforms add value by setting out rules and providing services that reduce the uncertainties between buyers and sellers they’d otherwise experience if they transacted directly with each other. This section’s prohibition—which, as written, would seem to prevent any kind of quality assurance—likely would bar such labeling by a platform, even where customers explicitly want it.

Section 2(b)(3)

As written, this would prohibit platforms from using aggregated data to improve their services at all. If Apple found that 99% of its users uninstalled an app immediately after it was installed, it would be reasonable to conclude that the app may be harmful or broken in some way, and that Apple should investigate. This provision would ban that.
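
To see what that ban would reach, consider a deliberately simple sketch of this kind of aggregated-data analysis. Everything in it is hypothetical: the data, the app names, and the 99% threshold are drawn only from the example above, and no real platform API is being described.

```python
# Hypothetical sketch of flagging apps for investigation based on
# aggregated uninstall data, as in the Apple example above. All data
# and names are invented for illustration.
from collections import Counter

# (app_id, uninstalled_within_a_day) pairs aggregated across users
events = [
    ("flashlight-pro", True), ("flashlight-pro", True), ("flashlight-pro", True),
    ("notes-app", False), ("notes-app", True), ("notes-app", False),
]

installs = Counter(app for app, _ in events)
quick_uninstalls = Counter(app for app, gone in events if gone)


def apps_to_investigate(threshold: float = 0.99) -> list[str]:
    """Apps whose immediate-uninstall rate meets or exceeds the threshold."""
    return [
        app for app, n in installs.items()
        if quick_uninstalls[app] / n >= threshold
    ]


print(apps_to_investigate())  # ['flashlight-pro'] on this toy data
```

However benign, even this kind of routine quality-control use of aggregated data would seem to fall within the provision’s prohibition as written.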

Sections 2(b)(4) and 2(b)(6)

These two provisions effectively prohibit a platform from using information it does not also provide to sellers. Such prohibitions ignore the fact that it is often good for sellers to lack certain information, since withholding information can prevent abuse by malicious users. For example, a seller may sometimes try to bribe its customers to post positive reviews of its products, or even threaten customers who have posted negative ones. Part of a platform’s role is to combat that kind of abuse by acting as a middleman, requiring both consumer users and business users to comply with its mechanisms for policing such conduct.

If this seems overly generous to platforms—since, obviously, it gives them a lot of leverage over business users—ask yourself why people use platforms at all. It is not a coincidence that people often prefer Amazon to dealing with third-party merchants and having to navigate those merchants’ sites themselves. The assurance that Amazon provides is extremely valuable for users. Much of it comes from the company’s ability to act as a middleman in this way, lowering the transaction costs between buyers and sellers.

Section 2(b)(5)

This provision restricts the treatment of defaults. It is, however, relatively restrained when compared to, for example, the DOJ’s lawsuit against Google, which treats as anticompetitive even payment for defaults that can be changed. Still, many of the arguments that apply in that case also apply here: default status for apps can be a way to recoup income foregone elsewhere (e.g., a browser provided for free that makes its money by selling the right to be the default search engine).

Section 2(b)(7)

This section gets to the heart of why “discrimination” can often be procompetitive: that it facilitates competition between platforms. The kind of self-preferencing that this provision would prohibit can allow firms that have a presence in one market to extend that position into another, increasing competition in the process. Both Apple and Amazon have used their customer bases in smartphones and e-commerce, respectively, to grow their customer bases for video streaming, in competition with Netflix, Google’s YouTube, cable television, and each other. If Apple designed a search engine to compete with Google, it would do exactly the same thing, and we would be better off because of it. Restricting this kind of behavior is, perversely, exactly what you would do if you wanted to shield these incumbents from competition.

Section 2(b)(8)

As with other provisions, this one would preclude one of the mechanisms by which platforms add value: creating assurance for customers about the products they can expect if they visit the platform. Some of this relates to child protection; some of the most frustrating stories involve children being overcharged when they use an iPhone or Android app, and effectively being ripped off because of poor policing of the app (or insufficiently strict pricing rules by Apple or Google). This may also relate to rules that state that the seller cannot offer a cheaper product elsewhere (Amazon’s “General Pricing Rule” does this, for example). Prohibiting this would simply impose a tax on customers who cannot shop around and would prefer to use a platform that they trust has the lowest prices for the item they want.

Section 2(b)(10)

Ostensibly a “whistleblower” provision, this section could leave platforms with no recourse, not even removing a user from its platform, in response to spurious complaints intended purely to extract value for the complaining business rather than to promote competition. On its own, this sort of provision may be fairly harmless, but combined with the provisions above, it allows the bill to add up to a rent-seekers’ charter.

Conclusion

In each case above, it’s vital to remember that a reversed burden of proof applies. So, there is a high chance that the law will side against the defendant business, and a large downside for conduct that ends up being found to violate these provisions. That means that platforms will likely err on the side of caution in many cases, avoiding conduct that is ambiguous, and society will probably lose a lot of beneficial behavior in the process.

Put together, the provisions undermine much of what has become an Internet platform’s role: to act as an intermediary, de-risk transactions between customers and merchants who don’t know each other, and tweak the rules of the market to maximize its attractiveness as a place to do business. The “discrimination” that the bill would outlaw is, in practice, behavior that makes it easier for consumers to navigate marketplaces of extreme complexity and uncertainty, in which they often know little or nothing about the firms with whom they are trying to transact business.

Customers do not want platforms to be neutral, open utilities. They can choose platforms that are like that already, such as eBay. They generally tend to prefer ones like Amazon, which are not neutral and which carefully cultivate their service to be as streamlined, managed, and “discriminatory” as possible. Indeed, many of people’s biggest complaints with digital platforms relate to their openness: the fake reviews, counterfeit products, malware, and spam that come with letting more unknown businesses use your service. While these may be unavoidable by-products of running a platform, platforms compete on their ability to ferret them out. Customers are unlikely to thank legislators for regulating Amazon into being another eBay.

The language of the federal antitrust laws is extremely general. Over more than a century, the federal courts have applied common-law techniques to construe this general language to provide guidance to the private sector as to what does or does not run afoul of the law. The interpretive process has been fraught with some uncertainty, as judicial approaches to antitrust analysis have changed several times over the past century. Nevertheless, until very recently, judges and enforcers had converged toward relying on a consumer welfare standard as the touchstone for antitrust evaluations (see my antitrust primer here, for an overview).

While imperfect and subject to potential error in application—a problem of legal interpretation generally—the consumer welfare principle has worked rather well as the focus both for antitrust-enforcement guidance and judicial decision-making. The general stability and predictability of antitrust under a consumer welfare framework has advanced the rule of law. It has given businesses sufficient information to plan transactions in a manner likely to avoid antitrust liability. It thereby has cabined uncertainty and increased the probability that private parties would enter welfare-enhancing commercial arrangements, to the benefit of society.

In a very thoughtful 2017 speech, then-Acting Assistant Attorney General for Antitrust Andrew Finch commented on the importance of the rule of law to principled antitrust enforcement. He noted:

[H]ow do we administer the antitrust laws more rationally, accurately, expeditiously, and efficiently? … Law enforcement requires stability and continuity both in rules and in their application to specific cases.

Indeed, stability and continuity in enforcement are fundamental to the rule of law. The rule of law is about notice and reliance. When it is impossible to make reasonable predictions about how a law will be applied, or what the legal consequences of conduct will be, these important values are diminished. To call our antitrust regime a “rule of law” regime, we must enforce the law as written and as interpreted by the courts and advance change with careful thought.

The reliance fostered by stability and continuity has obvious economic benefits. Businesses invest, not only in innovation but in facilities, marketing, and personnel, and they do so based on the economic and legal environment they expect to face.

Of course, we want businesses to make those investments—and shape their overall conduct—in accordance with the antitrust laws. But to do so, they need to be able to rely on future application of those laws being largely consistent with their expectations. An antitrust enforcement regime with frequent changes is one that businesses cannot plan for, or one that they will plan for by avoiding certain kinds of investments.

That is certainly not to say there has not been positive change in the antitrust laws in the past, or that we would have been better off without those changes. U.S. antitrust law has been refined, and occasionally recalibrated, with the courts playing their appropriate interpretive role. And enforcers must always be on the watch for new or evolving threats to competition.  As markets evolve and products develop over time, our analysis adapts. But as those changes occur, we pursue reliability and consistency in application in the antitrust laws as much as possible.

Indeed, we have enjoyed remarkable continuity and consensus for many years. Antitrust law in the U.S. has not been a “paradox” for quite some time, but rather a stable and valuable law enforcement regime with appropriately widespread support.

Unfortunately, policy decisions taken by the new Federal Trade Commission (FTC) leadership in recent weeks have rejected antitrust continuity and consensus. They have injected substantial uncertainty into the application of competition-law enforcement by the FTC. This abrupt change in emphasis undermines the rule of law and threatens to reduce economic welfare.

As of now, the FTC’s departure from the rule of law has been notable in two areas:

  1. Its rejection of previous guidance on the agency’s “unfair methods of competition” authority, the FTC’s primary non-merger-related enforcement tool; and
  2. Its new advice rejecting time limits for the review of generally routine proposed mergers.

In addition, potential FTC rulemakings directed at “unfair methods of competition” would, if pursued, prove highly problematic.

Rescission of the Unfair Methods of Competition Policy Statement

The FTC on July 1 voted 3-2 to rescind the 2015 FTC Policy Statement Regarding Unfair Methods of Competition under Section 5 of the FTC Act (UMC Policy Statement).

The bipartisan UMC Policy Statement was originally supported by all three Democratic commissioners, including then-Chairwoman Edith Ramirez. The policy statement generally respected and promoted the rule of law by emphasizing that, in applying the facially broad “unfair methods of competition” (UMC) language, the FTC would be guided by the well-established principles of the antitrust rule of reason (including considering any associated cognizable efficiencies and business justifications) and the consumer welfare standard. The FTC also explained that it would not apply “standalone” Section 5 theories to conduct that would violate the Sherman or Clayton Acts.

In short, the UMC Policy Statement sent a strong signal that the commission would apply UMC in a manner fully consistent with accepted and well-understood antitrust policy principles. As in the past, the vast bulk of FTC Section 5 prosecutions would be brought against conduct that violated the core antitrust laws. Standalone Section 5 cases would be directed solely at those few practices that harmed consumer welfare and competition, but somehow fell into a narrow crack in the basic antitrust statutes (such as, perhaps, “invitations to collude” that lack plausible efficiency justifications). Although the UMC Statement did not answer all questions regarding what specific practices would justify standalone UMC challenges, it substantially limited business uncertainty by bringing Section 5 within the boundaries of settled antitrust doctrine.

The FTC’s announcement of the UMC Policy Statement rescission unhelpfully proclaimed that “the time is right for the Commission to rethink its approach and to recommit to its mandate to police unfair methods of competition even if they are outside the ambit of the Sherman or Clayton Acts.” As a dissenting statement by Commissioner Christine S. Wilson warned, consumers would be harmed by the commission’s decision to prioritize other unnamed interests. And as Commissioner Noah Joshua Phillips stressed in his dissent, the end result would be reduced guidance and greater uncertainty.

In sum, by suddenly leaving private parties in the dark as to how to conform themselves to Section 5’s UMC requirements, the FTC’s rescission offends the rule of law.

New Guidance to Parties Considering Mergers

For decades, parties proposing mergers that are subject to statutory Hart-Scott-Rodino (HSR) Act pre-merger notification requirements have operated under the understanding that:

  1. The FTC and U.S. Justice Department (DOJ) will routinely grant “early termination” of review (before the end of the initial 30-day statutory review period) to those transactions posing no plausible competitive threat; and
  2. An enforcement agency’s decision not to request more detailed documents (“second requests”) after an initial 30-day pre-merger review effectively serves as an antitrust “green light” for the proposed acquisition to proceed.

Those understandings, though not statutorily mandated, have significantly reduced antitrust uncertainty and related costs in the planning of routine merger transactions. The rule of law has been advanced through an effective assurance that business combinations that appear presumptively lawful will not be the target of future government legal harassment. This has advanced efficiency in government, as well; it is a cost-beneficial use of scarce resources for DOJ and the FTC to focus exclusively on those proposed mergers that present a substantial potential threat to consumer welfare.

Two recent FTC pronouncements (one in tandem with DOJ), however, have generated great uncertainty by disavowing (at least temporarily) those two welfare-promoting review policies. Joined by DOJ, the FTC on Feb. 4 announced that the agencies would temporarily suspend early terminations, citing an “unprecedented volume of filings” and a transition to new leadership. More than six months later, this “temporary” suspension remains in effect.

Citing “capacity constraints” and a “tidal wave of merger filings,” the FTC subsequently published an Aug. 3 blog post that effectively abrogated the 30-day “green lighting” of mergers not subject to a second request. It announced that it was sending “warning letters” to firms reminding them that FTC investigations remain open after the initial 30-day period, and that “[c]ompanies that choose to proceed with transactions that have not been fully investigated are doing so at their own risk.”

The FTC’s actions inject unwarranted uncertainty into merger planning and undermine the rule of law. Suspending early termination for the sorts of transactions that routinely have been approved not only imposes additional costs on business; it also hints that some transactions might be subject to novel theories of liability that fall outside the antitrust consensus.

Perhaps more significantly, as three prominent antitrust practitioners point out, the FTC’s warning letters state that:

[T]he FTC may challenge deals that “threaten to reduce competition and harm consumers, workers, and honest businesses.” Adding in harm to both “workers and honest businesses” implies that the FTC may be considering more ways that transactions can have an adverse impact other than just harm to competition and consumers [citation omitted].

Because consensus antitrust merger analysis centers on consumer welfare, not the protection of labor or business interests, any suggestion that the FTC may be extending its reach to these new areas is inconsistent with established legal principles and generates new business-planning risks.

More generally, the Aug. 3 FTC “blog post could be viewed as an attempt to modify the temporal framework of the HSR Act”—in effect, an effort to displace an implicit statutory understanding in favor of an agency diktat, contrary to the rule of law. Commissioner Wilson sees the blog post as a means to keep investigations open indefinitely and, thus, an attack on the decades-old HSR framework for handling most merger reviews in an expeditious fashion (see here). Commissioner Phillips is concerned about an attempt to chill legal M&A transactions across the board, which would be particularly unfortunate when there is no reason to conclude that particular transactions are illegal (see here).

Finally, the historical record raises serious questions about the “resource constraint” justification for the FTC’s new merger review policies:

Through the end of July 2021, more than 2,900 transactions were reported to the FTC. It is not clear, however, whether these record-breaking HSR filing numbers have led (or will lead) to more deals being investigated. Historically, only about 13 percent of all deals reported are investigated in some fashion, and roughly 3 percent of all deals reported receive a more thorough, substantive review through the issuance of a Second Request. Even if more deals are being reported, for the majority of transactions, the HSR process is purely administrative, raising no antitrust concerns, and, theoretically, uses few, if any, agency resources. [Citations omitted.]

Proposed FTC Competition Rulemakings

The new FTC leadership is strongly considering competition rulemakings. As I explained in a recent Truth on the Market post, such rulemakings would fail a cost-benefit test. They raise serious legal risks for the commission and could impose wasted resource costs on the FTC and on private parties. More significantly, they would raise two very serious economic policy concerns:

First, competition rules would generate higher error costs than adjudications. Adjudications cabin error costs by allowing for case-specific analysis of likely competitive harms and procompetitive benefits. In contrast, competition rules inherently would be overbroad and would suffer from a very high rate of false positives. By characterizing certain practices as inherently anticompetitive without allowing for consideration of case-specific facts bearing on actual competitive effects, findings of rule violations inevitably would condemn some (perhaps many) efficient arrangements.

Second, competition rules would undermine the rule of law and thereby reduce economic welfare. FTC-only competition rules could lead to disparate legal treatment of a firm’s business practices, depending upon whether the FTC or the U.S. Justice Department was the investigating agency. Also, economic efficiency gains could be lost due to the chilling of aggressive efficiency-seeking business arrangements in those sectors subject to rules. [Emphasis added.]

In short, common law antitrust adjudication, focused on the consumer welfare standard, has done a good job of promoting a vibrant competitive economy in an efficient fashion. FTC competition rulemaking would not.

Conclusion

Recent FTC actions have undermined consensus antitrust-enforcement standards and have departed from established merger-review procedures with respect to seemingly uncontroversial consolidations. Those decisions have imposed costly uncertainty on the business sector and are thereby likely to disincentivize efficiency-seeking arrangements. What’s more, by implicitly rejecting consensus antitrust principles, they denigrate the primacy of the rule of law in antitrust enforcement. The FTC’s pursuit of competition rulemaking would further damage the rule of law by imposing arbitrary strictures that ignore matter-specific considerations bearing on the justifications for particular business decisions.

Fortunately, these are early days in the Biden administration. The problematic initial policy decisions delineated in this comment could be reversed based on further reflection and deliberation within the commission. Chairwoman Lina Khan and her fellow Democratic commissioners would benefit by consulting more closely with Commissioners Wilson and Phillips to reach agreement on substantive and procedural enforcement policies that are better tailored to promote consumer welfare and enhance vibrant competition. Such policies would benefit the U.S. economy in a manner consistent with the rule of law.

The recent launch of the international Multilateral Pharmaceutical Merger Task Force (MPMTF) is just the latest example of burgeoning cooperative efforts by leading competition agencies to promote convergence in antitrust enforcement. (See my recent paper on the globalization of antitrust, which assesses multinational cooperation and convergence initiatives in greater detail.) In what is a first, the U.S. Federal Trade Commission (FTC), the U.S. Justice Department’s (DOJ) Antitrust Division, offices of state Attorneys General, the European Commission’s Competition Directorate, Canada’s Competition Bureau, and the U.K.’s Competition and Markets Authority (CMA) jointly created the MPMTF in March 2021 “to update their approach to analyzing the effects of pharmaceutical mergers.”

To help inform its analysis, in May 2021 the MPMTF requested public comments concerning the effects of pharmaceutical mergers. The MPMTF sought submissions regarding (among other issues) seven sets of questions:   

  1. What theories of harm should enforcement agencies consider when evaluating pharmaceutical mergers, including theories of harm beyond those currently considered?
  2. What is the full range of a pharmaceutical merger’s effects on innovation? What challenges arise when mergers involve proprietary drug discovery and manufacturing platforms?
  3. In pharmaceutical merger review, how should we consider the risks or effects of conduct such as price-setting practices, reverse payments, and other ways in which pharmaceutical companies respond to or rely on regulatory processes?
  4. How should we approach market definition in pharmaceutical mergers, and how is that implicated by new or evolving theories of harm?
  5. What evidence may be relevant or necessary to assess and, if applicable, challenge a pharmaceutical merger based on any new or expanded theories of harm?
  6. What types of remedies would work in the cases to which those theories are applied?
  7. What factors, such as the scope of assets and characteristics of divestiture buyers, influence the likelihood and success of pharmaceutical divestitures to resolve competitive concerns?

My research assistant Andrew Mercado and I recently submitted comments for the record addressing the questions posed by the MPMTF. We concluded:

Federal merger enforcement in general and FTC pharmaceutical merger enforcement in particular have been effective in promoting competition and consumer welfare. Proposed statutory amendments to strengthen merger enforcement not only are unnecessary, but also would, if enacted, tend to undermine welfare and would thus be poor public policy. A brief analysis of seven questions propounded by the Multilateral Pharmaceutical Merger Task Force suggests that: (a) significant changes in enforcement policies are not warranted; and (b) investigators should employ sound law and economics analysis, taking full account of merger-related efficiencies, when evaluating pharmaceutical mergers. 

While we leave it to interested readers to review our specific comments, this commentary highlights one key issue that we stressed—the importance of giving due weight to efficiencies (and, in particular, dynamic efficiencies) in evaluating pharma mergers. We also note an important critique by FTC Commissioner Christine Wilson of the treatment accorded merger-related efficiencies by U.S. antitrust enforcers.

Discussion

Innovation in pharmaceuticals and vaccines has immensely significant economic and social consequences, as demonstrated most recently in the handling of the COVID-19 pandemic. As such, it is particularly important that public policy not stand in the way of realizing efficiencies that promote innovation in these markets. This observation applies directly, of course, to pharmaceutical antitrust enforcement, in general, and to pharma merger enforcement, in particular.

Regrettably, however, while merger-enforcement policy has been generally sound, it has somewhat undervalued merger-related efficiencies.

Although U.S. antitrust enforcers give lip service to their serious consideration of efficiencies in merger reviews, the reality appears to be quite different, as documented by Commissioner Wilson in a 2020 speech.

Wilson’s General Merger-Efficiencies Critique: According to Wilson, the combination of finding narrow markets and refusing to weigh out-of-market efficiencies has created major “legal and evidentiary hurdles a defendant must clear when seeking to prove offsetting procompetitive efficiencies.” What’s more, the “courts [have] largely continue[d] to follow the Agencies’ lead in minimizing the importance of efficiencies.” Wilson shows that “the Horizontal Merger Guidelines text and case law appear to set different standards for demonstrating harms and efficiencies,” and argues that this “asymmetric approach has the obvious potential consequence of preventing some procompetitive mergers that increase consumer welfare.” Wilson concludes on a more positive note that this problem can be addressed by having enforcers: (1) treat harms and efficiencies symmetrically; and (2) establish clear and reasonable expectations for what types of efficiency analysis will and will not pass muster.

While our filing with the MPMTF did not discuss Wilson’s general treatment of merger efficiencies, one would hope that the task force will appropriately weigh it in its deliberations. Our filing instead briefly addressed two “informational efficiencies” that may arise in the context of pharmaceutical mergers. These include:

More Efficient Resource Reallocation: The theory of the firm teaches that mergers may be motivated by the underutilization or misallocation of assets, or the opportunity to create welfare-enhancing synergies. In the pharmaceutical industry, these synergies may come from joining complementary research and development programs, combining diverse and specialized expertise that may be leveraged for better, faster drug development and more innovation.

Enhanced R&D: Currently, much of the R&D for large pharmaceutical companies is achieved through partnerships or investment in small biotechnology and research firms specializing in a single type of therapy. Whereas large pharmaceutical companies have expertise in marketing, navigating regulation, and undertaking trials of new drugs, small, research-focused firms can achieve greater advancements in medicine with smaller budgets. Furthermore, changes within firms brought about by a merger may increase innovation.

With increases in intellectual property and proprietary data that come from the merging of two companies, smaller research firms that work with the merged entity may have access to greater pools of information, enhancing the potential for innovation without increasing spending. This change not only raises the efficiency of the research being conducted in these small firms, but also increases the probability of a breakthrough without an increase in risk.

Conclusion

U.S. pharmaceutical merger enforcement has been fairly effective in forestalling anticompetitive combinations while allowing consumer welfare-enhancing transactions to go forward. Policy in this area should remain generally the same. Enforcers should continue to base enforcement decisions on sound economic theory fully supported by case-specific facts. Enforcement agencies could benefit, however, by placing a greater emphasis on efficiencies analysis. In particular, they should treat harms and efficiencies symmetrically (as recommended by Commissioner Wilson), and fully take into account likely resource reallocation and innovation-related efficiencies.

Democratic leaders of the House Judiciary Committee have leaked the approach they plan to take to revise U.S. antitrust law and enforcement, with a particular focus on digital platforms.

Broadly speaking, the bills would: raise fees for larger mergers and increase appropriations to the FTC and DOJ; require data portability and interoperability; declare that large platforms can’t own businesses that compete with other businesses that use the platform; effectively ban large platforms from making any acquisitions; and generally declare that large platforms cannot preference their own products or services. 

All of these are ideas that have been discussed before. They are very much in line with the EU’s approach to competition, which places more regulation-like burdens on big businesses, and which is introducing a Digital Markets Act that mirrors the Democrats’ proposals. Some Republicans are reportedly supportive of the proposals, which is surprising, since they would mean giving broad, discretionary powers to antitrust authorities that are controlled by Democrats who take an expansive view of antitrust enforcement as a way to achieve their other social and political goals. The proposals may also be unpopular with consumers if, for example, they mean that popular features, like the integration of Maps into relevant Google Search results, become prohibited.

The multi-bill approach here suggests that the committee is trying to throw as much at the wall as possible to see what sticks. It may reflect a lack of confidence among the proposers in their ability to get their proposals through wholesale, especially given that Amy Klobuchar’s CALERA bill in the Senate creates an alternative that, while still highly interventionist, does not create ex ante regulation of the Internet the same way these proposals do.

In general, the bills are misguided for three main reasons. 

One, they seek to make digital platforms into narrow conduits for other firms to operate on, ignoring the value created by platforms curating their own services by, for example, creating quality controls on entry (as Apple does on its App Store) or by integrating their services with related products (like, say, Google adding events from Gmail to users’ Google Calendars). 

Two, they ignore the procompetitive effects of digital platforms extending into each other’s markets and competing with each other there, in ways that often lead to far more intense competition—and better outcomes for consumers—than if the only firms that could compete with the incumbent platform were small startups.

Three, they ignore the importance of incentives for innovation. Platforms invest in new and better products when they can make money from doing so, and limiting their ability to do that means weakened incentives to innovate. Startups and their founders and investors are driven, in part, by the prospect of being acquired, often by the platforms themselves. Making those acquisitions more difficult, or even impossible, means removing one of the key ways startup founders can exit their firms, and hence one of the key rewards and incentives for starting an innovative new business. 

For more, our “Joint Submission of Antitrust Economists, Legal Scholars, and Practitioners” set out why many of the House Democrats’ assumptions about the state of the economy and antitrust enforcement were mistaken. And my post, “Buck’s ‘Third Way’: A Different Road to the Same Destination,” argued that House Republicans like Ken Buck were misguided in believing they could support some of the proposals and avoid the massive regulatory oversight that they said they rejected.

Platform Anti-Monopoly Act 

The flagship bill, introduced by Antitrust Subcommittee Chairman David Cicilline (D-R.I.), establishes a definition of “covered platform” used by several of the other bills. The measures would apply to platforms that have at least 500,000 U.S.-based users and a market capitalization of more than $600 billion, and that are deemed a “critical trading partner” with the ability to restrict or impede the access that a “dependent business” has to its users or customers.
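
To make that three-part coverage test concrete, here is a minimal sketch in Python. It is an illustration only: the field names, the boolean treatment of “critical trading partner” status, and the reading of the criteria as conjunctive are my assumptions, not the bill’s text.

```python
# Hypothetical sketch of the "covered platform" test described above.
# Field names and the boolean modeling of "critical trading partner"
# status are illustrative assumptions, not language from the bill.
from dataclasses import dataclass


@dataclass
class Platform:
    us_users: int                   # U.S.-based users
    market_cap_usd: float           # market capitalization, in dollars
    critical_trading_partner: bool  # can restrict a dependent business's access


def is_covered_platform(p: Platform) -> bool:
    """True only if all three criteria described above are met."""
    return (
        p.us_users >= 500_000
        and p.market_cap_usd > 600e9
        and p.critical_trading_partner
    )


# A hypothetical platform meeting all three criteria:
print(is_covered_platform(Platform(1_000_000, 1.2e12, True)))  # True
```

On this reading, all three criteria must be satisfied at once; a large firm that is not deemed a “critical trading partner” would fall outside the bill’s scope.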

Cicilline’s bill would bar these covered platforms from being able to promote their own products and services over the products and services of competitors who use the platform. It also defines a number of other practices that would be regarded as discriminatory, including: 

  • Restricting or impeding “dependent businesses” from being able to access the platform or its software on the same terms as the platform’s own lines of business;
  • Conditioning access or status on purchasing other products or services from the platform; 
  • Using user data to support the platform’s own products in ways not extended to competitors; 
  • Restricting the platform’s commercial users from using or accessing data generated on the platform from their own customers;
  • Restricting platform users from uninstalling software pre-installed on the platform;
  • Restricting platform users from providing links to facilitate business off of the platform;
  • Preferencing the platform’s own products or services in search results or rankings;
  • Interfering with how a dependent business prices its products; 
  • Impeding a dependent business’ users from connecting to services or products that compete with those offered by the platform; and
  • Retaliating against users who raise concerns with law enforcement about potential violations of the act.

On a basic level, these would prohibit lots of behavior that is benign and that can improve the quality of digital services for users. Apple pre-installing a Weather app on the iPhone would, for example, run afoul of these rules, and the rules as proposed could prohibit iPhones from coming with pre-installed apps at all. Instead, users would have to manually download each app themselves, if indeed Apple were allowed to include the App Store itself pre-installed on the iPhone, given that it competes with other would-be app stores.

Apart from the obvious reduction in the quality of services and convenience for users that this would involve, this kind of conduct (known as “self-preferencing”) is usually procompetitive. For example, self-preferencing allows platforms to compete with one another by using their strength in one market to enter a different one; Google’s Shopping results on the Search page increase the competition that Amazon faces, because they present consumers with a convenient alternative when they’re shopping online for products. Similarly, Amazon’s purchase of the video-game streaming service Twitch, and the self-preferencing it does to encourage Amazon customers to use Twitch and support content creators on that platform, strengthens the competition that rivals like YouTube face.

It also helps innovation, because it gives firms a reason to invest in services that would otherwise be unprofitable for them. Google invests in Android, and gives much of it away for free, because it can bundle Google Search into the OS, and make money from that. If Google could not self-preference Google Search on Android, the open source business model simply wouldn’t work—it wouldn’t be able to make money from Android, and would have to charge for it in other ways that may be less profitable and hence give it less reason to invest in the operating system. 

This behavior can also increase innovation by the competitors of these companies, both by prompting them to improve their products (as, for example, Google Android did with Microsoft’s mobile operating system offerings) and by growing the size of the customer base for products of this kind. For example, video games published by console manufacturers (like Nintendo’s Zelda and Mario games) are often blockbusters that grow the overall size of the user base for the consoles, increasing demand for third-party titles as well.

For more, check out Geoffrey Manne’s “Against the Vertical Discrimination Presumption” and Dirk Auer’s piece “On the Origin of Platforms: An Evolutionary Perspective”.

Ending Platform Monopolies Act 

Sponsored by Rep. Pramila Jayapal (D-Wash.), this bill would make it illegal for covered platforms to control lines of business that pose “irreconcilable conflicts of interest,” enforced through civil litigation powers granted to the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ).

Specifically, the bill targets lines of business that create “a substantial incentive” for the platform to advantage its own products or services over those of competitors that use the platform, or to exclude or disadvantage competing businesses from using the platform. The FTC and DOJ could potentially order that platforms divest lines of business that violate the act.

This bill targets conduct similar to that covered by the previous bill, but through the forced separation of different lines of business. It also appears to go even further, seemingly implying that companies like Google could not even develop services like Google Maps or Chrome, because their existence would create such “substantial incentives” to self-preference them over the products of competitors.

Apart from the straightforward loss of innovation and product development this would involve, requiring every tech company to be narrowly focused on a single line of business would substantially entrench Big Tech incumbents, because it would make it impossible for them to extend into adjacent markets to compete with one another. For example, Apple could not develop a search engine to compete with Google under these rules, and Amazon would be forced to sell its video-streaming services that compete with Netflix and YouTube.

For more, check out Geoffrey Manne’s written testimony to the House Antitrust Subcommittee and “Platform Self-Preferencing Can Be Good for Consumers and Even Competitors” by Geoffrey and me. 

Platform Competition and Opportunity Act

Introduced by Rep. Hakeem Jeffries (D-N.Y.), this bill would bar covered platforms from making essentially any acquisitions at all. To be excluded from the ban on acquisitions, the platform would have to present “clear and convincing evidence” that the acquired business does not compete with the platform for any product or service, does not pose a potential competitive threat to the platform, and would not in any way enhance or help maintain the acquiring platform’s market position. 

The two main ways that founders and investors can make a return on a successful startup are to float the company in an IPO or to be acquired by another business. The latter, acquisition, is extremely important. Between 2008 and 2019, 90 percent of U.S. startup exits happened through acquisition. In a recent survey, half of current startup executives said they aimed to be acquired. One study found that countries that made it easier for firms to be taken over saw a 40-50 percent increase in VC activity, and that U.S. states that made acquisitions harder saw a 27 percent decrease in VC investment deals.

So this proposal would probably reduce investment in U.S. startups, since it would make it more difficult for them to be acquired, and would therefore reduce innovation. It would also reduce inter-platform competition by banning deals that allow firms to move into new markets, like the acquisition of Beats that helped Apple build a Spotify competitor, or the deals that helped Google, Microsoft, and Amazon build cloud-computing services that all compete with each other. It could also reduce the competition faced by older industries, by preventing tech companies from buying firms that enable them to move into new markets—like Amazon’s acquisitions of health-care companies that it has used to build a health-care offering. Even Walmart’s acquisition of Jet.com, which it has used to build an Amazon competitor, could have been banned under this law if Walmart had had a higher market cap at the time.

For more, check out Dirk Auer’s piece “Facebook and the Pros and Cons of Ex Post Merger Reviews” and my piece “Cracking down on mergers would leave us all worse off”. 

ACCESS Act

The Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, sponsored by Rep. Mary Gay Scanlon (D-Pa.), would establish data portability and interoperability requirements for platforms. 

Under the terms of the legislation, covered platforms would be required to allow third parties to transfer data to their users or, with the user’s consent, to a competing business. The bill would also require platforms to facilitate compatible and interoperable communications with competing businesses, and it directs the FTC to establish technical committees to promulgate the standards for portability and interoperability.

Data portability and interoperability involve trade-offs in terms of security and usability, and overseeing them can be extremely costly and difficult. In security terms, interoperability requirements prevent companies from using closed systems to protect users from hostile third parties. Mandatory openness means increasing, sometimes substantially, the risk of data breaches and leaks. In practice, that could mean users’ private messages or photos being leaked more frequently, or that activity on a social-media page that one user considers “their” private data, but that “belongs” to another user under the terms of use, could be exported and publicized.

It can also make digital services buggier and less reliable, by requiring that they be built in a more “open” way that may be more prone to unanticipated software mismatches. A good example is Windows versus iOS: Windows is far more interoperable with third-party software than iOS is, but it tends to be less stable as a result, and many users prefer the closed, stable system.

Interoperability requirements also entail ongoing regulatory oversight, to make sure data is being provided to third parties reliably. It’s difficult to build an app around another company’s data without assurance that the data will be available when users want it. For a requirement as broad as this bill’s, that could mean setting up quite a large new de facto regulator. 

In the UK, Open Banking (an interoperability requirement imposed on British retail banks) has suffered from significant service outages, and targets a level of uptime that many developers complain is too low for them to build products around. Nor has Open Banking yet led to any obvious competition benefits.

For more, check out Gus Hurwitz’s piece “Portable Social Media Aren’t Like Portable Phone Numbers” and my piece “Why Data Interoperability Is Harder Than It Looks: The Open Banking Experience”.

Merger Filing Fee Modernization Act

Sponsored by Rep. Joe Neguse (D-Colo.), this bill mirrors language in the Endless Frontier Act recently passed by the U.S. Senate and would significantly raise filing fees for the largest mergers. Rather than the current cap of $280,000 for mergers valued at more than $500 million, the new schedule would assess fees of $2.25 million for mergers valued at more than $5 billion; $800,000 for those valued between $2 billion and $5 billion; and $400,000 for those between $1 billion and $2 billion.

Smaller mergers would actually see their filing fees cut: from $280,000 to $250,000 for those between $500 million and $1 billion; from $125,000 to $100,000 for those between $161.5 million and $500 million; and from $45,000 to $30,000 for those less than $161.5 million. 
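
Taken together, the proposal amounts to a simple step function of deal value. The following sketch (in Python; the function name is ours, and it deliberately ignores the HSR Act’s reporting thresholds and inflation adjustments) is offered only to make the proposed schedule concrete:

```python
# Minimal sketch of the proposed HSR filing-fee schedule, using only the
# thresholds and fees summarized above. Illustrative, not legal guidance.

def proposed_filing_fee(deal_value: float) -> int:
    """Return the proposed filing fee (USD) for a merger of the given value (USD)."""
    if deal_value > 5_000_000_000:
        return 2_250_000
    if deal_value > 2_000_000_000:
        return 800_000
    if deal_value > 1_000_000_000:
        return 400_000
    if deal_value > 500_000_000:
        return 250_000
    if deal_value > 161_500_000:
        return 100_000
    return 30_000

# Example: a $3 billion deal would pay $800,000 under the proposal,
# versus the current $280,000 cap.
print(proposed_filing_fee(3_000_000_000))  # -> 800000
```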

In addition, the bill would appropriate $418 million to the FTC and $252 million to the DOJ’s Antitrust Division for Fiscal Year 2022. Most people in the antitrust world are generally supportive of more funding for the FTC and DOJ, but whether this is actually good depends on how the agencies spend it.

It’s hard to object if the money goes toward deepening the agencies’ capacities and knowledge: hiring and retaining higher-quality staff with salaries that are more competitive with those offered by the private sector, and funding greater efforts to study the effects of the antitrust laws and past cases on the economy. If it instead goes toward broadening the agencies’ activities, enabling them to pursue a more aggressive enforcement agenda and to enforce whichever of the above proposals make it into law, then it could be very harmful.

For more, check out my post “Buck’s ‘Third Way’: A Different Road to the Same Destination” and Thom Lambert’s post “Bad Blood at the FTC”.

AT&T’s $102 billion acquisition of Time Warner in 2019 will go down in M&A history as an exceptionally ill-advised transaction, resulting in the loss of tens of billions of dollars of shareholder value. It should also go down in history as an exceptionally ill-chosen target of antitrust intervention. The U.S. Department of Justice, with support from many academic and policy commentators, asserted with confidence that the vertical combination of these content and distribution powerhouses would result in an entity that could exercise market power to the detriment of competitors and consumers.

The chorus of condemnation continued with vigor even after the DOJ’s loss in court and AT&T’s consummation of the transaction. With AT&T’s May 17 announcement that it will unwind the two-year-old acquisition and therefore abandon its strategy to integrate content and distribution, it is clear these predictions of impending market dominance were unfounded. 

This widely shared overstatement of antitrust risk derives from a simple but fundamental error: regulators and commentators were looking at the wrong market.  

The DOJ’s Antitrust Case against the Transaction

The business case for the AT&T/Time Warner transaction was straightforward: it promised to generate synergies by combining a leading provider of wireless, broadband, and satellite television services with a leading supplier of video content. The DOJ’s antitrust case against the transaction was similarly straightforward: the combined entity would have the ability to foreclose “must have” content from other “pay TV” (cable and satellite television) distributors, resulting in adverse competitive effects. 

This foreclosure strategy was expected to take two principal forms. First, AT&T could temporarily withhold (or threaten to withhold) content from rival distributors absent payment of a higher carriage fee, which would then translate into higher fees for subscribers. Second, AT&T could permanently withhold content from rival distributors, who would then lose subscribers to AT&T’s DirecTV satellite television service, further enhancing AT&T’s market power.

Many commentators, both in the trade press and in significant portions of the scholarly community, characterized the transaction as posing a high-risk threat to competitive conditions in the pay TV market. These assertions reflected the view that the new entity would hold a bottleneck position over video-content distribution in the pay TV market and would exercise that power to impose one-sided terms to the detriment of content distributors and consumers.

Notwithstanding this bevy of endorsements, the DOJ’s case was rejected by the district court and the decision was upheld by the D.C. appellate court. The district judge concluded that the DOJ had failed to show that the combined entity would exercise any credible threat to withhold “must have” content from distributors. A key reason: the lost carriage fees AT&T would incur if it did withhold content were so high, and the migration of subscribers from rival pay TV services so speculative, that it would represent an obviously irrational business strategy. In short: no sophisticated business party would ever take AT&T’s foreclosure threat seriously, in which case the DOJ’s predictions of market power were insufficiently compelling to justify the use of government power to block the transaction.

The Fundamental Flaws in the DOJ’s Antitrust Case

The logical and factual infirmities of the DOJ’s foreclosure hypothesis have been extensively and ably covered elsewhere and I will not repeat that analysis. Following up on my previous TOTM commentary on the transaction, I would like to emphasize the point that the DOJ’s case against the transaction was flawed from the outset for two more fundamental reasons. 

False Assumption #1

The assumption that the combined entity could withhold so-called “must have” content to cause significant and lasting competitive injury to rival distributors flies in the face of market realities.  Content is an abundant, renewable, and mobile resource. There are few entry barriers to the content industry: a commercially promising idea will likely attract capital, which will in turn secure the necessary equipment and personnel for production purposes. Any rival distributor can access a rich menu of valuable content from a plethora of sources, both domestically and worldwide, each of which can provide new content, as required. Even if the combined entity held a license to distribute purportedly “must have” content, that content would be up for sale (more precisely, re-licensing) to the highest bidder as soon as the applicable contract term expired. This is not mere theorizing: it is a widely recognized feature of the entertainment industry.

False Assumption #2

Even assuming the combined entity could wield a portfolio of “must have” content to secure a dominant position in the pay TV market and raise content acquisition costs for rival pay TV services, it still would lack any meaningful pricing power in the relevant consumer market. The reason: significant portions of the viewing population do not want any pay TV or only want dramatically “slimmed-down” packages. Instead, viewers increasingly consume content primarily through video-streaming services—a market in which platforms such as Amazon and Netflix already enjoyed leading positions at the time of the transaction. Hence, even accepting the DOJ’s theory that the combined entity could somehow monopolize the pay TV market consisting of cable and satellite television services, the theory still fails to show any reasonable expectation of anticompetitive effects in the broader and economically relevant market comprising pay TV and streaming services.  Any attempt to exercise pricing power in the pay TV market would be economically self-defeating, since it would likely prompt a significant portion of consumers to switch to (or start to only use) streaming services.

The Antitrust Case for the Transaction

When properly situated within the market that was actually being targeted in the AT&T/Time Warner acquisition, the combined entity posed little credible threat of exercising pricing power. To the contrary, the combined entity was best understood as an entrant that sought to challenge the two pioneer entities—Amazon and Netflix—in the “over the top” content market.

Each of these incumbent platforms had (and still has) a multi-billion-dollar content-production budget that rivals or exceeds the budgets of major Hollywood studios, along with a worldwide subscriber base numbering in the hundreds of millions. Moreover, AT&T was not the only entity to observe the displacement of pay TV by streaming services, as illustrated by the roughly concurrent entry of Disney’s Disney+ service, Apple’s Apple TV+ service, Comcast NBCUniversal’s Peacock service, and others. Both the existing and the new competitors are formidable entities operating in a market with formidable capital requirements. In 2019, Netflix, Amazon, and Apple TV+ spent approximately $15 billion, $6 billion, and $6 billion, respectively, on content; by contrast, HBO Max, AT&T’s streaming service, spent approximately $3.5 billion.

In short, the combined entity faced stiff competition from existing and reasonably anticipated competitors, requiring several billion dollars of “content spend” just to stay in the running. Far from being able to exercise pricing power in an imaginary market defined by DOJ litigators for strategic purposes, the AT&T/Time Warner entity faced the challenge of merely surviving in a real-world market populated by several exceptionally well-financed competitors. At best, the combined entity “threatened” to deliver incremental competitive benefits by adding a robust new platform to the video-streaming market; at worst, it would fail in this objective and cause no incremental competitive harm. As it turns out, the latter appears to be the case.

The Enduring Virtues of Antitrust Prudence

AT&T’s M&A fiasco has important lessons for broader antitrust debates about the evidentiary standards that should be applied by courts and agencies when assessing alleged antitrust violations, in general, and vertical restraints, in particular.  

Among some scholars, regulators, and legislators, it has increasingly become received wisdom that prevailing evidentiary standards, as reflected in federal case law and agency guidelines, are excessively demanding and have purportedly induced chronic underenforcement. It has been widely asserted that the courts’ and regulators’ focus on avoiding “false positives” and the associated costs of disrupting innocuous or beneficial business practices has resulted in an overly cautious enforcement posture, especially with respect to mergers and vertical restraints.

In fact, these views were expressed by some commentators in endorsing the antitrust case against the AT&T/Time Warner transaction. Some legislators have gone further and argued for substantial amendments to the antitrust laws to provide enforcers and courts with greater latitude to block or re-engineer combinations that would not pose sufficiently demonstrated competitive risks under current statutory or case law.

The swift downfall of the AT&T/Time Warner transaction casts great doubt on this critique and the accompanying policy proposals. It was precisely the district court’s rigorous application of those “overly” demanding evidentiary standards that avoided what would have been a clear false-positive error. The failure of the “blockbuster” combination to achieve not only market dominance, but even reasonably successful entry, validates the wisdom of retaining those standards.

The fundamental mismatch between the widely supported antitrust case against the transaction and the widely overlooked business realities of the economically relevant consumer market illustrates the ease with which largely theoretical and decontextualized economic models of competitive harm can lead to enforcement actions that lack any reasonable basis in fact.   

Politico has released a cache of confidential Federal Trade Commission (FTC) documents in connection with a series of articles on the commission’s antitrust probe into Google Search a decade ago. The headline of the first piece in the series argues the FTC “fumbled the future” by failing to follow through on staff recommendations to pursue antitrust intervention against the company. 

But while the leaked documents shed interesting light on the inner workings of the FTC, they do very little to substantiate the case that the FTC dropped the ball when the commissioners voted unanimously not to bring an action against Google.

Drawn primarily from memos by the FTC’s lawyers, the Politico report purports to uncover key revelations that undermine the FTC’s decision not to sue Google. None of the revelations, however, provide evidence that Google’s behavior actually harmed consumers.

The report’s overriding claim—and the one most consistently forwarded by antitrust activists on Twitter—is that FTC commissioners wrongly sided with the agency’s economists (who cautioned against intervention) rather than its lawyers (who tenuously recommended very limited intervention). 

Indeed, the overarching narrative is that the lawyers knew what was coming, while the economists took positions that turned out to be wildly off the mark:

But the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed:

— They saw only “limited potential for growth” in ads that track users across the web — now the backbone of Google parent company Alphabet’s $182.5 billion in annual revenue.

— They expected consumers to continue relying mainly on computers to search for information. Today, about 62 percent of those queries take place on mobile phones and tablets, nearly all of which use Google’s search engine as the default.

— They thought rivals like Microsoft, Mozilla or Amazon would offer viable competition to Google in the market for the software that runs smartphones. Instead, nearly all U.S. smartphones run on Google’s Android and Apple’s iOS.

— They underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic.

The report thus asserts that:

The agency ultimately voted against taking action, saying changes Google made to its search algorithm gave consumers better results and therefore didn’t unfairly harm competitors.

That conclusion underplays what the FTC’s staff found during the probe. In 312 pages of documents, the vast majority never publicly released, staffers outlined evidence that Google had taken numerous steps to ensure it would continue to dominate the market — including emerging arenas such as mobile search and targeted advertising. [EMPHASIS ADDED]

What really emerges from the leaked memos, however, is analysis by both the FTC’s lawyers and economists infused with a healthy dose of humility. There were strong political incentives to bring a case. As one of us noted upon the FTC’s closing of the investigation: “It’s hard to imagine an agency under more pressure, from more quarters (including the Hill), to bring a case around search.” Yet FTC staff and commissioners resisted that pressure, because prediction is hard. 

Ironically, the very prediction errors that the agency’s staff cautioned against are now being held against them. Yet the claims that these errors (especially the economists’) systematically cut in one direction (i.e., against enforcement) and that all of their predictions were wrong are both wide of the mark. 

Decisions Under Uncertainty

In seeking to make an example out of the FTC economists’ inaccurate predictions, critics ignore that antitrust investigations in dynamic markets always involve a tremendous amount of uncertainty; false predictions are the norm. Accordingly, the key challenge for policymakers is not so much to predict correctly, but to minimize the impact of incorrect predictions.

Seen in this light, the FTC economists’ memo is far from the laissez-faire manifesto that critics make it out to be. Instead, it shows agency officials wrestling with uncertain market outcomes, and choosing a course of action under the assumption the predictions they make might indeed be wrong. 

Consider the following passage from FTC economist Ken Heyer’s memo:

The great American philosopher Yogi Berra once famously remarked “Predicting is difficult, especially about the future.” How right he was. And yet predicting, and making decisions based on those predictions, is what we are charged with doing. Ignoring the potential problem is not an option. So I will be reasonably clear about my own tentative conclusions and recommendation, recognizing that reasonable people, perhaps applying a somewhat different standard, may disagree. My recommendation derives from my read of the available evidence, combined with the standard I personally find appropriate to apply to Commission intervention. [EMPHASIS ADDED]

In other words, contrary to what many critics have claimed, it simply is not the case that the FTC’s economists based their recommendations on bullish predictions about the future that ultimately failed to transpire. Instead, they merely recognized that, in a dynamic and unpredictable environment, antitrust intervention requires both a clear-cut theory of anticompetitive harm and a reasonable probability that remedies can improve consumer welfare. According to the economists, those conditions were absent with respect to Google Search.

Perhaps more importantly, it is worth asking why the economists’ erroneous predictions matter at all. Do critics believe that developments the economists missed warrant a different normative stance today?

In that respect, it is worth noting that the economists’ skepticism appeared to have rested first and foremost on the speculative nature of the harms alleged and the difficulty associated with designing appropriate remedies. And yet, if anything, these two concerns appear even more salient today. 

Indeed, the remedies imposed against Google in the EU have not delivered the outcomes that enforcers expected (here and here). This could either be because the remedies were insufficient or because Google’s market position was not due to anticompetitive conduct. Similarly, there is still no convincing economic theory or empirical research to support the notion that exclusive pre-installation and self-preferencing by incumbents harm consumers, and a great deal of reason to think they benefit them (see, e.g., our discussions of the issue here and here). 

Against this backdrop, criticism of the FTC economists appears to be driven more by a prior assumption that intervention is necessary—and that it was and is disingenuous to think otherwise—than evidence that erroneous predictions materially affected the outcome of the proceedings.

To take one example, the fact that ad tracking grew faster than the FTC economists believed it would is no less consistent with vigorous competition—and Google providing a superior product—than with anticompetitive conduct on Google’s part. The same applies to the growth of mobile operating systems. Ditto the fact that no rival has managed to dislodge Google in its most important markets. 

In short, not only were the economist memos informed by the very prediction difficulties that critics are now pointing to, but critics have not shown that any of the staff’s (inevitably) faulty predictions warranted a different normative outcome.

Putting Erroneous Predictions in Context

So what were these faulty predictions, and how important were they? Politico asserts that “the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed,” tying this to the FTC’s failure to intervene against Google over “tactics that European regulators and the U.S. Justice Department would later label antitrust violations.” The clear message is that the current actions are presumptively valid, and that the FTC’s economists thwarted earlier intervention based on faulty analysis.

But it is far from clear that these faulty predictions would have justified taking a tougher stance against Google. One key question for antitrust authorities is whether they can be reasonably certain that more efficient competitors will be unable to dislodge an incumbent. This assessment is necessarily forward-looking. Framed this way, greater market uncertainty (for instance, because policymakers are dealing with dynamic markets) usually cuts against antitrust intervention.

This does not entirely absolve the FTC economists who made the faulty predictions. But it does suggest the right question is not whether the economists made mistakes, but whether virtually everyone did so. The latter would be evidence of uncertainty, and thus weigh against antitrust intervention.

In that respect, it is worth noting that the staff who recommended that the FTC intervene also misjudged the future of digital markets. For example, while Politico surmises that the FTC “underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic,” there is a case to be made that the FTC overestimated this power. If anything, Google’s continued growth has opened new niches in the online advertising space.

Pinterest provides a fitting example; despite relying heavily on Google for traffic, its ad-funded service has witnessed significant growth. The same is true of other vertical search engines like Airbnb, Booking.com, and Zillow. While we cannot know the counterfactual, the vertical search industry has certainly not been decimated by Google’s “monopoly”; quite the opposite. Unsurprisingly, this has coincided with a significant decrease in the cost of online advertising, and the growth of online advertising relative to other forms.

Politico asserts not only that the economists’ market share and market power calculations were wrong, but that the lawyers knew better:

The economists, relying on data from the market analytics firm Comscore, found that Google had only limited impact. They estimated that between 10 and 20 percent of traffic to those types of sites generally came from the search engine.

FTC attorneys, though, used numbers provided by Yelp and found that 92 percent of users visited local review sites from Google. For shopping sites like eBay and TheFind, the referral rate from Google was between 67 and 73 percent.

This compares apples and oranges, or maybe oranges and grapefruit. The economists’ data, from Comscore, applied to vertical search overall. They explicitly noted that shares for particular sites could be much higher or lower: for comparison shopping, for example, “ranging from 56% to less than 10%.” This, of course, highlights a problem with the data provided by Yelp, et al.: it concerns only the websites of companies complaining about Google, not the overall flow of traffic for vertical search.

But the more important point is that none of the data discussed in the memos represents the overall flow of traffic for vertical search. Take Yelp, for example. According to the lawyers’ memo, 92 percent of Yelp searches were referred from Google. Only, that’s not true. We know it’s not true because, as Yelp CEO Jeremy Stoppelman pointed out around this time in Yelp’s 2012 Q2 earnings call:

When you consider that 40% of our searches come from mobile apps, there is quite a bit of un-monetized mobile traffic that we expect to unlock in the near future.

The numbers being analyzed by the FTC staff were apparently limited to referrals to Yelp’s website from browsers. But is there any reason to think that is the relevant market, or the relevant measure of customer access? Certainly there is nothing in the staff memos to suggest they considered the full scope of the market very carefully here. Indeed, the footnote in the lawyers’ memo presenting the traffic data is offered in support of this claim:

Vertical websites, such as comparison shopping and local websites, are heavily dependent on Google’s web search results to reach users. Thus, Google is in the unique position of being able to “make or break any web-based business.”

It’s plausible that vertical search traffic is “heavily dependent” on Google Search, but the numbers offered in support of that simply ignore the (then) 40 percent of traffic that Yelp acquired through its own mobile app, with no Google involvement at all. In any case, it is also notable that, while there are still somewhat fewer app users than web users (although the number has consistently increased), Yelp’s app users view significantly more pages than its website users do — 10 times as many in 2015, for example.
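
To make the point concrete, here is a back-of-the-envelope calculation (ours, not the staff’s), resting on the assumption that the 92 percent referral figure applied only to the browser-based portion of Yelp’s traffic. On those numbers, Google would have accounted for roughly 55 percent of Yelp’s total search traffic, not 92 percent:

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
# Assumption (ours): the 92% Google-referral share applied solely to the
# browser-based portion of Yelp's traffic, not to its mobile-app traffic.
mobile_app_share = 0.40                # Yelp searches via its own mobile app
web_share = 1 - mobile_app_share       # remaining browser-based searches
google_referral_share_of_web = 0.92    # figure cited in the lawyers' memo

google_share_of_total = google_referral_share_of_web * web_share
print(f"{google_share_of_total:.0%}")  # -> 55%, not 92%
```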

Also noteworthy is that, for whatever speculative harm Google might be able to visit on the company, at the time of the FTC’s analysis Yelp’s local ad revenue was consistently increasing — by 89% in Q3 2012. And that was without any ad revenue coming from its app (display ads arrived on Yelp’s mobile app in Q1 2013, a few months after the staff memos were written and just after the FTC closed its Google Search investigation). 

In short, the search-engine industry is extremely dynamic and unpredictable. Contrary to what many have surmised from the FTC staff memo leaks, this cuts against antitrust intervention, not in favor of it.

The FTC Lawyers’ Weak Case for Prosecuting Google

At the same time, although not discussed by Politico, the lawyers’ memo also contains errors, suggesting that arguments for intervention were also (inevitably) subject to erroneous prediction.

Among other things, the FTC attorneys’ memo argued that large upfront investments were required to develop cutting-edge algorithms, and that these effectively shielded Google from competition. The memo cites the following as a barrier to entry:

A search engine requires algorithmic technology that enables it to search the Internet, retrieve and organize information, index billions of regularly changing web pages, and return relevant results instantaneously that satisfy the consumer’s inquiry. Developing such algorithms requires highly specialized personnel with high levels of training and knowledge in engineering, economics, mathematics, sciences, and statistical analysis.

If there are barriers to entry in the search-engine industry, algorithms do not seem to be the source. While their market shares may be smaller than Google’s, rival search engines like DuckDuckGo and Bing have been able to enter and gain traction; it is difficult to say that algorithmic technology has proven a barrier to entry. It may be hard to do well, but it certainly has not proved an impediment to new firms entering and developing workable and successful products. Indeed, some extremely successful companies have entered into similar advertising markets on the backs of complex algorithms, notably Instagram, Snapchat, and TikTok. All of these compete with Google for advertising dollars.

The FTC’s legal staff also failed to see that Google would face serious competition in the rapidly growing voice assistant market. In other words, even its search-engine “moat” is far less impregnable than it might at first appear.

Moreover, as Ben Thompson argues in his Stratechery newsletter: 

The Staff memo is completely wrong too, at least in terms of the potential for their proposed remedies to lead to any real change in today’s market. This gets back to why the fundamental premise of the Politico article, along with much of the antitrust chatter in Washington, misses the point: Google is dominant because consumers like it.

This difficulty was deftly highlighted by Heyer’s memo:

If the perceived problems here can be solved only through a draconian remedy of this sort, or perhaps through a remedy that eliminates Google’s legitimately obtained market power (and thus its ability to “do evil”), I believe the remedy would be disproportionate to the violation and that its costs would likely exceed its benefits. Conversely, if a remedy well short of this seems likely to prove ineffective, a remedy would be undesirable for that reason. In brief, I do not see a feasible remedy for the vertical conduct that would be both appropriate and effective, and which would not also be very costly to implement and to police. [EMPHASIS ADDED]

Of course, we now know that this turned out to be a huge issue with the EU’s competition cases against Google. The remedies in both the EU’s Google Shopping and Android decisions were severely criticized by rival firms and consumer-defense organizations (here and here), but were ultimately upheld, in part because even the European Commission likely saw more forceful alternatives as disproportionate.

And in the few places where the legal staff concluded that Google’s conduct may have caused harm, there is good reason to think that their analysis was flawed.

Google’s ‘revenue-sharing’ agreements

It should be noted that neither the lawyers nor the economists at the FTC were particularly bullish on bringing suit against Google. In most areas of the investigation, neither recommended that the commission pursue a case. But one of the most interesting revelations from the recent leaks is that FTC lawyers did advise the commission’s leadership to sue Google over revenue-sharing agreements that called for it to pay Apple and other carriers and manufacturers to pre-install its search bar on mobile devices:

FTC staff urged the agency’s five commissioners to sue Google for signing exclusive contracts with Apple and the major wireless carriers that made sure the company’s search engine came pre-installed on smartphones.

The lawyers’ stance is surprising, and, despite actions subsequently brought by the EU and DOJ on similar claims, a difficult one to countenance. 

To a first approximation, this behavior is precisely what antitrust law seeks to promote: we want companies to compete aggressively to attract consumers. This conclusion is in no way altered when competition is “for the market” (in this case, firms bidding for exclusive placement of their search engines) rather than “in the market” (i.e., equally placed search engines competing for eyeballs).

Competition for exclusive placement has several important benefits. For a start, revenue-sharing agreements effectively subsidize consumers’ mobile device purchases. As Brian Albrecht aptly puts it:

This payment from Google means that Apple can lower its price to better compete for consumers. This is standard; some of the payment from Google to Apple will be passed through to consumers in the form of lower prices.

This finding is not new. For instance, Ronald Coase famously argued that the Federal Communications Commission (FCC) was wrong to ban the broadcasting industry’s equivalent of revenue-sharing agreements, so-called payola:

[I]f the playing of a record by a radio station increases the sales of that record, it is both natural and desirable that there should be a charge for this. If this is not done by the station and payola is not allowed, it is inevitable that more resources will be employed in the production and distribution of records, without any gain to consumers, with the result that the real income of the community will tend to decline. In addition, the prohibition of payola may result in worse record programs, will tend to lessen competition, and will involve additional expenditures for regulation. The gain which the ban is thought to bring is to make the purchasing decisions of record buyers more efficient by eliminating “deception.” It seems improbable to me that this problematical gain will offset the undoubted losses which flow from the ban on Payola.

Applying this logic to Google Search, it is clear that a ban on revenue-sharing agreements would merely lead both Google and its competitors to attract consumers via alternative means. For Google, this might involve “complete” vertical integration into the mobile phone market, rather than the open-licensing model that underpins the Android ecosystem. Valuable specialization may be lost in the process.

Moreover, from Apple’s standpoint, Google’s revenue-sharing agreements are profitable only to the extent that consumers actually like Google’s products. If it turns out they don’t, Google’s payments to Apple may be outweighed by lower iPhone sales. It is thus unlikely that these agreements significantly undermined users’ experience. To the contrary, Apple’s testimony before the European Commission suggests that “exclusive” placement of Google’s search engine was mostly driven by consumer preferences (as the FTC economists’ memo points out):

Apple would not offer simultaneous installation of competing search or mapping applications. Apple’s focus is offering its customers the best products out of the box while allowing them to make choices after purchase. In many countries, Google offers the best product or service … Apple believes that offering additional search boxes on its web browsing software would confuse users and detract from Safari’s aesthetic. Too many choices lead to consumer confusion and greatly affect the ‘out of the box’ experience of Apple products.

Similarly, Kevin Murphy and Benjamin Klein have shown that exclusive contracts intensify competition for distribution. In other words, absent theories of platform envelopment that are arguably inapplicable here, competition for exclusive placement would lead competing search engines to up their bids, ultimately lowering the price of mobile devices for consumers.

Indeed, this revenue-sharing model was likely essential to spur the development of Android in the first place. Without this prominent placement of Google Search on Android devices (notably thanks to revenue-sharing agreements with original equipment manufacturers), Google would likely have been unable to monetize the investment it made in the open source—and thus freely distributed—Android operating system. 

In short, Politico and the FTC legal staff do little to show that Google’s revenue-sharing payments excluded rivals that were, in fact, as efficient. In other words, Bing and Yahoo’s failure to gain traction may simply be the result of inferior products and cost structures. Critics thus fail to show that Google’s behavior harmed consumers, which is the touchstone of antitrust enforcement.

Self-preferencing

Another finding critics claim as important is that FTC leadership declined to bring suit against Google for preferencing its own vertical search services (this information had already been partially leaked by the Wall Street Journal in 2015). Politico’s framing implies this was a mistake:

When Google adopted one algorithm change in 2011, rival sites saw significant drops in traffic. Amazon told the FTC that it saw a 35 percent drop in traffic from the comparison-shopping sites that used to send it customers

The focus on this claim is somewhat surprising. Even the leaked FTC legal staff memo found this theory of harm had little chance of standing up in court:

Staff has investigated whether Google has unlawfully preferenced its own content over that of rivals, while simultaneously demoting rival websites…. 

…Although it is a close call, we do not recommend that the Commission proceed on this cause of action because the case law is not favorable to our theory, which is premised on anticompetitive product design, and in any event, Google’s efficiency justifications are strong. Most importantly, Google can legitimately claim that at least part of the conduct at issue improves its product and benefits users. [EMPHASIS ADDED]

More importantly, as one of us has argued elsewhere, the underlying problem lies not with Google, but with a standard asset-specificity trap:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control…. 

…It was entirely predictable, and should have been expected, that Google’s algorithm would evolve. It was also entirely predictable that it would evolve in ways that could diminish or even tank Foundem’s traffic. As one online marketing/SEO expert puts it: On average, Google makes about 500 algorithm changes per year. 500!….

…In the absence of an explicit agreement, should Google be required to make decisions that protect a dependent company’s “asset-specific” investments, thus encouraging others to take the same, excessive risk? 

Even if consumers happily visited rival websites when they were higher-ranked and traffic subsequently plummeted when Google updated its algorithm, that drop in traffic does not amount to evidence of misconduct. To hold otherwise would be to grant these rivals a virtual entitlement to the state of affairs that exists at any given point in time. 

Indeed, there is good reason to believe Google’s decision to favor its own content over that of other sites is procompetitive. Beyond determining and ensuring relevance, Google surely has the prerogative to compete vigorously and decide how to design its products to keep up with a changing market. In this case, that means designing, developing, and offering its own content in ways that partially displace the original “ten blue links” design of its search results page and instead offer its own answers to users’ queries.

Competitor Harm Is Not an Indicator of the Need for Intervention

Some of the other information revealed by the leak is even more tangential, such as that the FTC ignored complaints from Google’s rivals:

Amazon and Facebook privately complained to the FTC about Google’s conduct, saying their business suffered because of the company’s search bias, scraping of content from rival sites and restrictions on advertisers’ use of competing search engines. 

Amazon said it was so concerned about the prospect of Google monopolizing the search advertising business that it willingly sacrificed revenue by making ad deals aimed at keeping Microsoft’s Bing and Yahoo’s search engine afloat.

But complaints from rivals are at least as likely to stem from vigorous competition as from anticompetitive exclusion. This goes to a core principle of antitrust enforcement: antitrust law seeks to protect competition and consumer welfare, not rivals. Competition will always lead to winners and losers. Antitrust law protects this process and (at least theoretically) ensures that rivals cannot manipulate enforcers to safeguard their economic rents. 

This explains why Frank Easterbrook—in his seminal work on “The Limits of Antitrust”—argued that enforcers should be highly suspicious of complaints lodged by rivals:

Antitrust litigation is attractive as a method of raising rivals’ costs because of the asymmetrical structure of incentives…. 

…One line worth drawing is between suits by rivals and suits by consumers. Business rivals have an interest in higher prices, while consumers seek lower prices. Business rivals seek to raise the costs of production, while consumers have the opposite interest…. 

…They [antitrust enforcers] therefore should treat suits by horizontal competitors with the utmost suspicion. They should dismiss outright some categories of litigation between rivals and subject all such suits to additional scrutiny.

Google’s competitors spent millions pressuring the FTC to bring a case against the company. But why should it be a failing for the FTC to resist such pressure? Indeed, as then-commissioner Tom Rosch admonished in an interview following the closing of the case:

They [Google’s competitors] can darn well bring [a case] as a private antitrust action if they think their ox is being gored instead of free-riding on the government to achieve the same result.

Not that they would likely win such a case. Google’s introduction of specialized shopping results (via the Google Shopping box) likely enabled several retailers to bypass the Amazon platform, thus increasing competition in the retail industry. Although this may have temporarily reduced Amazon’s traffic and revenue (Amazon’s sales have grown dramatically since then), it is exactly the outcome that antitrust laws are designed to protect.

Conclusion

When all is said and done, Politico’s revelations provide a rarely glimpsed look into the complex dynamics within the FTC, which many wrongly imagine to be a monolithic agency. Put simply, the FTC’s commissioners, lawyers, and economists often disagree vehemently about the appropriate course of conduct. This is a good thing. As in many other walks of life, having a market for ideas is a sure way to foster sound decision making.

But in the final analysis, what the revelations do not show is that the FTC’s market for ideas failed consumers a decade ago when it declined to bring an antitrust suit against Google. They thus do little to cement the case for antitrust intervention—whether a decade ago, or today.

In a constructive development, the Federal Trade Commission has joined its British counterpart in investigating Nvidia’s proposed $40 billion acquisition of chip designer Arm, a subsidiary of SoftBank. Arm provides the technological blueprints for wireless communications devices and, subject to a royalty fee, makes those crown-jewel assets available to all interested firms. Notwithstanding Nvidia’s stated commitment to keep the existing policy in place, there is an obvious risk that the new parent, one of the world’s leading chip makers, would at some point modify this policy with adverse competitive effects.

Ironically, the FTC is likely part of the reason that the Nvidia-Arm transaction is taking place.

Since the mid-2000s, the FTC and other leading competition regulators (except for the U.S. Department of Justice’s Antitrust Division under the leadership of former Assistant Attorney General Makan Delrahim) have intervened extensively in licensing arrangements in wireless device markets, culminating in the FTC’s recent failed suit against Qualcomm. The Nvidia-Arm transaction suggests that these actions may simply lead chip designers to abandon the licensing model and shift toward structures that monetize chip-design R&D through integrated hardware and software ecosystems. Amazon and Apple are already undertaking chip innovation through this model. Antitrust action that accelerates this movement toward in-house chip design is likely to have adverse effects on the competitive health of the wireless ecosystem.

How IP Licensing Promotes Market Access

Since its inception, the wireless communications market has relied on a handful of IP licensors to supply device producers and other intermediate users with a common suite of technology inputs. The result has been an efficient division of labor between firms that specialize in upstream innovation and firms that specialize in production and other downstream functions. Contrary to the standard assumption that IP rights limit access, this licensing-based model ensures technology access to any firm willing to pay the royalty fee.

Efforts by regulators to reengineer existing relationships between innovators and implementers endanger this market structure by inducing innovators to abandon licensing-based business models, which now operate under a cloud of legal insecurity, for integrated business models in which returns on R&D investments are captured internally through hardware and software products. Rather than expanding technology access and intensifying competition, antitrust restraints on licensing freedom are liable to limit technology access and increase market concentration.

Regulatory Intervention and Market Distortion

This interventionist approach has relied on the assertion that innovators can “lock in” producers and extract a disproportionate fee in exchange for access. This prediction has never found support in fact. Contrary to theoretical arguments that patent owners can impose double-digit “royalty stacks” on device producers, empirical researchers have repeatedly found that the estimated range of aggregate rates lies in the single digits. These findings are unsurprising given market performance over more than two decades: adoption has accelerated as quality-adjusted prices have fallen and innovation has never ceased. If rates were exorbitant, market growth would have been slow, and the smartphone would be a luxury for the rich.

Despite these empirical infirmities, the FTC and other competition regulators have persisted in taking action to mitigate “holdup risk” through policy statements and enforcement actions designed to preclude IP licensors from seeking injunctive relief. The result is a one-sided legal environment in which the world’s largest device producers can effectively infringe patents at will, knowing that the worst-case scenario is a “reasonable royalty” award determined by a court, plus attorneys’ fees. Without any credible threat to deny access even after a favorable adjudication on the merits, any IP licensor’s ability to negotiate a royalty rate that reflects the value of its technology contribution is constrained.

With no change in IP-licensing policy on the horizon, it is therefore not surprising that an IP licensor would seek to shift toward an integrated business model in which IP is not licensed but embedded within an integrated suite of products and services. Alternatively, an IP licensor might seek to be acquired by a firm that already has such a model in place. Hence, FTC v. Qualcomm leads Arm to Nvidia.

The Error Costs of Non-Evidence-Based Antitrust

These counterproductive effects of antitrust intervention demonstrate the error costs that arise when regulators act based on unverified assertions of impending market failure. Relying on the somewhat improbable assumption that chip suppliers can dictate licensing terms to device producers that are among the world’s largest companies, competition regulators have placed at risk the legal predicates of IP rights and enforceable contracts that have made the wireless-device market an economic success. As antitrust risk intensifies, the return on licensing strategies falls and competitive advantage shifts toward integrated firms that can monetize R&D internally through stand-alone product and service ecosystems.

Far from increasing competitiveness, regulators’ current approach toward IP licensing in wireless markets is likely to reduce it.