
Over the past decade and a half, virtually every branch of the federal government has taken steps to weaken the patent system. As reflected in President Joe Biden’s July 2021 executive order, these restraints on patent enforcement are now being coupled with antitrust policies that, in large part, adopt a “big is bad” approach in place of decades of economically grounded case law and agency guidelines.

This policy bundle is nothing new. It largely replicates the innovation policies pursued during the late New Deal and the postwar decades. That historical experience suggests that a “weak-patent/strong-antitrust” approach is likely to encourage neither innovation nor competition.

The Overlooked Shortfalls of New Deal Innovation Policy

Starting in the early 1930s, the U.S. Supreme Court issued a sequence of decisions that raised obstacles to patent enforcement. The Franklin Roosevelt administration sought to take this policy a step further, advocating compulsory licensing for all patents. While Congress did not adopt this proposal, it was partially implemented as a de facto matter through antitrust enforcement. Starting in the early 1940s and continuing throughout the postwar decades, the antitrust agencies secured judicial precedents that treated a broad range of licensing practices as per se illegal. Perhaps most dramatically, the U.S. Justice Department (DOJ) secured more than 100 compulsory licensing orders against some of the nation’s largest companies. 

The rationale behind these policies was straightforward. By compelling access to incumbents’ patented technologies, courts and regulators would lower barriers to entry and competition would intensify. The postwar economy declined to comply with policymakers’ expectations. Implementation of a weak-IP/strong-antitrust innovation policy over the course of four decades yielded the opposite of its intended outcome. 

Market concentration did not diminish, turnover in market leadership was slow, and private research and development (R&D) was confined mostly to the research labs of the largest corporations (which often relied on generous infusions of federal defense funding). These tendencies are illustrated by the dramatically unequal allocation of innovation capital in the postwar economy. As of the late 1950s, small firms represented approximately 7% of all private U.S. R&D expenditures. Two decades later, that figure had fallen even further. By the late 1970s, patenting rates had plunged, and entrepreneurship and innovation were in a state of widely lamented decline.

Why Weak IP Raises Entry Costs and Promotes Concentration

The decline in entrepreneurial innovation under a weak-IP regime was not accidental. Rather, this outcome can be derived logically from the economics of information markets.

Without secure IP rights to establish exclusivity, engage securely with business partners, and deter imitators, potential innovator-entrepreneurs had little hope of obtaining funding from investors. In contrast, incumbents could fund R&D internally (or with federal funds that flowed mostly to the largest computing, communications, and aerospace firms) and, even under a weak-IP regime, were protected by difficult-to-match production and distribution efficiencies. As a result, R&D mostly took place inside the closed ecosystems maintained by incumbents such as AT&T, IBM, and GE.

Paradoxically, the antitrust campaign against patent “monopolies” most likely raised entry barriers and promoted industry concentration by removing a critical tool that smaller firms might have used to challenge incumbents that could outperform them on every competitive parameter except innovation. While the large corporate labs of the postwar era are rightly credited with technological breakthroughs, incumbents such as AT&T were often slow to transform advances in basic research into commercially viable products and services for consumers. Without an immediate competitive threat, there was no rush to do so.

Back to the Future: Innovation Policy in the New New Deal

Policymakers are now at work reassembling almost the exact same policy bundle that ended in the innovation malaise of the 1970s, accompanied by a similar reliance on public R&D funding disbursed through administrative processes. However well-intentioned, these processes are inherently exposed to political distortions that are absent in an innovation environment that relies mostly on private R&D funding governed by price signals. 

This policy bundle has emerged incrementally since approximately the mid-2000s, through a sequence of complementary actions by every branch of the federal government.

  • In 2011, Congress enacted the America Invents Act, which enables any party to challenge the validity of an issued patent through the U.S. Patent and Trademark Office’s (USPTO) Patent Trial and Appeal Board (PTAB). Since PTAB’s establishment, large information-technology companies that advocated for the act have been among the leading challengers.
  • In May 2021, the Office of the U.S. Trade Representative (USTR) declared its support for a worldwide suspension of IP protections over Covid-19-related innovations (rather than adopting the more nuanced approach of preserving patent protections and expanding funding to accelerate vaccine distribution).  
  • President Biden’s July 2021 executive order states that “the Attorney General and the Secretary of Commerce are encouraged to consider whether to revise their position on the intersection of the intellectual property and antitrust laws, including by considering whether to revise the Policy Statement on Remedies for Standard-Essential Patents Subject to Voluntary F/RAND Commitments.” This suggests that the administration has already determined to retract or significantly modify the 2019 joint policy statement in which the DOJ, USPTO, and the National Institute of Standards and Technology (NIST) rejected the view that standard-essential patent owners pose a risk of patent holdup serious enough to justify special limitations on enforcement and licensing activities.

The history of U.S. technology markets and policies casts great doubt on the wisdom of this weak-IP policy trajectory. The repeated devaluation of IP rights is likely to be a “lose-lose” approach that does little to promote competition, while endangering the incentive and transactional structures that sustain robust innovation ecosystems. A weak-IP regime is particularly likely to disadvantage smaller firms in biotech, medical devices, and certain information-technology segments that rely on patents to secure funding from venture capital and to partner with larger firms that can accelerate progress toward market release. The BioNTech/Pfizer alliance in the production and distribution of a Covid-19 vaccine illustrates how patents can enable such partnerships to accelerate market release.  

The innovative contribution of BioNTech is hardly a one-off occurrence. The restoration of robust patent protection in the early 1980s was followed by a sharp increase in the percentage of private R&D expenditures attributable to small firms, which jumped from about 5% as of 1980 to 21% by 1992. This contrasts sharply with the unequal allocation of R&D activities during the postwar period.

Remarkably, the resurgence of small-firm innovation that followed the late-20th-century shift back to robust IP protection mimics tendencies observed during the late 19th and early 20th centuries, when U.S. courts provided a hospitable venue for patent enforcement, there were few antitrust constraints on licensing activities, and innovation was often led by small firms in partnership with outside investors. This historical pattern, encompassing more than a century of U.S. technology markets, strongly suggests that strengthening IP rights tends to yield a policy “win-win” that bolsters both innovative and competitive intensity.

An Alternate Path: ‘Bottom-Up’ Innovation Policy

To be clear, the alternative to the weak-IP/strong-antitrust policy bundle does not consist of a simple reversion to blind enforcement of patents and lax administration of the antitrust laws. A nuanced innovation policy would couple modern antitrust’s commitment to evidence-based enforcement—which, in particular cases, supports vigorous intervention—with a renewed commitment to protecting IP rights for innovator-entrepreneurs. That would promote competition from the “bottom up” by bolstering maverick innovators who are well-positioned to challenge (or sometimes partner with) incumbents, and by maintaining the self-starting engine of creative disruption that has repeatedly driven entrepreneurial innovation environments. Tellingly, technology incumbents have often been among the leading advocates for limiting patent and copyright protections.

Advocates of a weak-patent/strong-antitrust policy believe it will enhance competitive and innovative intensity in technology markets. History suggests that this combination is likely to produce the opposite outcome.  

Jonathan M. Barnett is the Torrey H. Webb Professor of Law at the University of Southern California, Gould School of Law. This post is based on the author’s recent publications, Innovators, Firms, and Markets: The Organizational Logic of Intellectual Property (Oxford University Press 2021) and “The Great Patent Grab,” in Battles Over Patents: History and the Politics of Innovation (eds. Stephen H. Haber and Naomi R. Lamoreaux, Oxford University Press 2021).

Questions about the role that economic theory should play in policy decisions are nothing new. Milton Friedman famously drew a distinction between “positive” and “normative” economics, arguing that theoretical models were valuable despite their unrealistic assumptions. Kenneth Arrow and Gerard Debreu’s highly theoretical work on general equilibrium theory is widely acknowledged as one of the most important achievements of modern economics.

But for all their intellectual value and academic merit, the use of models to inform policy decisions is not uncontroversial. There is indeed a long and unfortunate history of influential economic models turning out to be poor depictions (and predictors) of real-world outcomes.

This raises a key question: should policymakers use economic models to inform their decisions and, if so, how? This post uses the economics of externalities to illustrate both the virtues and pitfalls of economic modeling. Throughout economic history, externalities have routinely been cited to support claims of market failure and calls for government intervention. However, as explained below, these fears have frequently failed to withstand empirical scrutiny.

Today, similar models are touted to support government intervention in digital industries. Externalities are said, among other things, to prevent consumers from switching between platforms, allegedly leading to unassailable barriers to entry and deficient venture-capital investment. Unfortunately, as explained below, the models that underpin these fears are highly abstract and far removed from underlying market realities.

Ultimately, this post argues that, while models provide a powerful way of thinking about the world, naïvely transposing them to real-world settings is misguided. This is not to say that models are useless—quite the contrary. Indeed, “falsified” models can shed powerful light on economic behavior that would otherwise prove hard to understand.

Bees

Fears surrounding economic externalities are as old as modern economics. For example, in the 1950s, economists routinely cited bee pollination as a source of externalities and, ultimately, market failure.

The basic argument was straightforward: Bees and orchards provide each other with positive externalities. Bees cross-pollinate flowers, and orchards contain vast amounts of nectar upon which bees feed, thus improving honey yields. Accordingly, several famous economists argued that there was a market failure: bees fly where they please, and farmers cannot prevent them from feeding on blossoming flowers—allegedly causing underinvestment in both orchards and hives. This led James Meade to conclude:

[T]he apple-farmer provides to the beekeeper some of his factors free of charge. The apple-farmer is paid less than the value of his marginal social net product, and the beekeeper receives more than the value of his marginal social net product.

A finding echoed by Francis Bator:

If, then, apple producers are unable to protect their equity in apple-nectar and markets do not impute to apple blossoms their correct shadow value, profit-maximizing decisions will fail correctly to allocate resources at the margin. There will be failure “by enforcement.” This is what I would call an ownership externality. It is essentially Meade’s “unpaid factor” case.
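To make the logic behind Meade’s and Bator’s claims explicit, it can be restated in stylized textbook form (the notation here is mine, not theirs). The apple farmer chooses the number of blossoming trees $q$ to maximize his own profit, stopping where marginal private benefit equals marginal cost, $MPB(q^*) = MC(q^*)$. But each additional blossom also confers an uncompensated benefit $MEB(q) > 0$ on the neighboring beekeeper, so the social optimum instead satisfies $MPB(q^{**}) + MEB(q^{**}) = MC(q^{**})$, which implies $q^{**} > q^*$. Because the farmer is not paid for the nectar (the “unpaid factor”), he allegedly plants too few trees, and symmetric reasoning implies the beekeeper keeps too few hives.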

It took more than 20 years and painstaking research by Steven Cheung to conclusively debunk these assertions. So how did economic agents overcome this “insurmountable” market failure?

The answer, it turns out, was extremely simple. While bees do fly where they please, the relative placement of beehives and orchards has a tremendous impact on both fruit and honey yields. This is partly because bees have a very limited mean foraging range (roughly 2-3km). This left economic agents with ample scope to prevent free-riding.

Using these natural sources of excludability, they built a web of complex agreements that internalize the symbiotic virtues of beehives and fruit orchards. To cite Steven Cheung’s research:

Pollination contracts usually include stipulations regarding the number and strength of the colonies, the rental fee per hive, the time of delivery and removal of hives, the protection of bees from pesticide sprays, and the strategic placing of hives. Apiary lease contracts differ from pollination contracts in two essential aspects. One is, predictably, that the amount of apiary rent seldom depends on the number of colonies, since the farmer is interested only in obtaining the rent per apiary offered by the highest bidder. Second, the amount of apiary rent is not necessarily fixed. Paid mostly in honey, it may vary according to either the current honey yield or the honey yield of the preceding year.

But what of neighboring orchards? Wouldn’t these entail a more complex externality (i.e., could one orchard free-ride on agreements concluded between other orchards and neighboring apiaries)? Apparently not:

Acknowledging the complication, beekeepers and farmers are quick to point out that a social rule, or custom of the orchards, takes the place of explicit contracting: during the pollination period the owner of an orchard either keeps bees himself or hires as many hives per area as are employed in neighboring orchards of the same type. One failing to comply would be rated as a “bad neighbor,” it is said, and could expect a number of inconveniences imposed on him by other orchard owners. This customary matching of hive densities involves the exchange of gifts of the same kind, which apparently entails lower transaction costs than would be incurred under explicit contracting, where farmers would have to negotiate and make money payments to one another for the bee spillover.

Not only did the bee/orchard externality model fail; it failed to account for extremely obvious counter-evidence. Even a rapid flip through the Yellow Pages (or, today, a search on Google) would have revealed a vibrant market for bee pollination. In short, the bee externalities, at least as presented in economic textbooks, were merely an economic “fable.” Unfortunately, they would not be the last.

The Lighthouse

Lighthouses provide another cautionary tale. Indeed, Henry Sidgwick, A.C. Pigou, John Stuart Mill, and Paul Samuelson all cited the externalities involved in the provision of lighthouse services as a source of market failure.

Here, too, the problem was allegedly straightforward. A lighthouse cannot prevent ships from free-riding on its services when they sail by it (i.e., it is mostly impossible to determine whether a ship has paid fees and to turn off the lighthouse if that is not the case). Hence there can be no efficient market for light dues (lighthouses were seen as a “public good”). As Paul Samuelson famously put it:

Take our earlier case of a lighthouse to warn against rocks. Its beam helps everyone in sight. A businessman could not build it for a profit, since he cannot claim a price from each user. This certainly is the kind of activity that governments would naturally undertake.

He added that:

[E]ven if the operators were able—say, by radar reconnaissance—to claim a toll from every nearby user, that fact would not necessarily make it socially optimal for this service to be provided like a private good at a market-determined individual price. Why not? Because it costs society zero extra cost to let one extra ship use the service; hence any ships discouraged from those waters by the requirement to pay a positive price will represent a social economic loss—even if the price charged to all is no more than enough to pay the long-run expenses of the lighthouse.
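In stylized terms (the notation is mine, not Samuelson’s), this second claim is the textbook zero-marginal-cost pricing argument. If serving one more ship costs the lighthouse nothing, so that $MC = 0$, welfare is maximized at a price $p = MC = 0$; any positive toll $p > 0$ turns away every ship whose valuation $v$ of the signal satisfies $0 < v < p$, and each such forgone voyage is a deadweight loss, even if total toll revenue merely covers the lighthouse’s fixed costs. Coase’s rejoinder, discussed below, was that in practice this set of discouraged ships was close to empty.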

More than a century after it was first mentioned in economics textbooks, Ronald Coase finally laid the lighthouse myth to rest—rebutting Samuelson’s second claim in the process.

What piece of evidence had eluded economists for all those years? As Coase observed, contemporary economists had somehow overlooked the fact that large parts of the British lighthouse system were privately operated, and had been for centuries:

[T]he right to operate a lighthouse and to levy tolls was granted to individuals by Acts of Parliament. The tolls were collected at the ports by agents (who might act for several lighthouses), who might be private individuals but were commonly customs officials. The toll varied with the lighthouse and ships paid a toll, varying with the size of the vessel, for each lighthouse passed. It was normally a rate per ton (say 1/4d or 1/2d) for each voyage. Later, books were published setting out the lighthouses passed on different voyages and the charges that would be made.

In other words, lighthouses used a simple physical feature to create “excludability” and prevent free-riding. The main reason ships require lighthouses is to avoid hitting rocks when they make their way to a port. By tying port fees and light dues, lighthouse owners—aided by mild government-enforced property rights—could easily earn a return on their investments, thus disproving the lighthouse free-riding myth.

Ultimately, this meant that a large share of the British lighthouse system was privately operated throughout the 19th century, and this share would presumably have been more pronounced if government-run “Trinity House” lighthouses had not crowded out private investment:

The position in 1820 was that there were 24 lighthouses operated by Trinity House and 22 by private individuals or organizations. But many of the Trinity House lighthouses had not been built originally by them but had been acquired by purchase or as the result of the expiration of a lease.

Of course, this system was not perfect. Some ships (notably foreign ones that did not dock in the United Kingdom) might free-ride on this arrangement. It also entailed some level of market power: the ability to charge light dues meant that prices were higher than the “socially optimal” baseline of zero (the marginal cost of providing light is close to zero). It is worth noting, though, that tying port fees and light dues might also have decreased double marginalization, to the benefit of sailors.

Samuelson was particularly wary of the market power that went hand in hand with the private provision of public goods, including lighthouses:

Being able to limit a public good’s consumption does not make it a true-blue private good. For what, after all, are the true marginal costs of having one extra family tune in on the program? They are literally zero. Why then prevent any family which would receive positive pleasure from tuning in on the program from doing so?

However, as Coase explained, light fees represented only a tiny fraction of a ship’s costs. In practice, they were thus unlikely to affect market output meaningfully:

[W]hat is the gain which Samuelson sees as coming from this change in the way in which the lighthouse service is financed? It is that some ships which are now discouraged from making a voyage to Britain because of the light dues would in future do so. As it happens, the form of the toll and the exemptions mean that for most ships the number of voyages will not be affected by the fact that light dues are paid. There may be some ships somewhere which are laid up or broken up because of the light dues, but the number cannot be great, if indeed there are any ships in this category.

Samuelson’s critique also falls prey to the Nirvana Fallacy pointed out by Harold Demsetz: markets might not be perfect, but neither is government intervention. Market power and imperfect appropriability are the two (paradoxical) pitfalls of the former; “white elephants,” underinvestment, and lack of competition (and the information it generates) tend to stem from the latter.

Which of these solutions is superior, in each case, is an empirical question that early economists had simply failed to consider—assuming instead that market failure was systematic in markets that present prima facie externalities. In other words, models were taken as gospel without any circumspection about their relevance to real-world settings.

The Tragedy of the Commons

Externalities were also said to undermine the efficient use of “common pool resources,” such as grazing lands, common irrigation systems, and fisheries—resources where one agent’s use diminishes that of others, and where exclusion is either difficult or impossible.

The most famous formulation of this problem is Garrett Hardin’s highly influential (more than 47,000 citations) “tragedy of the commons.” Hardin cited the example of multiple herdsmen occupying the same grazing ground:

The rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another … But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.

In more technical terms, each economic agent purportedly exerts an unpriced negative externality on the others, thus leading to the premature depletion of common pool resources. Hardin extended this reasoning to other problems, such as pollution and allegations of global overpopulation.
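Hardin’s reasoning can be captured with simple arithmetic (a stylized sketch, not a quotation of his argument). A herdsman who adds one animal captures the full private gain, roughly $+1$, while the cost of the resulting overgrazing is spread across all $N$ users of the commons, so he personally bears only about $-1/N$ of it. His net payoff from adding the animal, $1 - 1/N$, is positive whenever $N > 1$, so each herdsman keeps expanding his herd even though the combined effect of everyone doing so eventually destroys the pasture.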

Although Hardin hardly documented any real-world occurrences of this so-called tragedy, his policy prescriptions were unequivocal:

The most important aspect of necessity that we must now recognize, is the necessity of abandoning the commons in breeding. No technical solution can rescue us from the misery of overpopulation. Freedom to breed will bring ruin to all.

As with many other theoretical externalities, empirical scrutiny revealed that these fears were greatly overblown. In her Nobel-winning work, Elinor Ostrom showed that economic agents often found ways to markedly mitigate these potential externalities. For example, mountain villages often implement rules and norms that limit the use of grazing grounds and wooded areas. Likewise, landowners across the world often set up “irrigation communities” that prevent agents from overusing water.

Along similar lines, Julian Morris and I conjecture that informal arrangements and reputational effects might mitigate opportunistic behavior in the standard-essential patent industry.

These bottom-up solutions are certainly not perfect. Many commons institutions fail; Elinor Ostrom, for example, documents several problematic fisheries, groundwater basins, and forests, although it is worth noting that government intervention was sometimes behind these failures. To cite but one example:

Several scholars have documented what occurred when the Government of Nepal passed the “Private Forest Nationalization Act” […]. Whereas the law was officially proclaimed to “protect, manage and conserve the forest for the benefit of the entire country”, it actually disrupted previously established communal control over the local forests. Messerschmidt (1986, p.458) reports what happened immediately after the law came into effect:

Nepalese villagers began freeriding — systematically overexploiting their forest resources on a large scale.

In any case, the question is not so much whether private institutions fail, but whether they do so more often than government intervention, be it regulation or property rights. In short, the “tragedy of the commons” is ultimately an empirical question: what works better in each case, government intervention, propertization, or emergent rules and norms?

More broadly, the key lesson is that it is wrong to blindly apply models while ignoring real-world outcomes. As Elinor Ostrom herself put it:

The intellectual trap in relying entirely on models to provide the foundation for policy analysis is that scholars then presume that they are omniscient observers able to comprehend the essentials of how complex, dynamic systems work by creating stylized descriptions of some aspects of those systems.

Dvorak Keyboards

In 1985, Paul David published an influential paper arguing that market failures undermined competition between the QWERTY and Dvorak keyboard layouts. This version of history then became a dominant narrative in the field of network economics, echoed in works by Joseph Farrell and Garth Saloner, and by Jean Tirole.

The basic claim was that QWERTY users’ reluctance to switch toward the putatively superior Dvorak layout exerted a negative externality on the rest of the ecosystem (and a positive externality on other QWERTY users), thus preventing the adoption of a more efficient standard. As Paul David put it:

Although the initial lead acquired by QWERTY through its association with the Remington was quantitatively very slender, when magnified by expectations it may well have been quite sufficient to guarantee that the industry eventually would lock in to a de facto QWERTY standard. […]

Competition in the absence of perfect futures markets drove the industry prematurely into standardization on the wrong system — where decentralized decision making subsequently has sufficed to hold it.
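The underlying model can be sketched in stylized form (the notation here is mine, not David’s). Suppose a typist’s payoff from layout $i$ is $u_i = q_i + \beta x_i$, where $q_i$ is the layout’s intrinsic quality, $x_i$ is the share of other typists, trainers, and typewriter makers already committed to it, and $\beta > 0$ captures the network benefit. If $\beta$ is large relative to the quality gap $q_D - q_Q$, then universal QWERTY adoption is a stable equilibrium even if Dvorak is intrinsically better: a lone switcher gains $q_D - q_Q$ but forfeits the network benefit $\beta$, so no one moves first and the market stays “locked in.”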

Unfortunately, many of the above papers paid little to no attention to actual market conditions in the typewriter and keyboard-layout industries. Years later, Stan Liebowitz and Stephen Margolis undertook a detailed analysis of the keyboard-layout market. They almost entirely rejected the notion that QWERTY prevailed despite being the inferior standard:

Yet there are many aspects of the QWERTY-versus-Dvorak fable that do not survive scrutiny. First, the claim that Dvorak is a better keyboard is supported only by evidence that is both scant and suspect. Second, studies in the ergonomics literature find no significant advantage for Dvorak that can be deemed scientifically reliable. Third, the competition among producers of typewriters, out of which the standard emerged, was far more vigorous than is commonly reported. Fourth, there were far more typing contests than just the single Cincinnati contest. These contests provided ample opportunity to demonstrate the superiority of alternative keyboard arrangements. That QWERTY survived significant challenges early in the history of typewriting demonstrates that it is at least among the reasonably fit, even if not the fittest that can be imagined.

In short, there was little to no evidence supporting the view that QWERTY inefficiently prevailed because of network effects. The falsification of this narrative also weakens broader claims that network effects systematically lead to either excess momentum or excess inertia in standardization. Indeed, it is tempting to characterize all network industries with heavily skewed market shares as resulting from market failure. Yet the QWERTY/Dvorak story suggests that such a conclusion would be premature.

Killzones, Zoom, and TikTok

If you are still reading at this point, you might think that contemporary scholars would know better than to base calls for policy intervention on theoretical externalities. Alas, nothing could be further from the truth.

For instance, a recent paper by Sai Kamepalli, Raghuram Rajan and Luigi Zingales conjectures that the interplay between mergers and network externalities discourages the adoption of superior independent platforms:

If techies expect two platforms to merge, they will be reluctant to pay the switching costs and adopt the new platform early on, unless the new platform significantly outperforms the incumbent one. After all, they know that if the entering platform’s technology is a net improvement over the existing technology, it will be adopted by the incumbent after merger, with new features melded with old features so that the techies’ adjustment costs are minimized. Thus, the prospect of a merger will dissuade many techies from trying the new technology.

Although this key behavioral assumption drives the results of the theoretical model, the paper presents no evidence that it holds in real-world settings. Admittedly, the paper does present evidence of reduced venture-capital investment after mergers involving large tech firms. But even taken at face value, these data simply do not support the authors’ behavioral assumption.

And this is no isolated example. Over the past couple of years, several scholars have called for more muscular antitrust intervention in networked industries. A common theme is that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in, thus raising barriers to entry for potential rivals (here, here, here).

But there are also countless counterexamples, where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.

Zoom is one of the most salient instances. As I have written previously:

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.

Along similar lines, Geoffrey Manne and Alec Stapp have put forward a multitude of other examples, including the demise of Yahoo, the disruption of early instant-messaging applications and websites, and MySpace’s rapid decline. In all these cases, outcomes do not match the predictions of theoretical models.

More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and its powerful algorithm are the most likely explanations for its success.

While these developments certainly do not disprove network effects theory, they eviscerate the common belief in antitrust circles that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. As in the previous examples, the question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet this question is systematically omitted from most policy discussions.

In Conclusion

My argument is not that models are without value. To the contrary, framing problems in economic terms—and simplifying them in ways that make them cognizable—enables scholars and policymakers to better understand where market failures might arise, and how these problems can be anticipated and solved by private actors. In other words, models alone cannot tell us that markets will fail, but they can direct inquiries and help us to understand why firms behave the way they do, and why markets (including digital ones) are organized in a given way.

In that respect, both the theoretical and empirical research cited throughout this post offer valuable insights for today’s policymakers.

For a start, as Ronald Coase argued in what is perhaps his most famous work, externalities (and market failure more generally) are a function of transaction costs. When these are low (relative to the value of a good), market failures are unlikely. This is perhaps clearest in the “Fable of the Bees” example. Given bees’ short foraging range, there were ultimately few real-world obstacles to writing contracts that internalized the mutual benefits of bees and orchards.

Perhaps more importantly, economic research sheds light on behavior that might otherwise be seen as anticompetitive. The rules and norms that bind farming/beekeeping communities, as well as users of common pool resources, could easily be analyzed as a cartel by naïve antitrust authorities. Yet externality theory suggests they play a key role in preventing market failure.

Along similar lines, mergers and acquisitions (as well as vertical integration, more generally) can reduce opportunism and other externalities that might otherwise undermine collaboration between firms (here, here and here). And much the same is true for certain types of unilateral behavior. Tying video games to consoles (and pricing the console below cost) can help entrants overcome network externalities that might otherwise shield incumbents. Likewise, Google tying its proprietary apps to the open source Android operating system arguably enabled it to earn a return on its investments, thus overcoming the externality problem that plagues open source software.

All of this carries a lesson that deserves far more attention than it currently receives in policy circles. Authorities around the world are seeking to regulate the tech space: draft legislation has been tabled in the United States, the European Union, and the United Kingdom. These draft bills would all make it harder for large tech firms to implement various economic hierarchies, including mergers and certain contractual arrangements.

This is highly paradoxical. If digital markets are indeed plagued by network externalities and high transaction costs, as critics allege, then preventing firms from adopting complex hierarchies—which have traditionally been seen as a way to solve externalities—is just as likely to exacerbate problems. In other words, like the economists of old cited above, today’s policymakers appear to be focusing too heavily on simple models that predict market failure, and far too little on the mechanisms that firms have put in place to thrive within this complex environment.

The bigger picture is that far more circumspection is required when using theoretical models in real-world policy settings. Indeed, as Harold Demsetz famously put it, the purpose of normative economics is not so much to identify market failures, but to help policymakers determine which of several alternative institutions will deliver the best outcomes for consumers:

This nirvana approach differs considerably from a comparative institution approach in which the relevant choice is between alternative real institutional arrangements. In practice, those who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient. Users of the comparative institution approach attempt to assess which alternative real institutional arrangement seems best able to cope with the economic problem […].

Democratic leadership of the House Judiciary Committee has leaked the approach it plans to take to revise U.S. antitrust law and enforcement, with a particular focus on digital platforms.

Broadly speaking, the bills would: raise fees for larger mergers and increase appropriations to the FTC and DOJ; require data portability and interoperability; declare that large platforms can’t own businesses that compete with other businesses that use the platform; effectively ban large platforms from making any acquisitions; and generally declare that large platforms cannot preference their own products or services. 

All of these are ideas that have been discussed before. They are very much in line with the EU’s approach to competition, which places more regulation-like burdens on big businesses; the EU is also introducing a Digital Markets Act that mirrors the Democrats’ proposals. Some Republicans are reportedly supportive of the proposals, which is surprising, since the bills would give broad, discretionary powers to antitrust authorities controlled by Democrats who take an expansive view of antitrust enforcement as a way to achieve their other social and political goals. The proposals may also be unpopular with consumers if, for example, they mean that popular features like integrating Maps into relevant Google Search results become prohibited.

The multi-bill approach here suggests that the committee is trying to throw as much at the wall as possible to see what sticks. It may reflect a lack of confidence among the proposers in their ability to get their proposals through wholesale, especially given that Amy Klobuchar’s CALERA bill in the Senate creates an alternative that, while still highly interventionist, does not create ex ante regulation of the Internet the same way these proposals do.

In general, the bills are misguided for three main reasons. 

One, they seek to make digital platforms into narrow conduits for other firms to operate on, ignoring the value created by platforms curating their own services by, for example, creating quality controls on entry (as Apple does on its App Store) or by integrating their services with related products (like, say, Google adding events from Gmail to users’ Google Calendars). 

Two, they ignore the procompetitive effects of digital platforms extending into each other’s markets and competing with each other there, in ways that often lead to far more intense competition—and better outcomes for consumers—than if the only firms that could compete with the incumbent platform were small startups.

Three, they ignore the importance of incentives for innovation. Platforms invest in new and better products when they can make money from doing so, and limiting their ability to do that means weakened incentives to innovate. Startups and their founders and investors are driven, in part, by the prospect of being acquired, often by the platforms themselves. Making those acquisitions more difficult, or even impossible, means removing one of the key ways startup founders can exit their firms, and hence one of the key rewards and incentives for starting an innovative new business. 

For more, see our “Joint Submission of Antitrust Economists, Legal Scholars, and Practitioners,” which sets out why many of the House Democrats’ assumptions about the state of the economy and antitrust enforcement were mistaken. And see my post, “Buck’s ‘Third Way’: A Different Road to the Same Destination,” which argued that House Republicans like Ken Buck were misguided in believing they could support some of the proposals while avoiding the massive regulatory oversight they claimed to reject.

Platform Anti-Monopoly Act 

The flagship bill, introduced by Antitrust Subcommittee Chairman David Cicilline (D-R.I.), establishes a definition of “covered platform” used by several of the other bills. The measures would apply to platforms that have at least 500,000 U.S.-based users, that have a market capitalization of more than $600 billion, and that are deemed a “critical trading partner” with the ability to restrict or impede the access that a “dependent business” has to its users or customers.

Cicilline’s bill would bar these covered platforms from being able to promote their own products and services over the products and services of competitors who use the platform. It also defines a number of other practices that would be regarded as discriminatory, including: 

  • Restricting or impeding “dependent businesses” from being able to access the platform or its software on the same terms as the platform’s own lines of business;
  • Conditioning access or status on purchasing other products or services from the platform; 
  • Using user data to support the platform’s own products in ways not extended to competitors; 
  • Restricting the platform’s commercial users from using or accessing data generated on the platform from their own customers;
  • Restricting platform users from uninstalling software pre-installed on the platform;
  • Restricting platform users from providing links to facilitate business off of the platform;
  • Preferencing the platform’s own products or services in search results or rankings;
  • Interfering with how a dependent business prices its products; 
  • Impeding a dependent business’ users from connecting to services or products that compete with those offered by the platform; and
  • Retaliating against users who raise concerns with law enforcement about potential violations of the act.

On a basic level, these would prohibit lots of behavior that is benign and that can improve the quality of digital services for users. Apple pre-installing a Weather app on the iPhone would, for example, run afoul of these rules, and the rules as proposed could prohibit iPhones from coming with pre-installed apps at all. Instead, users would have to manually download each app themselves, if indeed Apple were allowed to include the App Store itself pre-installed on the iPhone, given that it competes with other would-be app stores.

Apart from the obvious reduction in the quality of services and convenience for users that this would involve, this kind of conduct (known as “self-preferencing”) is usually procompetitive. For example, self-preferencing allows platforms to compete with one another by using their strength in one market to enter a different one; Google’s Shopping results in the Search page increase the competition that Amazon faces, because it presents consumers with a convenient alternative when they’re shopping online for products. Similarly, Amazon’s purchase of the video-game streaming service Twitch, and the self-preferencing it does to encourage Amazon customers to use Twitch and support content creators on that platform, strengthens the competition that rivals like YouTube face. 

It also helps innovation, because it gives firms a reason to invest in services that would otherwise be unprofitable for them. Google invests in Android, and gives much of it away for free, because it can bundle Google Search into the OS, and make money from that. If Google could not self-preference Google Search on Android, the open source business model simply wouldn’t work—it wouldn’t be able to make money from Android, and would have to charge for it in other ways that may be less profitable and hence give it less reason to invest in the operating system. 

This behavior can also increase innovation by the competitors of these companies, both by prompting them to improve their products (as, for example, Google Android did with Microsoft’s mobile operating system offerings) and by growing the size of the customer base for products of this kind. For example, video games published by console manufacturers (like Nintendo’s Zelda and Mario games) are often blockbusters that grow the overall size of the user base for the consoles, increasing demand for third-party titles as well.

For more, check out Geoffrey Manne’s piece “Against the Vertical Discrimination Presumption” and Dirk Auer’s piece “On the Origin of Platforms: An Evolutionary Perspective.”

Ending Platform Monopolies Act 

Sponsored by Rep. Pramila Jayapal (D-Wash.), this bill would make it illegal for covered platforms to control lines of business that pose “irreconcilable conflicts of interest,” enforced through civil litigation powers granted to the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ).

Specifically, the bill targets lines of business that create “a substantial incentive” for the platform to advantage its own products or services over those of competitors that use the platform, or to exclude or disadvantage competing businesses from using the platform. The FTC and DOJ could potentially order that platforms divest lines of business that violate the act.

This targets conduct similar to that covered by the previous bill, but it relies on the forced separation of different lines of business. It also appears to go even further, seemingly implying that companies like Google could not even develop services like Google Maps or Chrome, because their existence would create such “substantial incentives” to self-preference them over the products of their competitors.

Apart from the straightforward loss of innovation and product development this would involve, requiring every tech company to be narrowly focused on a single line of business would substantially entrench Big Tech incumbents, because it would make it impossible for them to extend into adjacent markets to compete with one another. For example, Apple could not develop a search engine to compete with Google under these rules, and Amazon would be forced to sell off the video-streaming services with which it competes against Netflix and YouTube.

For more, check out Geoffrey Manne’s written testimony to the House Antitrust Subcommittee and “Platform Self-Preferencing Can Be Good for Consumers and Even Competitors” by Geoffrey and me. 

Platform Competition and Opportunity Act

Introduced by Rep. Hakeem Jeffries (D-N.Y.), this bill would bar covered platforms from making essentially any acquisitions at all. To be excluded from the ban on acquisitions, the platform would have to present “clear and convincing evidence” that the acquired business does not compete with the platform for any product or service, does not pose a potential competitive threat to the platform, and would not in any way enhance or help maintain the acquiring platform’s market position. 

The two main ways that founders and investors can make a return on a successful startup are to float the company in an IPO or to be acquired by another business. The latter, acquisition, is extremely important. Between 2008 and 2019, 90 percent of U.S. startup exits happened through acquisition. In a recent survey, half of current startup executives said they aimed to be acquired. One study found that countries that made it easier for firms to be taken over saw a 40-50 percent increase in VC activity, and that U.S. states that made acquisitions harder saw a 27 percent decrease in VC investment deals.

So this proposal would probably reduce investment in U.S. startups, since it would make it more difficult for them to be acquired, and it would reduce innovation as a result. It would also reduce inter-platform competition by banning deals that allow firms to move into new markets, like the acquisition of Beats that helped Apple build a Spotify competitor, or the deals that helped Google, Microsoft, and Amazon build cloud-computing services that all compete with each other. It could also reduce the competition faced by older industries, by preventing tech companies from buying firms that enable them to move into new markets—like Amazon’s acquisitions of health-care companies that it has used to build a health-care offering. Even Walmart’s acquisition of Jet.com, which it has used to build an Amazon competitor, could have been banned under this law if Walmart had had a higher market cap at the time.

For more, check out Dirk Auer’s piece “Facebook and the Pros and Cons of Ex Post Merger Reviews” and my piece “Cracking down on mergers would leave us all worse off”. 

ACCESS Act

The Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, sponsored by Rep. Mary Gay Scanlon (D-Pa.), would establish data portability and interoperability requirements for platforms. 

Under terms of the legislation, covered platforms would be required to allow third parties to transfer data to their users or, with the user’s consent, to a competing business. It also would require platforms to facilitate compatible and interoperable communications with competing businesses. The law directs the FTC to establish technical committees to promulgate the standards for portability and interoperability. 

Data portability and interoperability involve trade-offs in terms of security and usability, and overseeing them can be extremely costly and difficult. In security terms, interoperability requirements prevent companies from using closed systems to protect users from hostile third parties. Mandatory openness means increasing—sometimes substantially—the risk of data breaches and leaks. In practice, that could mean users’ private messages or photos being leaked more frequently, or activity on a social-media page that one user considers to be “their” private data, but that “belongs” to another user under the terms of use, being exported and publicized.

It can also make digital services buggier and less reliable, by requiring that they be built in a more “open” way that may be more prone to unanticipated software mismatches. A good example is Windows versus iOS: Windows is far more interoperable with third-party software than iOS is, but it tends to be less stable as a result, and users often prefer the closed, stable system.

Interoperability requirements also entail ongoing regulatory oversight, to make sure data is being provided to third parties reliably. It’s difficult to build an app around another company’s data without assurance that the data will be available when users want it. For a requirement as broad as this bill’s, that could mean setting up quite a large new de facto regulator. 

In the UK, Open Banking (an interoperability requirement imposed on British retail banks) has suffered from significant service outages, and targets a level of uptime that many developers complain is too low for them to build products around. Nor has Open Banking yet led to any obvious competition benefits.

For more, check out Gus Hurwitz’s piece “Portable Social Media Aren’t Like Portable Phone Numbers” and my piece “Why Data Interoperability Is Harder Than It Looks: The Open Banking Experience”.

Merger Filing Fee Modernization Act

This bill, which mirrors language in the Endless Frontier Act recently passed by the U.S. Senate, would significantly raise filing fees for the largest mergers. Sponsored by Rep. Joe Neguse (D-Colo.), it would replace the current cap of $280,000 for mergers valued at more than $500 million with a new schedule that assesses fees of $2.25 million for mergers valued at more than $5 billion; $800,000 for those valued at between $2 billion and $5 billion; and $400,000 for those between $1 billion and $2 billion.

Smaller mergers would actually see their filing fees cut: from $280,000 to $250,000 for those between $500 million and $1 billion; from $125,000 to $100,000 for those between $161.5 million and $500 million; and from $45,000 to $30,000 for those less than $161.5 million. 

In addition, the bill would appropriate $418 million to the FTC and $252 million to the DOJ’s Antitrust Division for Fiscal Year 2022. Most people in the antitrust world are generally supportive of more funding for the FTC and DOJ, although whether that is actually a good thing depends on how the money is spent.

It is hard to object if the money goes toward deepening the agencies’ capacities and knowledge: hiring and retaining higher-quality staff with salaries that are more competitive with those offered by the private sector, and making greater efforts to study the effects of the antitrust laws and past cases on the economy. If it goes toward broadening the agencies’ activities, enabling them to pursue a more aggressive enforcement agenda and to support whatever of the above proposals make it into law, then it could be very harmful.

For more, check out my post “Buck’s “Third Way”: A Different Road to the Same Destination” and Thom Lambert’s post “Bad Blood at the FTC”.

Politico has released a cache of confidential Federal Trade Commission (FTC) documents in connection with a series of articles on the commission’s antitrust probe into Google Search a decade ago. The headline of the first piece in the series argues the FTC “fumbled the future” by failing to follow through on staff recommendations to pursue antitrust intervention against the company. 

But while the leaked documents shed interesting light on the inner workings of the FTC, they do very little to substantiate the case that the FTC dropped the ball when the commissioners voted unanimously not to bring an action against Google.

Drawn primarily from memos by the FTC’s lawyers, the Politico report purports to uncover key revelations that undermine the FTC’s decision not to sue Google. None of the revelations, however, provide evidence that Google’s behavior actually harmed consumers.

The report’s overriding claim—and the one most consistently forwarded by antitrust activists on Twitter—is that FTC commissioners wrongly sided with the agency’s economists (who cautioned against intervention) rather than its lawyers (who tenuously recommended very limited intervention). 

Indeed, the overarching narrative is that the lawyers knew what was coming and the economists took wildly inaccurate positions that turned out to be completely off the mark:

But the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed:

— They saw only “limited potential for growth” in ads that track users across the web — now the backbone of Google parent company Alphabet’s $182.5 billion in annual revenue.

— They expected consumers to continue relying mainly on computers to search for information. Today, about 62 percent of those queries take place on mobile phones and tablets, nearly all of which use Google’s search engine as the default.

— They thought rivals like Microsoft, Mozilla or Amazon would offer viable competition to Google in the market for the software that runs smartphones. Instead, nearly all U.S. smartphones run on Google’s Android and Apple’s iOS.

— They underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic.

The report thus asserts that:

The agency ultimately voted against taking action, saying changes Google made to its search algorithm gave consumers better results and therefore didn’t unfairly harm competitors.

That conclusion underplays what the FTC’s staff found during the probe. In 312 pages of documents, the vast majority never publicly released, staffers outlined evidence that Google had taken numerous steps to ensure it would continue to dominate the market — including emerging arenas such as mobile search and targeted advertising. [EMPHASIS ADDED]

What really emerges from the leaked memos, however, is analysis by both the FTC’s lawyers and economists infused with a healthy dose of humility. There were strong political incentives to bring a case. As one of us noted upon the FTC’s closing of the investigation: “It’s hard to imagine an agency under more pressure, from more quarters (including the Hill), to bring a case around search.” Yet FTC staff and commissioners resisted that pressure, because prediction is hard. 

Ironically, the very prediction errors that the agency’s staff cautioned against are now being held against them. Yet the claims that these errors (especially the economists’) systematically cut in one direction (i.e., against enforcement) and that all of their predictions were wrong are both wide of the mark. 

Decisions Under Uncertainty

In seeking to make an example out of the FTC economists’ inaccurate predictions, critics ignore that antitrust investigations in dynamic markets always involve a tremendous amount of uncertainty; false predictions are the norm. Accordingly, the key challenge for policymakers is not so much to predict correctly, but to minimize the impact of incorrect predictions.

Seen in this light, the FTC economists’ memo is far from the laissez-faire manifesto that critics make it out to be. Instead, it shows agency officials wrestling with uncertain market outcomes and choosing a course of action on the assumption that their predictions might indeed be wrong.

Consider the following passage from FTC economist Ken Heyer’s memo:

The great American philosopher Yogi Berra once famously remarked “Predicting is difficult, especially about the future.” How right he was. And yet predicting, and making decisions based on those predictions, is what we are charged with doing. Ignoring the potential problem is not an option. So I will be reasonably clear about my own tentative conclusions and recommendation, recognizing that reasonable people, perhaps applying a somewhat different standard, may disagree. My recommendation derives from my read of the available evidence, combined with the standard I personally find appropriate to apply to Commission intervention. [EMPHASIS ADDED]

In other words, contrary to what many critics have claimed, it simply is not the case that the FTC’s economists based their recommendations on bullish predictions about the future that ultimately failed to transpire. Instead, they merely recognized that, in a dynamic and unpredictable environment, antitrust intervention requires both a clear-cut theory of anticompetitive harm and a reasonable probability that remedies can improve consumer welfare. According to the economists, those conditions were absent with respect to Google Search.

Perhaps more importantly, it is worth asking why the economists’ erroneous predictions matter at all. Do critics believe that developments the economists missed warrant a different normative stance today?

In that respect, it is worth noting that the economists’ skepticism appeared to have rested first and foremost on the speculative nature of the harms alleged and the difficulty associated with designing appropriate remedies. And yet, if anything, these two concerns appear even more salient today. 

Indeed, the remedies imposed against Google in the EU have not delivered the outcomes that enforcers expected (here and here). This could either be because the remedies were insufficient or because Google’s market position was not due to anticompetitive conduct. Similarly, there is still no convincing economic theory or empirical research to support the notion that exclusive pre-installation and self-preferencing by incumbents harm consumers, and a great deal of reason to think they benefit them (see, e.g., our discussions of the issue here and here). 

Against this backdrop, criticism of the FTC economists appears to be driven more by a prior assumption that intervention is necessary—and that it was and is disingenuous to think otherwise—than evidence that erroneous predictions materially affected the outcome of the proceedings.

To take one example, the fact that ad tracking grew faster than the FTC economists believed it would is no less consistent with vigorous competition—and Google providing a superior product—than with anticompetitive conduct on Google’s part. The same applies to the growth of mobile operating systems. Ditto the fact that no rival has managed to dislodge Google in its most important markets. 

In short, not only were the economist memos informed by the very prediction difficulties that critics are now pointing to, but critics have not shown that any of the staff’s (inevitably) faulty predictions warranted a different normative outcome.

Putting Erroneous Predictions in Context

So what were these faulty predictions, and how important were they? Politico asserts that “the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed,” tying this to the FTC’s failure to intervene against Google over “tactics that European regulators and the U.S. Justice Department would later label antitrust violations.” The clear message is that the current actions are presumptively valid, and that the FTC’s economists thwarted earlier intervention based on faulty analysis.

But it is far from clear that these faulty predictions would have justified taking a tougher stance against Google. One key question for antitrust authorities is whether they can be reasonably certain that more efficient competitors will be unable to dislodge an incumbent. This assessment is necessarily forward-looking. Framed this way, greater market uncertainty (for instance, because policymakers are dealing with dynamic markets) usually cuts against antitrust intervention.

This does not entirely absolve the FTC economists who made the faulty predictions. But it does suggest the right question is not whether the economists made mistakes, but whether virtually everyone did so. The latter would be evidence of uncertainty, and thus weigh against antitrust intervention.

In that respect, it is worth noting that the staff who recommended that the FTC intervene also misjudged the future of digital markets. For example, while Politico surmises that the FTC “underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic,” there is a case to be made that the FTC overestimated this power. If anything, Google’s continued growth has opened new niches in the online advertising space.

Pinterest provides a fitting example; despite relying heavily on Google for traffic, its ad-funded service has witnessed significant growth. The same is true of other vertical search engines like Airbnb, Booking.com, and Zillow. While we cannot know the counterfactual, the vertical search industry has certainly not been decimated by Google’s “monopoly”; quite the opposite. Unsurprisingly, this has coincided with a significant decrease in the cost of online advertising, and the growth of online advertising relative to other forms.

Politico asserts not only that the economists’ market share and market power calculations were wrong, but that the lawyers knew better:

The economists, relying on data from the market analytics firm Comscore, found that Google had only limited impact. They estimated that between 10 and 20 percent of traffic to those types of sites generally came from the search engine.

FTC attorneys, though, used numbers provided by Yelp and found that 92 percent of users visited local review sites from Google. For shopping sites like eBay and TheFind, the referral rate from Google was between 67 and 73 percent.

This compares apples and oranges, or maybe oranges and grapefruit. The economists’ data, from Comscore, applied to vertical search overall. They explicitly noted that shares for particular sites could be much higher or lower: for comparison shopping, for example, “ranging from 56% to less than 10%.” This, of course, highlights a problem with the data provided by Yelp, et al.: it concerns only the websites of companies complaining about Google, not the overall flow of traffic for vertical search.

But the more important point is that none of the data discussed in the memos represents the overall flow of traffic for vertical search. Take Yelp, for example. According to the lawyers’ memo, 92 percent of Yelp searches were referred from Google. Only, that’s not true. We know it’s not true because, as Yelp CEO Jeremy Stoppelman pointed out around this time in Yelp’s 2012 Q2 earnings call: 

When you consider that 40% of our searches come from mobile apps, there is quite a bit of un-monetized mobile traffic that we expect to unlock in the near future.

The numbers being analyzed by the FTC staff were apparently limited to referrals to Yelp’s website from browsers. But is there any reason to think that is the relevant market, or the relevant measure of customer access? Certainly there is nothing in the staff memos to suggest they considered the full scope of the market very carefully here. Indeed, the footnote in the lawyers’ memo presenting the traffic data is offered in support of this claim:

Vertical websites, such as comparison shopping and local websites, are heavily dependent on Google’s web search results to reach users. Thus, Google is in the unique position of being able to “make or break any web-based business.”

It’s plausible that vertical search traffic is “heavily dependent” on Google Search, but the numbers offered in support of that simply ignore the (then) 40 percent of traffic that Yelp acquired through its own mobile app, with no Google involvement at all. In any case, it is also notable that, while there are still somewhat fewer app users than web users (although the number has consistently increased), Yelp’s app users view significantly more pages than its website users do — 10 times as many in 2015, for example.
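A rough back-of-the-envelope calculation makes the point concrete. The sketch below uses only the figures cited above (40 percent of searches via the app, a 92 percent browser referral rate) and assumes, purely for illustration, that Yelp’s traffic divides cleanly between app searches and browser visits:

    # Hypothetical back-of-the-envelope sketch using the figures cited above.
    # Assumption (illustrative only): Yelp's traffic splits cleanly into
    # mobile-app searches (no Google involvement) and browser visits.

    app_share = 0.40               # share of Yelp searches coming from its mobile app
    browser_share = 1 - app_share  # remaining share arriving via web browsers
    google_referral_rate = 0.92    # share of *browser* visits referred by Google

    overall_google_share = browser_share * google_referral_rate
    print(f"Implied share of all Yelp traffic referred by Google: {overall_google_share:.0%}")
    # -> roughly 55%, well below the 92% headline figure

On these assumptions, Google accounts for something closer to half of Yelp’s overall traffic than the 92 percent figure suggests, a meaningful difference when the claim at issue is that Google can “make or break” such businesses.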

Also noteworthy is that, for whatever speculative harm Google might be able to visit on the company, at the time of the FTC’s analysis Yelp’s local ad revenue was consistently increasing — by 89% in Q3 2012. And that was without any ad revenue coming from its app (display ads arrived on Yelp’s mobile app in Q1 2013, a few months after the staff memos were written and just after the FTC closed its Google Search investigation). 

In short, the search-engine industry is extremely dynamic and unpredictable. Contrary to what many have surmised from the FTC staff memo leaks, this cuts against antitrust intervention, not in favor of it.

The FTC Lawyers’ Weak Case for Prosecuting Google

At the same time, although not discussed by Politico, the lawyers’ memo also contains errors, suggesting that arguments for intervention were also (inevitably) subject to erroneous prediction.

Among other things, the FTC attorneys’ memo argued that large upfront investments were required to develop cutting-edge algorithms, and that these effectively shielded Google from competition. The memo cites the following as a barrier to entry:

A search engine requires algorithmic technology that enables it to search the Internet, retrieve and organize information, index billions of regularly changing web pages, and return relevant results instantaneously that satisfy the consumer’s inquiry. Developing such algorithms requires highly specialized personnel with high levels of training and knowledge in engineering, economics, mathematics, sciences, and statistical analysis.

If there are barriers to entry in the search-engine industry, algorithms do not seem to be the source. While their market shares may be smaller than Google’s, rival search engines like DuckDuckGo and Bing have been able to enter and gain traction; it is difficult to say that algorithmic technology has proven a barrier to entry. It may be hard to do well, but it certainly has not proved an impediment to new firms entering and developing workable and successful products. Indeed, some extremely successful companies have entered into similar advertising markets on the backs of complex algorithms, notably Instagram, Snapchat, and TikTok. All of these compete with Google for advertising dollars.

The FTC’s legal staff also failed to see that Google would face serious competition in the rapidly growing voice assistant market. In other words, even its search-engine “moat” is far less impregnable than it might at first appear.

Moreover, as Ben Thompson argues in his Stratechery newsletter: 

The Staff memo is completely wrong too, at least in terms of the potential for their proposed remedies to lead to any real change in today’s market. This gets back to why the fundamental premise of the Politico article, along with much of the antitrust chatter in Washington, misses the point: Google is dominant because consumers like it.

This difficulty was deftly highlighted by Heyer’s memo:

If the perceived problems here can be solved only through a draconian remedy of this sort, or perhaps through a remedy that eliminates Google’s legitimately obtained market power (and thus its ability to “do evil”), I believe the remedy would be disproportionate to the violation and that its costs would likely exceed its benefits. Conversely, if a remedy well short of this seems likely to prove ineffective, a remedy would be undesirable for that reason. In brief, I do not see a feasible remedy for the vertical conduct that would be both appropriate and effective, and which would not also be very costly to implement and to police. [EMPHASIS ADDED]

Of course, we now know that this turned out to be a huge issue with the EU’s competition cases against Google. The remedies in both the EU’s Google Shopping and Android decisions were severely criticized by rival firms and consumer-defense organizations (here and here), but were ultimately upheld, in part because even the European Commission likely saw more forceful alternatives as disproportionate.

And in the few places where the legal staff concluded that Google’s conduct may have caused harm, there is good reason to think that their analysis was flawed.

Google’s ‘revenue-sharing’ agreements

It should be noted that neither the lawyers nor the economists at the FTC were particularly bullish on bringing suit against Google. In most areas of the investigation, neither recommended that the commission pursue a case. But one of the most interesting revelations from the recent leaks is that FTC lawyers did advise the commission’s leadership to sue Google over revenue-sharing agreements that called for it to pay Apple and other carriers and manufacturers to pre-install its search bar on mobile devices:

FTC staff urged the agency’s five commissioners to sue Google for signing exclusive contracts with Apple and the major wireless carriers that made sure the company’s search engine came pre-installed on smartphones.

The lawyers’ stance is surprising, and, despite actions subsequently brought by the EU and DOJ on similar claims, a difficult one to countenance. 

To a first approximation, this behavior is precisely what antitrust law seeks to promote: we want companies to compete aggressively to attract consumers. This conclusion is in no way altered when competition is “for the market” (in this case, firms bidding for exclusive placement of their search engines) rather than “in the market” (i.e., equally placed search engines competing for eyeballs).

Competition for exclusive placement has several important benefits. For a start, revenue-sharing agreements effectively subsidize consumers’ mobile device purchases. As Brian Albrecht aptly puts it:

This payment from Google means that Apple can lower its price to better compete for consumers. This is standard; some of the payment from Google to Apple will be passed through to consumers in the form of lower prices.

This finding is not new. For instance, Ronald Coase famously argued that the Federal Communications Commission (FCC) was wrong to ban the broadcasting industry’s equivalent of revenue-sharing agreements, so-called payola:

[I]f the playing of a record by a radio station increases the sales of that record, it is both natural and desirable that there should be a charge for this. If this is not done by the station and payola is not allowed, it is inevitable that more resources will be employed in the production and distribution of records, without any gain to consumers, with the result that the real income of the community will tend to decline. In addition, the prohibition of payola may result in worse record programs, will tend to lessen competition, and will involve additional expenditures for regulation. The gain which the ban is thought to bring is to make the purchasing decisions of record buyers more efficient by eliminating “deception.” It seems improbable to me that this problematical gain will offset the undoubted losses which flow from the ban on Payola.

Applying this logic to Google Search, it is clear that a ban on revenue-sharing agreements would merely lead both Google and its competitors to attract consumers via alternative means. For Google, this might involve “complete” vertical integration into the mobile phone market, rather than the open-licensing model that underpins the Android ecosystem. Valuable specialization may be lost in the process.

Moreover, from Apple’s standpoint, Google’s revenue-sharing agreements are profitable only to the extent that consumers actually like Google’s products. If it turns out they don’t, Google’s payments to Apple may be outweighed by lower iPhone sales. It is thus unlikely that these agreements significantly undermined users’ experience. To the contrary, Apple’s testimony before the European Commission suggests that “exclusive” placement of Google’s search engine was mostly driven by consumer preferences (as the FTC economists’ memo points out):

Apple would not offer simultaneous installation of competing search or mapping applications. Apple’s focus is offering its customers the best products out of the box while allowing them to make choices after purchase. In many countries, Google offers the best product or service … Apple believes that offering additional search boxes on its web browsing software would confuse users and detract from Safari’s aesthetic. Too many choices lead to consumer confusion and greatly affect the ‘out of the box’ experience of Apple products.

Similarly, Kevin Murphy and Benjamin Klein have shown that exclusive contracts intensify competition for distribution. In other words, absent theories of platform envelopment that are arguably inapplicable here, competition for exclusive placement would lead competing search engines to up their bids, ultimately lowering the price of mobile devices for consumers.

Indeed, this revenue-sharing model was likely essential to spur the development of Android in the first place. Without this prominent placement of Google Search on Android devices (notably thanks to revenue-sharing agreements with original equipment manufacturers), Google would likely have been unable to monetize the investment it made in the open source—and thus freely distributed—Android operating system. 

In short, Politico and the FTC legal staff do little to show that Google’s revenue-sharing payments excluded rivals that were, in fact, as efficient. In other words, Bing and Yahoo’s failure to gain traction may simply be the result of inferior products and cost structures. Critics thus fail to show that Google’s behavior harmed consumers, which is the touchstone of antitrust enforcement.

Self-preferencing

Another finding critics claim as important is that FTC leadership declined to bring suit against Google for preferencing its own vertical search services (this information had already been partially leaked by the Wall Street Journal in 2015). Politico’s framing implies this was a mistake:

When Google adopted one algorithm change in 2011, rival sites saw significant drops in traffic. Amazon told the FTC that it saw a 35 percent drop in traffic from the comparison-shopping sites that used to send it customers

The focus on this claim is somewhat surprising. Even the leaked FTC legal staff memo found this theory of harm had little chance of standing up in court:

Staff has investigated whether Google has unlawfully preferenced its own content over that of rivals, while simultaneously demoting rival websites…. 

…Although it is a close call, we do not recommend that the Commission proceed on this cause of action because the case law is not favorable to our theory, which is premised on anticompetitive product design, and in any event, Google’s efficiency justifications are strong. Most importantly, Google can legitimately claim that at least part of the conduct at issue improves its product and benefits users. [EMPHASIS ADDED]

More importantly, as one of us has argued elsewhere, the underlying problem lies not with Google, but with a standard asset-specificity trap:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control…. 

…It was entirely predictable, and should have been expected, that Google’s algorithm would evolve. It was also entirely predictable that it would evolve in ways that could diminish or even tank Foundem’s traffic. As one online marketing/SEO expert puts it: On average, Google makes about 500 algorithm changes per year. 500!….

…In the absence of an explicit agreement, should Google be required to make decisions that protect a dependent company’s “asset-specific” investments, thus encouraging others to take the same, excessive risk? 

Even if consumers happily visited rival websites when they were higher-ranked and traffic subsequently plummeted when Google updated its algorithm, that drop in traffic does not amount to evidence of misconduct. To hold otherwise would be to grant these rivals a virtual entitlement to the state of affairs that exists at any given point in time. 

Indeed, there is good reason to believe Google’s decision to favor its own content over that of other sites is procompetitive. Beyond determining and ensuring relevance, Google surely has the prerogative to compete vigorously and decide how to design its products to keep up with a changing market. In this case, that means designing, developing, and offering its own content in ways that partially displace the original “ten blue links” design of its search results page and instead offer its own answers to users’ queries.

Competitor Harm Is Not an Indicator of the Need for Intervention

Some of the other information revealed by the leak is even more tangential, such as that the FTC ignored complaints from Google’s rivals:

Amazon and Facebook privately complained to the FTC about Google’s conduct, saying their business suffered because of the company’s search bias, scraping of content from rival sites and restrictions on advertisers’ use of competing search engines. 

Amazon said it was so concerned about the prospect of Google monopolizing the search advertising business that it willingly sacrificed revenue by making ad deals aimed at keeping Microsoft’s Bing and Yahoo’s search engine afloat.

But complaints from rivals are at least as likely to stem from vigorous competition as from anticompetitive exclusion. This goes to a core principle of antitrust enforcement: antitrust law seeks to protect competition and consumer welfare, not rivals. Competition will always lead to winners and losers. Antitrust law protects this process and (at least theoretically) ensures that rivals cannot manipulate enforcers to safeguard their economic rents. 

This explains why Frank Easterbrook—in his seminal work on “The Limits of Antitrust”—argued that enforcers should be highly suspicious of complaints lodged by rivals:

Antitrust litigation is attractive as a method of raising rivals’ costs because of the asymmetrical structure of incentives…. 

…One line worth drawing is between suits by rivals and suits by consumers. Business rivals have an interest in higher prices, while consumers seek lower prices. Business rivals seek to raise the costs of production, while consumers have the opposite interest…. 

…They [antitrust enforcers] therefore should treat suits by horizontal competitors with the utmost suspicion. They should dismiss outright some categories of litigation between rivals and subject all such suits to additional scrutiny.

Google’s competitors spent millions pressuring the FTC to bring a case against the company. But why should it be a failing for the FTC to resist such pressure? Indeed, as then-commissioner Tom Rosch admonished in an interview following the closing of the case:

They [Google’s competitors] can darn well bring [a case] as a private antitrust action if they think their ox is being gored instead of free-riding on the government to achieve the same result.

Not that they would likely win such a case. Google’s introduction of specialized shopping results (via the Google Shopping box) likely enabled several retailers to bypass the Amazon platform, thus increasing competition in the retail industry. Although this may have temporarily reduced Amazon’s traffic and revenue (Amazon’s sales have grown dramatically since then), it is exactly the outcome that antitrust laws are designed to protect.

Conclusion

When all is said and done, Politico’s revelations provide a rarely glimpsed look into the complex dynamics within the FTC, which many wrongly imagine to be a monolithic agency. Put simply, the FTC’s commissioners, lawyers, and economists often disagree vehemently about the appropriate course of conduct. This is a good thing. As in many other walks of life, having a market for ideas is a sure way to foster sound decision making.

But in the final analysis, what the revelations do not show is that the FTC’s market for ideas failed consumers a decade ago when it declined to bring an antitrust suit against Google. They thus do little to cement the case for antitrust intervention—whether a decade ago, or today.

In current discussions of technology markets, few words are heard more often than “platform.” Initial public offering (IPO) prospectuses use “platform” to describe a service that is bound to dominate a digital market. Antitrust regulators use “platform” to describe a service that dominates a digital market or threatens to do so. In either case, “platform” denotes power over price. For investors, that implies exceptional profits; for regulators, that implies competitive harm.

Conventional wisdom holds that platforms enjoy high market shares, protected by high barriers to entry, which yield high returns. This simple logic drives the market’s attribution of dramatically high valuations to dramatically unprofitable businesses, and it drives regulators’ eagerness to intervene in digital platform markets characterized by declining prices, increased convenience, and expanded variety, often at zero out-of-pocket cost. In both cases, “burning cash” today is understood as the path to market dominance and the ability to extract a premium from consumers in the future.

This logic is usually wrong. 

The Overlooked Basics of Platform Economics

To appreciate this perhaps surprising point, it is necessary to go back to the increasingly overlooked basics of platform economics. A platform can refer to any service that matches two complementary populations. A search engine matches advertisers with consumers, an online music service matches performers and labels with listeners, and a food-delivery service matches restaurants with home diners. A platform benefits everyone by facilitating transactions that otherwise might never have occurred.

A platform’s economic value derives from its ability to lower transaction costs by funneling a multitude of individual transactions into a single convenient hub. In pursuit of minimum costs and maximum gains, users on one side of the platform will tend to favor the most popular platforms that offer the largest number of users on the other side of the platform. (There are partial exceptions to this rule when users value being matched with certain types of other users, rather than just with more users.) These “network effects” mean that any successful platform market will always converge toward a handful of winners. This positive feedback effect drives investors’ exuberance and regulators’ concerns.

There is a critical point, however, that often seems to be overlooked.

Market share only translates into market power to the extent the incumbent is protected against entry within some reasonable time horizon. If Warren Buffett’s moat requirement is not met, market share is immaterial. If XYZ.com owns 100% of the online pet food delivery market but entry costs are trivial, then its market power is negligible. There is another important limiting principle. In platform markets, the depth of the moat depends not only on competitors’ costs of entering the market, but also on users’ costs of switching from one platform to another or alternating between multiple platforms. If users can easily hop across platforms, then market share cannot confer market power given the continuous threat of user defection. Put differently: churn limits power over price.

This is why, contrary to natural intuition, a platform market consisting of only a few leaders can still be intensely competitive, keeping prices low (down to and including $0). It is often asserted, however, that users are typically locked into the dominant platform and therefore face high switching costs, which implicitly satisfies the moat requirement. If that is true, then the “high churn” scenario is a theoretical curiosity and a leading platform’s high market share would be a reliable signal of market power. In fact, this common assumption likely describes the atypical case. 

AWS and the Cloud Data-Storage Market

This point can be illustrated by considering the cloud data-storage market. This would appear to be an easy case where high switching costs (due to the difficulty in shifting data among storage providers) insulate the market leader against entry threats. Yet the real world does not conform to these expectations. 

While Amazon Web Services pioneered the $100 billion-plus market and is still the clear market leader, it now faces vigorous competition from Microsoft Azure, Google Cloud, and other providers of data-storage and related cloud services. This may reflect the fact that the data-storage market is far from saturated, so new users are up for grabs and existing customers can mitigate lock-in by diversifying across multiple storage providers. Or it may reflect the fact that the market’s structure is fluid as a function of technological changes, enabling entry at formerly bundled portions of the cloud data-services package. While such diversification is not always technologically feasible, the cloud-storage market suggests that users’ resistance to platform capture can represent a competitive opportunity for entrants to challenge dominant vendors on price, quality, and innovation parameters.

The Surprising Instability of Platform Dominance

The instability of leadership positions in the cloud storage market is not exceptional. 

Consider a handful of once-powerful platforms that were rapidly dethroned once challenged by a more efficient or innovative rival: Yahoo and AltaVista in the search-engine market (displaced by Google); Netscape in the browser market (displaced by Microsoft’s Internet Explorer, then displaced by Google Chrome); Nokia and then BlackBerry in the mobile wireless-device market (displaced by Apple and Samsung); and Friendster in the social-networking market (displaced by Myspace, then displaced by Facebook). AOL was once thought to be indomitable; now it is mostly referenced as a vintage email address. The list could go on.

Overestimating platform dominance—or more precisely, assuming platform dominance without close factual inquiry—matters because it promotes overestimates of market power. That, in turn, cultivates both market and regulatory bubbles: investors inflate stock valuations while regulators inflate the risk of competitive harm. 

DoorDash and the Food-Delivery Services Market

Consider the DoorDash IPO that launched in early December 2020. The market’s current approximately $50 billion valuation of a business that has been almost consistently unprofitable implicitly assumes that DoorDash will maintain and expand its position as the largest U.S. food-delivery platform, which will then yield power over price and exceptional returns for investors. 

There are reasons to be skeptical. Even where DoorDash captures and holds a dominant market share in certain metropolitan areas, it still faces actual and potential competition from other food-delivery services, in-house delivery services (especially by well-resourced national chains), and grocery and other delivery services already offered by regional and national providers. There is already evidence of these expected responses to DoorDash’s perceived high delivery fees, a classic illustration of the disciplinary effect of competitive forces on the pricing choices of an apparently dominant market leader. These “supply-side” constraints imposed by competitors are compounded by “demand-side” constraints imposed by customers. Home diners incur no more than minimal costs when swiping across food-delivery icons on a smartphone interface, casting doubt on whether high market share is likely to translate into market power in this context.

Deliveroo and the Costs of Regulatory Autopilot

Just as the stock market can suffer from delusions of platform grandeur, so too some competition regulators appear to have fallen prey to the same malady. 

A vivid illustration is provided by the 2019 decision of the Competition and Markets Authority (CMA), the British competition regulator, to challenge Amazon’s purchase of a 16% stake in Deliveroo, one of three major competitors in the British food-delivery services market. This intervention provides perhaps the clearest illustration of policy action based on a reflexive assumption of market power, even in the face of little to no indication that the predicate conditions for that assumption could plausibly be satisfied.

Far from being a dominant platform, Deliveroo was (and is) a money-losing venture lagging behind money-losing Just Eat (now Just Eat Takeaway) and Uber Eats in the U.K. food-delivery services market. Even Amazon had previously closed its own food-delivery service in the U.K. due to lack of profitability. Despite Deliveroo’s distressed economic circumstances and the implausibility of any market power arising from Amazon’s investment, the CMA nonetheless elected to pursue the fullest level of investigation. While the transaction was ultimately approved in August 2020, this intervention imposed a 15-month delay and associated costs in connection with an investment that almost certainly bolstered competition in a concentrated market by funding a firm reportedly at risk of insolvency.  This is the equivalent of a competition regulator driving in reverse.

Concluding Thoughts

There seems to be an increasingly common assumption in commentary by the press, policymakers, and even some scholars that apparently dominant platforms usually face little competition and can set, at will, the terms of exchange. For investors, this is a reason to buy; for regulators, this is a reason to intervene. This assumption is sometimes borne out, and, in that case, antitrust intervention is appropriate whenever there is reasonable evidence that market power is being secured through something other than “competition on the merits.” However, several conditions must be met before the market-power assumption can be credited; absent them, any such inquiry would be imprudent. Contrary to conventional wisdom, the economics and history of platform markets suggest that those conditions are infrequently satisfied.

Without closer scrutiny, reflexively equating market share with market power is prone to lead both investors and regulators astray.  

The U.S. Supreme Court will hear a challenge next month to the 9th U.S. Circuit Court of Appeals’ 2020 decision in NCAA v. Alston. Alston affirmed a district court decision that enjoined the National Collegiate Athletic Association (NCAA) from enforcing rules that restrict the education-related benefits its member institutions may offer students who play Football Bowl Subdivision football and Division I basketball.

This will be the first Supreme Court review of NCAA practices since NCAA v. Board of Regents in 1984, which applied the antitrust rule of reason in striking down the NCAA’s “artificial limit” on the quantity of televised college football games, but also recognized that “this case involves an industry in which horizontal restraints on competition are essential if the product [intercollegiate athletic contests] is to be available at all.” Significantly, in commenting on the nature of appropriate, competition-enhancing NCAA restrictions, the court in Board of Regents stated that:

[I]n order to preserve the character and quality of the [NCAA] ‘product,’ athletes must not be paid, must be required to attend class, and the like. And the integrity of the ‘product’ cannot be preserved except by mutual agreement; if an institution adopted such restrictions unilaterally, its effectiveness as a competitor on the playing field might soon be destroyed. Thus, the NCAA plays a vital role in enabling college football to preserve its character, and as a result enables a product to be marketed which might otherwise be unavailable. In performing this role, its actions widen consumer choice – not only the choices available to sports fans but also those available to athletes – and hence can be viewed as procompetitive. [footnote citation omitted]

One’s view of the Alston case may be shaped by one’s priors regarding the true nature of the NCAA. Is the NCAA a benevolent Dr. Jekyll, which seeks to promote amateurism and fairness in college sports to the benefit of student athletes and the general public?  Or is its benevolent façade a charade?  Although perhaps a force for good in its early years, has the NCAA transformed itself into an evil Mr. Hyde, using restrictive rules to maintain welfare-inimical monopoly power as a seller cartel of athletic events and a monopsony employer cartel that suppresses athletes’ wages? I will return to this question—and its bearing on the appropriate resolution of this legal dispute—after addressing key contentions by both sides in Alston.

Summarizing the Arguments in NCAA v. Alston

The Alston class-action case followed in the wake of the 9th Circuit’s decision in O’Bannon v. NCAA (2015). O’Bannon affirmed in large part a district court’s ruling that the NCAA illegally restrained trade, in violation of Section 1 of the Sherman Act, by preventing football and men’s basketball players from receiving compensation for the use of their names, images, and likenesses. It also affirmed the district court’s injunction insofar as it required the NCAA to implement the less restrictive alternative of permitting athletic scholarships for the full cost of attendance. (I commented approvingly on the 9th Circuit’s decision in a previous TOTM post.) 

Subsequent antitrust actions by student-athletes were consolidated in the district court. After a bench trial, the district court entered judgment for the student-athletes, concluding in part that NCAA limits on education-related benefits were unreasonable restraints of trade. It enjoined those limits but declined to hold that other NCAA limits on compensation unrelated to education likewise violated Section 1.

In May 2020, a 9th Circuit panel held that the district court properly applied the three-step Sherman Act Section 1 rule of reason analysis in determining that the enjoined rules were unlawful restraints of trade.

First, the panel concluded that the student-athletes carried their burden at step one by showing that the restraints produced significant anticompetitive effects within the relevant market for student-athletes’ labor.

At step two, the NCAA was required to come forward with evidence of the restraints’ procompetitive effects. The panel endorsed the district court’s conclusion that only some of the challenged NCAA rules served the procompetitive purpose of preserving amateurism and thus improving consumer choice by maintaining a distinction between college and professional sports. Those rules were limits on above-cost-of-attendance payments unrelated to education, the cost-of-attendance cap on athletic scholarships, and certain restrictions on cash academic or graduation awards and incentives. The panel affirmed the district court’s conclusion that the remaining rules—restricting non-cash education-related benefits—did nothing to foster or preserve consumer demand. The panel held that the record amply supported the findings of the district court, which relied on demand analysis, survey evidence, and NCAA testimony.

The panel also affirmed the district court’s conclusion that, at step three, the student-athletes showed that any legitimate objectives could be achieved in a substantially less restrictive manner. The district court identified a less restrictive alternative of prohibiting the NCAA from capping certain education-related benefits and limiting academic or graduation awards or incentives below the maximum amount that an individual athlete may receive in athletic participation awards, while permitting individual conferences to set limits on education-related benefits. The panel held that the district court did not clearly err in determining that this alternative would be virtually as effective in serving the procompetitive purposes of the NCAA’s current rules and could be implemented without significantly increased cost.

Finally, the panel held that the district court’s injunction was not impermissibly vague and did not usurp the NCAA’s role as the superintendent of college sports. The panel also declined to broaden the injunction to include all NCAA compensation limits, including those on payments untethered to education. The panel concluded that the district court struck the right balance in crafting a remedy that both prevented anticompetitive harm to student-athletes while serving the procompetitive purpose of preserving the popularity of college sports.

The NCAA appealed to the Supreme Court, which granted the NCAA’s petition for certiorari Dec. 16, 2020. The NCAA contends that under Board of Regents, the NCAA rules regarding student-athlete compensation are reasonably related to preserving amateurism in college sports, are procompetitive, and should have been upheld after a short deferential review, rather than the full three-step rule of reason. According to the NCAA’s petition for certiorari, even under the detailed rule of reason, the 9th Circuit’s decision was defective. Specifically:

The Ninth Circuit … relieved plaintiffs of their burden to prove that the challenged rules unreasonably restrain trade, instead placing a “heavy burden” on the NCAA … to prove that each category of its rules is procompetitive and that an alternative compensation regime created by the district court could not preserve the procompetitive distinction between college and professional sports. That alternative regime—under which the NCAA must permit student-athletes to receive unlimited “education-related benefits,” including post-eligibility internships that pay unlimited amounts in cash and can be used for recruiting or retention—will vitiate the distinction between college and professional sports. And via the permanent injunction the Ninth Circuit upheld, the alternative regime will also effectively make a single judge in California the superintendent of a significant component of college sports. The Ninth Circuit’s approval of this judicial micromanagement of the NCAA denies the NCAA the latitude this Court has said it needs, and endorses unduly stringent scrutiny of agreements that define the central features of sports leagues’ and other joint ventures’ products. The decision thus twists the rule of reason into a tool to punish (and thereby deter) procompetitive activity.

Two amicus briefs support the NCAA’s position. One, filed on behalf of “antitrust law and business school professors,” stresses that the 9th Circuit’s decision misapplied the third step of the rule of reason by requiring defendants to show that their conduct was the least restrictive means available (instead of requiring plaintiff to prove the existence of an equally effective but less restrictive rule). More broadly:

[This approach] permits antitrust plaintiffs to commandeer the judiciary and use it to regulate and modify routine business conduct, so long as that conduct is not the least restrictive conduct imaginable by a plaintiff’s attorney or district judge. In turn, the risk that procompetitive ventures may be deemed unlawful and subject to treble damages liability simply because they could have operated in a marginally less restrictive manner is likely to chill beneficial business conduct.

A second brief, filed on behalf of “antitrust economists,” emphasizes that the NCAA has adapted the rules governing the design of its product (college amateur sports) over time to meet consumer demand and to prevent colleges from pursuing their own interests (such as “pay to play”) in ways that would conflict with the overall procompetitive aims of the collaboration. While acknowledging that antitrust courts are free to scrutinize collaborations’ rules that go beyond the design of the product itself (such as the NCAA’s broadcast restrictions), the brief cites key Supreme Court decisions (NCAA v. Board of Regents and Texaco Inc. v. Dagher) for the proposition that courts should stay out of restrictions on the core activity of the joint venture itself. It then summarizes the policy justification for such judicial non-interference:

Permitting judges and juries to apply the Sherman Act to such decisions [regarding core joint venture activity] will inevitably create uncertainty that undermines innovation and investment incentives across any number of industries and collaborative ventures. In these circumstances, antitrust courts would be making public policy regarding the desirability of a product with particular features, as opposed to ferreting out agreements or unilateral conduct that restricts output, raises prices, or reduces innovation to the detriment of consumers.

In their brief opposing certiorari, counsel for Alston take the position that, in reality, the NCAA is seeking a special antitrust exemption for its competitively restrictive conduct—an issue that should be determined by Congress, not courts. Their brief notes that the concept of “amateurism” has changed over the years and that some increases in athletes’ compensation have been allowed over time. Thus, in the context of big-time college football and basketball:

[A]mateurism is little more than a pretext. It is certainly not a Sherman Act concept, much less a get-out-of-jail-free card that insulates any particular set of NCAA restraints from scrutiny.

Who Has the Better Case?

The NCAA’s position is a strong one. Association rules touching on compensation for college athletes are part of the core nature of the NCAA’s “amateur sports” product, as the Supreme Court stated (albeit in dictum) in Board of Regents. Furthermore, subsequent Supreme Court jurisprudence (see 2010’s American Needle Inc. v. NFL) has eschewed second-guessing of joint-venture product design decisions—which, in the case of the NCAA, involve formulating the restrictions (such as whether and how to compensate athletes) that are deemed key to defining amateurism.

The Alston amicus curiae briefs ably set forth the strong policy considerations that support this approach, centered on preserving incentives for the development of efficient welfare-generating joint ventures. Requiring joint venturers to provide “least restrictive means” justifications for design decisions discourages innovative activity and generates costly uncertainty for joint-venture planners, to the detriment of producers and consumers (who benefit from joint-venture innovations) alike. Claims by respondent Alston that the NCAA is in effect seeking to obtain a judicial antitrust exemption miss the mark; rather, the NCAA merely appears to be arguing that antitrust scrutiny should be limited to restrictions that fall outside the scope of the association’s core mission. Significantly, as discussed in the NCAA’s brief petitioning for certiorari, decisions by other federal courts of appeals, in the 3rd, 5th, and 7th Circuits, have treated NCAA bylaws going to the definition of amateurism in college sports as presumptively procompetitive and not subject to close scrutiny. Thus, based on the arguments set forth by the litigants, a Supreme Court victory for the NCAA in Alston would appear sound as a matter of law and economics.

There may, however, be a catch. Some popular commentary has portrayed the NCAA as a malign organization that benefits affluent universities (and their well-compensated coaches) while allowing member colleges to exploit athletes by denying them fair pay—in effect, an institutional Mr. Hyde.

What’s more, consistent with the Mr. Hyde story, a number of major free-market economists (including, among others, Nobel laureate Gary Becker) have portrayed the NCAA as an anticompetitive monopsony employer cartel that has suppressed the labor market demand for student athletes, thereby limiting their wages, fringe benefits, and employment opportunities. (In a similar vein, the NCAA is seen as a monopolist seller cartel in the market for athletic events.) Consistent with this perspective, promoting the public good of amateurism (the Dr. Jekyll story) is merely a pretextual façade (a cover story, if you will) for welfare-inimical naked cartel conduct. If one buys this alternative story, all core product restrictions adopted by the NCAA should be fair game for close antitrust scrutiny—and thus, the 9th Circuit’s decision in Alston merits affirmation as a matter of antitrust policy.

There is, however, a persuasive response to the cartel story, set forth in Richard McKenzie and Dwight Lee’s essay “The NCAA:  A Case Study of the Misuse of the Monopsony and Monopoly Models” (Chapter 8 of their 2008 book “In Defense of Monopoly:  How Market Power Fosters Creative Production”). McKenzie and Lee examine the evidence bearing on economists’ monopsony cartel assertions (and, in particular, the evidence presented in a 1992 study by Arthur Fleischer, Brian Goff, and Richard Tollison) and find it wanting:

Our analysis leads inexorably to the conclusion that the conventional economic wisdom regarding the intent and consequences of NCAA restrictions is hardly as solid, on conceptual grounds, as the NCAA critics assert, often without citing relevant court cases. We have argued that the conventional wisdom is wrong in suggesting that, as a general proposition,

• college athletes are materially “underpaid” and are “exploited”;

• cheating on NCAA rules is prima facie evidence of a cartel intending to restrict employment and suppress athletes’ wages;

• NCAA rules violate conventional antitrust doctrine;          

• barriers to entry ensure the continuance of the NCAA’s monopsony powers over athletes.

No such entry barriers (other than normal organizational costs, which need to be covered to meet any known efficiency test for new entrants) exist. In addition, the Supreme Court’s decision in NCAA indicates that the NCAA would be unable to prevent through the courts the emergence of competing athletic associations. The actual existence of other athletic associations indicates that entry would be not only possible but also practical if athletes’ wages were materially suppressed.

Conventional economic analysis of NCAA rules that we have challenged also is misleading in suggesting that collegiate sports would necessarily be improved if the NCAA were denied the authority to regulate the payment of athletes. Given the absence of legal barriers to entry into the athletic association market, it appears that if athletes’ wages were materially suppressed (or as grossly suppressed as the critics claim), alternative sports associations would form or expand, and the NCAA would be unable to maintain its presumed monopsony market position. The incentive for colleges and universities to break with the NCAA would be overwhelming.

From our interpretation of NCAA rules, it does not follow necessarily that athletes should not receive any more compensation than they do currently. Clearly, market conditions change, and NCAA rules often must be adjusted to accommodate those changes. In the absence of entry barriers, we can expect the NCAA to adjust, as it has adjusted, in a competitive manner its rules of play, recruitment, and retention of athletes. Our central point is that contrary to the proponents of the monopsony thesis, the collegiate athletic market is subject to the self-correcting mechanism of market pressures. We have reason to believe that the proposed extension of the antitrust enforcement to the NCAA rules or proposed changes in sports law explicitly or implicitly recommended by the proponents of the cartel thesis would be not only unnecessary but also counterproductive.

Although a closer examination of McKenzie and Lee’s critique of the economists’ cartel story is beyond the scope of this comment, I find it compelling.

Conclusion

In sum, the claim that antitrust may properly be applied to combat the alleged “exploitation” of college athletes by NCAA compensation regulations does not stand up to scrutiny. The NCAA’s rules that define the scope of amateurism may be imperfect, but there is no reason to think that empowering federal judges to second guess and reformulate NCAA athletic compensation rules would yield a more socially beneficial (let alone optimal) outcome. (Believing that the federal judiciary can optimally reengineer core NCAA amateurism rules is a prime example of the Nirvana fallacy at work.)  Furthermore, a Supreme Court decision affirming the 9th Circuit could do broad mischief by undermining case law that has accorded joint venturers substantial latitude to design the core features of their collective enterprise without judicial second-guessing. It is to be hoped that the Supreme Court will do the right thing and strongly reaffirm the NCAA’s authority to design and reformulate its core athletic amateurism product as it sees fit.

The goal of US antitrust law is to ensure that competition continues to produce positive results for consumers and the economy in general. To that end, we published a letter co-signed by twenty-three of the U.S.’s leading economists, legal scholars, and practitioners, including one winner of the Nobel Prize in economics (full list of signatories here), urging the House Judiciary Committee, in its inquiry into the state of antitrust law, to reject calls for radical upheaval that would, among other things, undermine the independence and neutrality of US antitrust law. 

A critical part of maintaining independence and neutrality in the administration of antitrust is ensuring that it is insulated from politics. Unfortunately, this view is under attack from all sides. The President sees widespread misconduct among US tech firms that he believes are controlled by the “radical left” and is, apparently, happy to use whatever tools are at hand to chasten them. 

Meanwhile, Senator Klobuchar has claimed, without any real evidence, that the mooted Uber/Grubhub merger is simply about monopolisation of the market, and not, for example, related to the huge changes that businesses like this are facing because of the Covid shutdown.

Both of these statements run counter to the principle that the rule of law, in antitrust as elsewhere, depends on political neutrality. 

Our letter, contrary to the claims made by President Trump and Sen. Klobuchar, as well as some of those made to the Committee, asserts that the evidence and economic theory are clear: existing antitrust law is doing a good job of promoting competition and consumer welfare in digital markets and the economy more broadly. It concludes that the Committee should focus on reforms that improve antitrust at the margin, not changes that throw out decades of practice and precedent.

The letter argues that:

  1. The American economy—including the digital sector—is competitive, innovative, and serves consumers well, contrary to how it is sometimes portrayed in the public debate. 
  2. Structural changes in the economy have resulted from increased competition, and increases in national concentration have generally happened because competition at the local level has intensified and local concentration has fallen.
  3. Lax antitrust enforcement has not allowed systematic increases in market power, and the evidence simply does not support the idea that antitrust enforcement has weakened in recent decades.
  4. Existing antitrust law, built up through years of careful case-by-case scrutiny, is adequate for protecting competition in the modern economy. Calls to throw out decades of precedent to achieve an antitrust “Year Zero” would discard a huge body of learning and deliberation.
  5. History teaches that discarding the modern approach to antitrust would harm consumers by returning to a regime in which per se rules prohibited the use of economic analysis and fact-based defenses of business practices.
  6. Common sense reforms should be pursued to improve antitrust enforcement, and the reforms proposed in the letter could help to improve competition and consumer outcomes in the United States without overturning the whole system.

The reforms suggested include measures to increase the transparency of the DOJ and FTC, greater scope for antitrust challenges against state-sponsored monopolies, stronger penalties for criminal cartel conduct, and more agency resources for protecting workers from anti-competitive wage-fixing agreements between businesses. These are suggestions for the House Committee to consider; they are not necessarily supported by all of the letter’s signatories.

Some of the arguments in the letter are set out in greater detail in the ICLE’s own submission to the Committee, which goes into detail about the nature of competition in modern digital markets and in traditional markets that have been changed because of the adoption of digital technologies. 

The full letter is here.

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Jonathan M. Jacobson (Partner, Wilson Sonsini Goodrich & Rosati), and Kenneth Edelson (Associate, Wilson Sonsini Goodrich & Rosati).]

So we now have 21st Century Vertical Merger Guidelines, at least in draft. Yay. Do they tell us anything? Yes! Do they tell us much? No. But at least it’s a start.

* * * * *

In November 2018, the FTC held hearings on vertical merger analysis devoted to the questions of whether the agencies should issue new guidelines, and what guidance those guidelines should provide. And, indeed, on January 10, 2020, the DOJ and FTC issued their new Draft Vertical Merger Guidelines (“Draft Guidelines”). That new guidance has finally been issued is a welcome development. The antitrust community has been calling for new vertical merger guidelines for some time. The last vertical merger guidelines were issued in 1984, and there is broad consensus in the antitrust community – despite vigorous debate on the correct legal treatment of vertical mergers – that the ’84 Guidelines are outdated and should be withdrawn. Despite disagreement on the best enforcement policy, there is general recognition that the legal rules applicable to vertical mergers need clarification. New guidelines are especially important in light of recent high-visibility merger challenges, including the government’s challenge to the AT&T/Time Warner merger, the first vertical merger case litigated since the 1970s. These merger challenges have occurred in an environment in which there is little up-to-date case law to guide courts or agencies and the ’84 Guidelines have been rendered obsolete by subsequent developments in economics. 

The discussion here focuses on what the new Draft Guidelines do say, key issues on which they do not weigh in, and where additional guidance would be desirable.

What the Draft Guidelines do say

The Draft Guidelines start with a relevant market requirement – making clear that the agencies will identify at least one relevant market in which a vertical merger may foreclose competition. However, the Draft Guidelines do not require a market definition for the vertically related upstream or downstream market(s) in the merger. Rather, the agencies’ proposed policy is to identify one or more “related products.” The Draft Guidelines define a related product as

a product or service that is supplied by the merged firm, is vertically related to the products and services in the relevant market, and to which access by the merged firm’s rivals affects competition in the relevant market.

The Draft Guidelines’ most significant (and most concrete) proposal is a loose safe harbor based on market share and the percentage of use of the related product in the relevant market of interest. The Draft Guidelines suggest that agencies are not likely to challenge mergers if two conditions are met: (1) the merging company has less than 20% market share in the relevant market, and (2) less than 20% of the relevant market uses the related product identified by the agencies. 

This proposed safe harbor is welcome. Generally, in order for a vertical merger to have anticompetitive effects, both the upstream and downstream markets involved need to be concentrated, and the merging firms’ shares of both markets have to be substantial – although the Draft Guidelines do not contain any such requirements. Mergers in which the merging company has less than a 20% market share of the relevant market, and in which less than 20% of the market uses the vertically related product are unlikely to have serious anticompetitive effects.

However, the proposed safe harbor does not provide much certainty. After describing the safe harbor, the Draft Guidelines offer a caveat: meeting the proposed 20% thresholds will not serve as a “rigid screen” for the agencies to separate out mergers that are unlikely to have anticompetitive effects. Accordingly, the guidelines as currently drafted do not guarantee that vertical mergers in which market share and related product use fall below 20% would be immune from agency scrutiny. So, while the proposed safe harbor is a welcome statement of good policy that may guide agency staff and courts in analyzing market share and share of relevant product use, it is not a true safe harbor. This ambiguity limits the safe harbor’s utility for the purpose of counseling clients on market share issues.

The Draft Guidelines also identify a number of specific unilateral anticompetitive effects that, in the agencies’ view, may result from vertical mergers (the Draft Guidelines note that coordinated effects will be evaluated consistent with the Horizontal Merger Guidelines). Most importantly, the guidelines name raising rivals’ costs, foreclosure, and access to competitively sensitive information as potential unilateral effects of vertical mergers. The Draft Guidelines indicate that the agencies may consider the following issues: would foreclosure or raising rivals’ costs (1) cause rivals to lose sales; (2) benefit the post-merger firm’s business in the relevant market; (3) be profitable to the firm; and (4) be beyond a de minimis level, such that it could substantially lessen competition? Mergers where all four conditions are met, the Draft Guidelines say, often warrant competitive scrutiny. While the big-picture guidance about what the agencies find concerning is helpful, the Draft Guidelines are short on details that would make this a useful statement of enforcement policy, or sufficiently reliable to guide practitioners in counseling clients. In particular, the Draft Guidelines give no indication of what the agencies will consider a de minimis level of foreclosure.

The Draft Guidelines also articulate a concern with access to competitively sensitive information, as in the recent Staples/Essendant enforcement action. There, the FTC permitted the merger after imposing a firewall that blocked Staples from accessing certain information about its rivals held by Essendant. This contrasts with the current DOJ approach of hostility to behavioral remedies.

What the Draft Guidelines don’t say

The Draft Guidelines also decline to weigh in on a number of important issues in the debates over vertical mergers. Two points are particularly noteworthy.

First, the Draft Guidelines decline to allocate the parties’ proof burdens on key issues. The burden-shifting framework established in U.S. v. Baker Hughes is regularly used in horizontal merger cases, and was recently adopted in AT&T/Time-Warner in a vertical context. The framework has three phases: (1) the plaintiff bears the burden of establishing a prima facie case that the merger will substantially lessen competition in the relevant market; (2) the defendant bears the burden of producing evidence to demonstrate that the merger’s procompetitive effects outweigh the alleged anticompetitive effects; and (3) the plaintiff bears the burden of countering the defendant’s rebuttal, and bears the ultimate burden of persuasion. Virtually everyone agrees that this or some similar structure should be used. However, the Draft Guidelines’ silence on the appropriate burden is consistent with the agencies’ historical practice: The 2010 Horizontal Merger Guidelines allocate no burdens and the 1997 Merger Guidelines explicitly decline to assign the burden of proof or production on any issue.

Second, the Draft Guidelines take an unclear approach to elimination of double marginalization (EDM). The appropriate treatment of EDM has been one of the key topics in the debates on the law and economics of vertical mergers, but the Draft Guidelines take no position on the key issues in the conversation about EDM: whether it should be presumed in a vertical merger, and whether it should be presumed to be merger-specific.

EDM may occur if two vertically related firms merge and the new firm captures the margins of both the upstream and downstream firms. After the merger, the downstream firm gets its input at cost, allowing the merged firm to eliminate one party’s markup. This makes price reduction profitable for the merged firm where it would not have been for either firm before the merger. 
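To see the arithmetic, consider a stylized example of our own devising (it is not drawn from the Draft Guidelines): linear demand, a single input purchased in fixed proportions, and a constant upstream marginal cost.

Demand: P = 120 − Q; upstream marginal cost c = 40
Pre-merger (successive markups): the upstream firm sets a wholesale price w = (120 + 40)/2 = 80; the downstream firm, treating 80 as its cost, sets P = (120 + 80)/2 = 100 and sells Q = 20
Post-merger (integrated): the firm prices against the true cost of 40, so P = (120 + 40)/2 = 80 and Q = 40

The retail price falls from 100 to 80 and output doubles, while combined profit rises from 1,200 (800 upstream plus 400 downstream) to 1,600. Cutting price below 100 was unprofitable for the stand-alone downstream firm, which perceived a cost of 80; it is profitable for the merged firm, which keeps the entire 40 margin on every additional unit sold.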

The Draft Guidelines state that the agencies will not challenge vertical mergers where EDM means that the merger is unlikely to be anticompetitive. OK. Duh. However, they also claim that in some situations, EDM may not occur, or its benefits may be offset by other incentives for the merged firm to raise prices. The Draft Guidelines do not weigh in on whether it should be presumed that vertical mergers will result in EDM, or whether it should be presumed that EDM is merger-specific. 

These are the most important questions in the debate over EDM. Some economists take the position that EDM is not guaranteed, and not necessarily merger-specific. Others take the position that EDM is basically inevitable in a vertical merger, and is unlikely to be achieved without a merger. That is: if there is EDM, it should be presumed to be merger-specific. Those who take the former view would put the burden on the merging parties to establish pricing benefits of EDM and its merger-specificity. 

Our own view is that this efficiency is pervasive and significant in vertical mergers. The defense should therefore bear only a burden of producing evidence, and the agencies should bear the burden of disproving the significance of EDM where shown to exist. This would depart from the typical standard in a merger case, under which defendants must prove the reality, magnitude, and merger-specific character of the claimed efficiencies (the Draft Guidelines adopt this standard along with the approach of the 2010 Horizontal Merger Guidelines on efficiencies). However, it would more closely reflect the economic reality of most vertical mergers. 

Conclusion

While the Draft Guidelines are a welcome step forward in the debates around the law and economics of vertical mergers, they do not guide very much. The fact that the Draft Guidelines highlight certain issues is a useful indicator of what the agencies find important, but not a meaningful statement of enforcement policy. 

On a positive note, the Draft Guidelines’ explanations of certain economic concepts important to vertical mergers may serve to illuminate these issues for courts.

However, the agencies’ proposals are not specific enough to create predictability for business or the antitrust bar or provide meaningful guidance for enforcers to develop a consistent enforcement policy. This result is not surprising given the lack of consensus on the law and economics of vertical mergers and the best approach to enforcement. But the antitrust community — and all of its participants — would be better served by a more detailed document that commits to positions on key issues in the relevant debates. 

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.]

This post is authored by Gregory J. Werden (former Senior Economic Counsel, DOJ Antitrust Division (ret.)) and Luke M. Froeb (William C. Oehmig Chair in Free Enterprise and Entrepreneurship, Owen School of Management, Vanderbilt University; former Chief Economist, DOJ Antitrust Division; former Chief Economist, FTC).]

The proposed Vertical Merger Guidelines provide little practical guidance, especially on the key issue of what would lead one of the Agencies to determine that it will not challenge a vertical merger. Although they list the theories on which the Agencies focus and factors the Agencies “may consider,” the proposed Guidelines do not set out conditions necessary or sufficient for the Agencies to conclude that a merger likely would substantially lessen competition. Nor do the Guidelines communicate generally how the Agencies analyze the nature of a competitive process and how it is apt to change with a proposed merger. 

The proposed Guidelines communicate the Agencies’ enforcement policy in part through silences. For example, the Guidelines do not mention several theories that have appeared in recent commentary and thereby signal that the Agencies have decided not to base their analysis on those theories. That silence is constructive, but the Agencies’ silence on the nature of their concern with vertical mergers is not. Since 1982, the Agencies’ merger guidelines have always stated that their concern was market power. Silence on this subject might suggest that the Agencies’ enforcement against vertical mergers is directed to something else. 

The Guidelines’ most conspicuous silence concerns the Agencies’ general attitude toward vertical mergers, and on how vertical and horizontal mergers differ. This silence is deafening: Horizontal mergers combine substitutes, which tends to reduce competition, while vertical mergers combine complements, which tends to enhance efficiency and thus also competition. Unlike horizontal mergers, vertical mergers produce anticompetitive effects only through indirect mechanisms with many moving parts, which makes the prediction of competitive effects from vertical mergers more complex and less certain.

The Guidelines also are unhelpfully silent on the basic economics of vertical integration, and hence of vertical mergers. In assessing a vertical merger, it is essential to appreciate that vertical mergers solve coordination problems that are solved less well, or not at all, by contracts. By solving different coordination problems, a vertical merger can generate merger-specific efficiencies or eliminate double marginalization. But solving a coordination problem need not be a good thing: Competition is the ultimate coordination problem, and a vertical merger can have anticompetitive consequences by helping to solve that coordination problem.

Finally, the Guidelines are unhelpfully silent on the fundamental policy issue presented by vertical merger enforcement: What distinguishes a vertical merger that harms competition from a vertical merger that merely harms competitors? A vertical merger cannot directly eliminate rivalry by increasing market concentration. The Supreme Court has endorsed a foreclosure theory under which the merger directly causes injury to a rival and thus proximately causes diminished rivalry. Vertical mergers also might diminish rivalry in other ways, but the proposed Guidelines do not state that the Agencies view diminished rivalry as the hallmark of a lessening of competition.

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.]

This post is authored by Joshua D. Wright (University Professor of Law, George Mason University and former Commissioner, FTC); Douglas H. Ginsburg (Senior Circuit Judge, US Court of Appeals for the DC Circuit; Professor of Law, George Mason University; and former Assistant Attorney General, DOJ Antitrust Division); Tad Lipsky (Assistant Professor of Law, George Mason University; former Acting Director, FTC Bureau of Competition; former chief antitrust counsel, Coca-Cola; former Deputy Assistant Attorney General, DOJ Antitrust Division); and John M. Yun (Associate Professor of Law, George Mason University; former Acting Deputy Assistant Director, FTC Bureau of Economics).]

After much anticipation, the Department of Justice Antitrust Division and the Federal Trade Commission released a draft of the Vertical Merger Guidelines (VMGs) on January 10, 2020. The Global Antitrust Institute (GAI) will be submitting formal comments to the agencies regarding the VMGs and this post summarizes our main points.

The Draft VMGs supersede the 1984 Merger Guidelines, which represent the last guidance from the agencies on the treatment of vertical mergers. The VMGs provide valuable guidance and greater clarity in terms of how the agencies will review vertical mergers going forward. While the proposed VMGs generally articulate an analytical framework based upon sound economic principles, there are several ways that the VMGs could more deeply integrate sound economics and our empirical understanding of the competitive consequences of vertical integration.

In this post, we discuss four issues: (1) incorporating the elimination of double marginalization (EDM) into the analysis of the likelihood of a unilateral price effect; (2) eliminating the role of market shares and structural analysis; (3) highlighting that the weight of empirical evidence supports the proposition that vertical mergers are less likely to generate competitive concerns than horizontal mergers; and (4) recognizing the importance of transaction cost-based efficiencies.

Elimination of double marginalization is a unilateral price effect

EDM is discussed separately from both unilateral price effects, in Section 5, and efficiencies, in Section 9, of the draft VMGs. This is notable because the structure of the VMGs obfuscates the relevant economics of internalizing pricing externalities and may encourage the misguided view that EDM is a special form of efficiency.

When separate upstream and downstream entities price their products, they do not fully take into account the impact of their pricing decision on each other — even though they are ultimately part of the same value chain for a given product. Vertical mergers eliminate a pricing externality since the post-merger upstream and downstream units are fully aligned in terms of their pricing incentives. In this sense, EDM is indistinguishable from the unilateral effects discussed in Section 5 of the VMGs that cause upward pricing pressure. Specifically, in the context of mergers, just as there is a greater incentive, under certain conditions, to foreclose or raise rivals’ costs (RRC) post-merger (although this does not mean there is an ability to engage in these behaviors), there is also an incentive to lower prices due to the elimination of a markup along the supply chain. Consequently, we really cannot assess unilateral effects without accounting for the full set of incentives that could move prices in either direction.

Further, it is improper to consider EDM in the context of a “net effect,” given that this phrase has strong connotations of weighing efficiencies against findings of anticompetitive harm. Rather, “unilateral price effects” actually include EDM — just as a finding that a merger will induce entry properly belongs in a unilateral effects analysis. For these reasons, we suggest incorporating the discussion of EDM into the discussion of unilateral effects contained in Section 5 of the VMGs and eliminating Section 6. Otherwise, by separating EDM into its own section, the agencies are creating a type of “limbo” between unilateral effects and efficiencies — which creates confusion, particularly for courts. It is also important to emphasize that the mere existence of alternative contracting mechanisms to mitigate double marginalization does not tell us about their relative efficacy compared to vertical integration, as there are costs to contracting.
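One simple way to express this point (our own shorthand, not language from the VMGs) is to fold EDM into the same pricing-pressure calculus used for the other unilateral effects:

net pricing pressure on the merged firm’s downstream product ≈ (upward pressure from any foreclosure or RRC incentive) − (w − c)

where w is the wholesale price the downstream unit paid before the merger and c is the upstream unit’s marginal cost, so that (w − c) is the eliminated markup. Both terms arise from the same post-merger change in pricing incentives, which is why treating the second term as a stand-alone “efficiency” to be weighed separately, rather than as part of the unilateral effects analysis itself, obscures the economics.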

Role of market shares and structural analysis

In Section 3 (“Market Participants, Market Shares, and Market Concentration”), there are two notable statements. First,

[t]he Agencies…do not rely on changes in concentration as a screen for or indicator of competitive effects from vertical theories of harm.

This statement, without further explanation, is puzzling, as vertical mergers produce no change in concentration in the relevant market. Second, the VMGs then go on to state that 

[t]he Agencies are unlikely to challenge a vertical merger where the parties to the merger have a share in the relevant market of less than 20 percent, and the related product is used in less than 20 percent of the relevant market.

The very next sentence reads:

In some circumstances, mergers with shares below the thresholds can give rise to competitive concerns.

From this, we conclude that the VMGs are adopting a prior belief that, if both the relevant product and the related product have a less than 20 percent share in the relevant market, the acquisition is either competitively neutral or benign. The VMGs make clear, however, that they do not offer a safe harbor. With these statements, the agencies run the risk that the 20 percent figure will be interpreted as a trigger for competitive concern. There is no sound economic reason to believe that a 20 percent share in the relevant market or the related market is of any particular importance to predicting competitive effects. The VMGs should eliminate the discussion of market shares altogether. At a minimum, the final guidelines would benefit from some explanation for this threshold if it is retained.

Empirical evidence on the welfare impact of vertical mergers

In contrast to vertical mergers, horizontal mergers inherently involve a degree of competitive overlap and an associated loss of at least some degree of rivalry between actual and/or potential competitors. The price effect for vertical mergers, however, is generally theoretically ambiguous — even before accounting for efficiencies — due to EDM and the uncertainty regarding whether the integrated firm has an incentive to raise rivals’ costs or foreclose. Thus, for vertical mergers, empirically evaluating the welfare effects of consummated mergers has been and remains an important area of research to guide antitrust policy.

Consequently, what is noticeably absent from the draft guidelines is an empirical grounding. Consistent empirical findings should inform the agencies’ priors. With few exceptions, the literature does not support the view that vertical integration and vertical restraints are used for anticompetitive reasons — see Lafontaine & Slade (2007) and Cooper et al. (2005). (For an update on the empirical literature from 2009 through 2018, which confirms the conclusions of the prior literature, see the GAI’s Comment on Vertical Mergers submitted during the recent FTC Hearings.) Thus, the modern antitrust approach to vertical mergers, as reflected in the antitrust literature, should reflect the empirical reality that vertical relationships are generally procompetitive or neutral.

The bottom line is that how often vertical mergers are anticompetitive should influence our framework and priors. Given the strong empirical evidence that vertical mergers do not tend to result in welfare losses for consumers, we believe the agencies should consider at least the modest statement that vertical mergers are more often than not procompetitive or, alternatively, that vertical mergers tend to be more procompetitive or neutral than horizontal ones. Thus, we believe the final VMGs would benefit from language similar to the 1984 VMGs: “Although nonhorizontal mergers are less likely than horizontal mergers to create competitive problems, they are not invariably innocuous.”

Transaction cost efficiencies and merger specificity

The VMGs address efficiencies in Section 8. Under the VMGs, the Agencies will evaluate efficiency claims by the parties using the approach set forth in Section 10 of the 2010 Horizontal Merger Guidelines. Thus, efficiencies must be both cognizable and merger specific to be considered by the agencies.

In general, the VMGs also adopt an approach that is consistent with the teachings of the robust literature on transaction cost economics, which looks to the costs of using the price system to explain the boundaries of economic organizations and stresses the importance of incorporating such considerations into antitrust analysis. In particular, this literature has demonstrated, both theoretically and empirically, that the decision to contract or vertically integrate is often driven by the relatively high costs of contracting, as well as concerns regarding the enforcement of contracts and opportunistic behavior. This literature suggests that such transaction cost efficiencies in the vertical merger context often will be both cognizable and merger-specific, and it rejects an approach that would presume such efficiencies are not merger specific simply because they could in theory be achieved via contract.

While we agree with the overall approach set out in the VMGs, we are concerned that the application of Section 8, in practice, without more specificity and guidance, will be carried out in a way that is inconsistent with the approach set out in Section 10 of the 2010 HMGs.

Conclusion

Overall, the agencies deserve credit for highlighting the relevant factors in assessing vertical mergers and for not attempting to be overly aggressive in advancing untested merger assessment tools or theories of harm.

The agencies should seriously consider, however, refinements in a number of critical areas:

  • First, discussion of EDM should be integrated into the larger unilateral effects analysis in Section 5 of the VMGs. 
  • Second, the agencies should eliminate the role of market shares and structural analysis in the VMGs. 
  • Third, the final VMGs should acknowledge that vertical mergers are less likely to generate competitive concerns than horizontal mergers. 
  • Finally, the final VMGs should recognize the importance of transaction cost-based efficiencies. 

We believe incorporating these changes will result in guidelines that are more in conformity with sound economics and the empirical evidence.

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Margaret E. Slade (Professor Emeritus, Vancouver School of Economics, The University of British Columbia).]

A revision of the DOJ’s Non-Horizontal Merger Guidelines is long overdue, and the Draft Vertical Merger Guidelines (“Guidelines”) take steps in the right direction. However, the treatment of important issues is uneven. For example, the discussions of market definition and shares are relatively thorough, whereas the discussions of anti-competitive harm and pro-competitive efficiencies are vaguer.

Market definition, market shares, and concentration

The Guidelines are correct in deferring to the Horizontal Merger Guidelines for most aspects of market definition, market shares, and market concentration. The relevant sections of the Horizontal Guidelines are not without problems. However, it would make no sense to use different methods and concepts to delineate horizontal markets that are involved in vertical mergers compared to those that are involved in horizontal mergers.  

One aspect of market definition, however, is new: the notion of a related product, which is a product that links the up and downstream firms. Such products might be inputs, distribution systems, or sets of customers. The Guidelines set thresholds of 20% for the related product’s share, as well as the parties’ shares, in the relevant market. 

Those thresholds are, of course, only indicative: mergers with shares below them can still be investigated and, conversely, mergers that exceed them need not be challenged. It would therefore be helpful to have a list of factors that could be used to determine which mergers that fall below those thresholds are more likely to be investigated, and vice versa. For example, the EU Vertical Merger Guidelines list circumstances, such as the existence of significant cross-shareholding relationships, the fact that one of the firms is considered to be a maverick, and suspicion that coordination is ongoing, under which mergers that fall into the safety zones are more apt to be investigated.

Elimination of double marginalization and other efficiencies

Although the elimination of double marginalization (EDM) removes a pricing externality rather than lowering unit costs, the Guidelines discuss EDM as the principal ‘efficiency’, or at least they have more to say about that factor than about others. Furthermore, after discussing EDM, the Guidelines note that the full EDM benefit might not occur if the downstream firm cannot use the product or if the parties are already engaged in contracting. The first factor is obvious and the second implies that the efficiency is not merger specific. In practice, however, antitrust and regulatory policy has tended to apply the EDM argument uncritically, ignoring several key assumptions and issues.

The simple model of EDM relies on a setting in which there are two monopolists, one up and one downstream, each produces a single product, and production is subject to fixed proportions. This model predicts that welfare will increase after a vertical merger. If these assumptions are violated, however, the predictions change (as John Kwoka and I discuss in more detail here). For example, under variable proportions the unintegrated downstream firm can avoid some of the adverse effects of the inflated wholesale price by substituting away from use of that product, and the welfare implications are ambiguous. Moreover, managerial considerations such as independent pricing by divisions can lead to less-than-full elimination of double marginalization.  
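As a concrete illustration of how stark the simple model’s prediction is, the short sketch below (our own toy example with made-up linear-demand parameters; it is not taken from the Guidelines or from the paper linked above) computes the pre- and post-merger outcomes under exactly those assumptions. Relaxing the assumptions, as noted above and in the paragraphs that follow, can change the prediction.

def downstream_price(marginal_cost, a=100.0):
    # Profit-maximizing price for a monopolist facing inverse demand P = a - Q.
    return (a + marginal_cost) / 2.0

def successive_monopoly(a=100.0, c=20.0):
    # Pre-merger: the upstream monopolist picks the wholesale price w to maximize
    # (w - c) * Q(w), where Q(w) = (a - w) / 2 is the downstream firm's response.
    w = (a + c) / 2.0
    p = downstream_price(w, a)
    return w, p, a - p  # wholesale price, retail price, quantity

def integrated_monopoly(a=100.0, c=20.0):
    # Post-merger: a single firm prices against the true marginal cost c.
    p = downstream_price(c, a)
    return p, a - p

w, p_pre, q_pre = successive_monopoly()
p_post, q_post = integrated_monopoly()
print(f"Pre-merger:  w = {w:.0f}, retail price = {p_pre:.0f}, quantity = {q_pre:.0f}")
print(f"Post-merger: retail price = {p_post:.0f}, quantity = {q_post:.0f}")
# With these numbers the retail price falls from 80 to 60 and output doubles,
# so consumer surplus and total welfare rise, exactly as the simple model predicts.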

With multi-product firms, the integrated firm’s average downstream prices need not fall and can even rise when double marginalization is eliminated. To illustrate, after EDM the products with eliminated margins become relatively more profitable to sell. This gives the integrated firm incentives to divert demand towards those products by increasing the prices of its products for which double marginalization was not eliminated. Moreover, under some circumstances, the integrated downstream price can also rise.

Since violations of the simple model are present in almost all cases, it would be helpful to include a more complete list of factors that cause the simple model — the one that predicts that EDM is always welfare improving — to fail.

Unlike in the case of horizontal mergers, real productive efficiencies on the supply side are often given less attention in assessing vertical mergers. Those efficiencies, which include economies of scope, the ability to coordinate other aspects of the vertical chain such as inventories and distribution, and the expectation of productivity growth due to knowledge transfers, can be important.

Moreover, organizational efficiencies, such as mitigating contracting, holdup, and renegotiation costs, facilitating specific investments in physical and human capital, and providing appropriate incentives within firms, are usually ignored. Those efficiencies can be difficult to evaluate. Nevertheless, they should not be excluded from consideration on that basis.

Equilibrium effects

On page 4, the Guidelines suggest that merger simulations might be used to quantify unilateral price effects of vertical mergers. However, they have nothing to say about the pitfalls. Unfortunately, compared to horizontal merger simulations, there are many more assumptions that are required to construct vertical simulation models and thus many more places where they can go wrong. In particular, one must decide on the number and identity of the rivals; the related products that are potentially disadvantaged; the geographic markets in which foreclosure or raising rivals’ costs are likely to occur; the timing of moves: whether up and downstream prices are set simultaneously or the upstream firm is a first mover; the link between up and downstream: whether bargaining occurs or the upstream firm makes take-it-or-leave-it offers; and, as I discuss below, the need to evaluate the raising rivals’ costs (RRC) and elimination of double marginalization (EDM) effects simultaneously.

These choices can be crucial in determining model predictions. Indeed, as William Rogerson notes (in an unpublished 2019 draft paper, Modeling and Predicting the Competitive Effects of Vertical Mergers Due to Changes in Bargaining Leverage: The Bargaining Leverage Over Rivals (BLR) Effect), when moves are simultaneous, there is no RRC effect. This is true because, when negotiating over input prices, firms take downstream prices as given. 

On the other hand, bargaining introduces a new competitive effect — the bargaining leverage effect — which arises because, after a vertical merger, the disagreement payoff is higher. Indeed, the merged firm recognizes the increased profit that its downstream integrated division will earn if the input is withheld from the rival. In contrast, the upstream firm’s disagreement payoff is irrelevant when it has all of the bargaining power.

Finally, on page 5, the Guidelines describe something that sounds like a vertical upward pricing pressure (UPP) index, analogous to the GUPPI that has been successfully employed in evaluating horizontal mergers. However, extending the GUPPI to a vertical context is not straightforward.

To illustrate, Das Varma and Di Stefano show that a sequential process (first calculating the RRC effect and, only if that effect is substantial, then evaluating the EDM effect and comparing the two) can be very misleading. The problem is that the two effects are not independent of one another. Moreover, when the two are determined simultaneously, the equilibrium RRC effect can be larger or smaller than the sequential one and can even change sign (i.e., lowering rivals’ costs). What these considerations mean is that vertical merger simulations have to be carefully crafted to fit the markets that are susceptible to foreclosure and that a one-size-fits-all model can be very misleading. Furthermore, if a simpler sequential screening process is used, careful consideration must be given to whether the markets of interest satisfy the assumptions under which that process will yield reasonably accurate results.

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by William J. Kolasky (Partner, Hughes Hubbard & Reed; former Deputy Assistant Attorney General, DOJ Antitrust Division), and Philip A. Giordano (Partner, Hughes Hubbard & Reed LLP).

[Kolasky & Giordano: The authors thank Katherine Taylor, an associate at Hughes Hubbard & Reed, for her help in researching this article.]

On January 10, the Department of Justice (DOJ) withdrew the 1984 DOJ Non-Horizontal Merger Guidelines, and, together with the Federal Trade Commission (FTC), released new draft 2020 Vertical Merger Guidelines (“DOJ/FTC draft guidelines”) on which it seeks public comment by February 26.[1] In announcing these new draft guidelines, Makan Delrahim, the Assistant Attorney General for the Antitrust Division, acknowledged that while many vertical mergers are competitively beneficial or neutral, “some vertical transactions can raise serious concern.” He went on to explain that, “The revised draft guidelines are based on new economic understandings and the agencies’ experience over the past several decades and better reflect the agencies’ actual practice in evaluating proposed vertical mergers.” He added that he hoped these new guidelines, once finalized, “will provide more clarity and transparency on how we review vertical transactions.”[2]

While we agree with the DOJ and FTC that the 1984 Non-Horizontal Merger Guidelines are now badly outdated and that a new set of vertical merger guidelines is needed, we question whether the draft guidelines released on January 10 will provide the desired “clarity and transparency.” In our view, the proposed guidelines give insufficient recognition to the wide range of efficiencies that flow from most, if not all, vertical mergers. In addition, the guidelines fail to provide sufficiently clear standards for challenging vertical mergers, thereby leaving too much discretion in the hands of the agencies as to when they will challenge a vertical merger and too much uncertainty for businesses contemplating a vertical merger. 

What is most troubling is that this did not need to be so. In 2008, the European Commission, as part of its merger process reform initiative, issued an excellent set of non-horizontal merger guidelines that adopt basically the same analytical framework as the new draft guidelines for evaluating vertical mergers.[3] The EU guidelines, however, lay out in much more detail the factors the Commission will consider and the standards it will apply in evaluating vertical transactions. That being so, it is difficult to understand why the DOJ and FTC did not propose a set of vertical merger guidelines that more closely mirror those of the European Commission, rather than try to reinvent the wheel with a much less complete set of guidelines.

Rather than making the same mistake ourselves, we will try to summarize the EU vertical merger guidelines and to explain why we believe they are markedly better than the draft guidelines the DOJ and FTC have proposed. We would urge the DOJ and FTC to consider revising their draft guidelines to make them more consistent with the EU vertical merger guidelines. Doing so would, among other things, promote greater convergence between the two jurisdictions, which is very much in the interest of both businesses and consumers in an increasingly global economy.

The principal differences between the draft joint guidelines and the EU vertical merger guidelines

1. Acknowledgement of the key differences between horizontal and vertical mergers

The EU guidelines begin with an acknowledgement that, “Non-horizontal mergers are generally less likely to significantly impede effective competition than horizontal mergers.” As they explain, this is because of two key differences between vertical and horizontal mergers.

  • First, unlike horizontal mergers, vertical mergers “do not entail the loss of direct competition between the merging firms in the same relevant market.”[4] As a result, “the main source of anti-competitive effect in horizontal mergers is absent from vertical and conglomerate mergers.”[5]
  • Second, vertical mergers are more likely than horizontal mergers to provide substantial, merger-specific efficiencies, without any direct reduction in competition. The EU guidelines explain that these efficiencies stem from two main sources, both of which are intrinsic to vertical mergers. The first is that, “Vertical integration may thus provide an increased incentive to seek to decrease prices and increase output because the integrated firm can capture a larger fraction of the benefits.”[6] The second is that, “Integration may also decrease transaction costs and allow for a better co-ordination in terms of product design, the organization of the production process, and the way in which the products are sold.”[7]

The DOJ/FTC draft guidelines do not acknowledge these fundamental differences between horizontal and vertical mergers. The 1984 DOJ non-horizontal guidelines, by contrast, contained an acknowledgement of these differences very similar to that found in the EU guidelines. First, the 1984 guidelines acknowledge that, “By definition, non-horizontal mergers involve firms that do not operate in the same market. It necessarily follows that such mergers produce no immediate change in the level of concentration in any relevant market as defined in Section 2 of these Guidelines.”[8] Second, the 1984 guidelines acknowledge that, “An extensive pattern of vertical integration may constitute evidence that substantial economies are afforded by vertical integration. Therefore, the Department will give relatively more weight to expected efficiencies in determining whether to challenge a vertical merger than in determining whether to challenge a horizontal merger.”[9] Neither of these acknowledgements can be found in the new draft guidelines.

These key differences have also been acknowledged by the courts of appeals for both the Second and D.C. circuits in the agencies’ two most recent litigated vertical mergers challenges: Fruehauf Corp. v. FTC in 1979[10] and United States v. AT&T in 2019.[11] In both cases, the courts held, as the D.C. Circuit explained in AT&T, that because of these differences, the government “cannot use a short cut to establish a presumption of anticompetitive effect through statistics about the change in market concentration” – as it can in a horizontal merger case – “because vertical mergers produce no immediate change in the relevant market share.”[12] Instead, in challenging a vertical merger, “the government must make a ‘fact-specific’ showing that the proposed merger is ‘likely to be anticompetitive’” before the burden shifts to the defendants “to present evidence that the prima facie case ‘inaccurately predicts the relevant transaction’s probable effect on future competition,’ or to ‘sufficiently discredit’ the evidence underlying the prima facie case.”[13]

While the DOJ/FTC draft guidelines acknowledge that a vertical merger may generate efficiencies, they propose that the parties to the merger bear the burden of identifying and substantiating those efficiencies under the same standards applied by the 2010 Horizontal Merger Guidelines. Meeting those standards in the case of a horizontal merger can be very difficult. For that reason, it is important that the DOJ/FTC draft guidelines be revised to make it clear that before the parties to a vertical merger are required to establish efficiencies meeting the horizontal merger guidelines’ evidentiary standard, the agencies must first show that the merger is likely to substantially lessen competition, based on the type of fact-specific evidence the courts required in both Fruehauf and AT&T.

2. Safe harbors

Although they do not refer to it as a “safe harbor,” the DOJ/FTC draft guidelines state that, 

The Agencies are unlikely to challenge a vertical merger where the parties to the merger have a share in the relevant market of less than 20 percent, and the related product is used in less than 20 percent of the relevant market.[14] 

If we understand this statement correctly, it means that the agencies may challenge a vertical merger in any case where one party has a 20% share in a relevant market and the other party has a 20% or higher share of any “related product,” i.e., any “product or service” that is supplied by the other party to firms in that relevant market. 

By contrast, the EU guidelines state that,

The Commission is unlikely to find concern in non-horizontal mergers . . . where the market share post-merger of the new entity in each of the markets concerned is below 30% . . . and the post-merger HHI is below 2,000.[15] 

Both the EU guidelines and the DOJ/FTC draft guidelines are careful to explain that these statements do not create any “legal presumption” that vertical mergers below these thresholds will not be challenged or that vertical mergers above those thresholds are likely to be challenged.
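To see how the two prongs of the EU screen interact, consider an illustrative calculation with invented shares (the HHI is simply the sum of the squared percentage shares of all firms in the market). Suppose the post-merger shares are 28, 22, 15, 15, 10, and 10 percent:

HHI = 28² + 22² + 15² + 15² + 10² + 10² = 784 + 484 + 225 + 225 + 100 + 100 = 1,918

Here the merged entity’s 28 percent share and the 1,918 HHI both fall below the EU thresholds, so the Commission would be unlikely to find concern. By contrast, in a market of four equal 25 percent firms the HHI is 2,500, so the screen would not apply even though every individual share is below 30 percent.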

The EU guidelines are more consistent than the DOJ/FTC draft guidelines both with U.S. case law and with the actual practice of both the DOJ and FTC. It is important to remember that the raising rivals’ costs theory of vertical foreclosure was first developed nearly four decades ago by two young economists, David Scheffman and Steve Salop, as a theory of exclusionary conduct that could be used against dominant firms in place of the more simplistic theories of vertical foreclosure that the courts had previously relied on and which by 1979 had been totally discredited by the Chicago School for the reasons stated by the Second Circuit in Fruehauf.[16] 

As the Second Circuit explained in Fruehauf, it was “unwilling to assume that any vertical foreclosure lessens competition” because 

[a]bsent very high market concentration or some other factor threatening a tangible anticompetitive effect, a vertical merger may simply realign sales patterns, for insofar as the merger forecloses some of the market from the merging firms’ competitors, it may simply free up that much of the market, in which the merging firm’s competitors and the merged firm formerly transacted, for new transactions between the merged firm’s competitors and the merging firm’s competitors.[17] 

Or, as Robert Bork put it more colorfully in The Antitrust Paradox, in criticizing the FTC’s decision in A.G. Spalding & Bros., Inc.[18]:

We are left to imagine eager suppliers and hungry customers, unable to find each other, forever foreclosed and left languishing. It would appear the commission could have cured this aspect of the situation by throwing an industry social mixer.[19]

Since David Scheffman and Steve Salop first began developing their raising rivals’ cost theory of exclusionary conduct in the early 1980s, gallons of ink have been spilled in legal and economic journals discussing and evaluating that theory.[20] The general consensus of those articles is that while raising rivals’ cost is a plausible theory of exclusionary conduct, proving that a defendant has engaged in such conduct is very difficult in practice. It is even more difficult to predict whether, in evaluating a proposed merger, the merged firm is likely to engage in such conduct at some time in the future. 

Consistent with the Second Circuit’s decision in Fruehauf and with this academic literature, the courts, in deciding cases challenging exclusive dealing arrangements under either a vertical foreclosure theory or a raising rivals’ cost theory, have generally been willing to find that the challenged exclusive dealing arrangements violated section 1 of the Sherman Act only in cases where the defendant had a dominant or near-dominant share of a highly concentrated market — usually meaning a share of 40 percent or more.[21] Likewise, all but one of the vertical mergers challenged by either the FTC or DOJ since 1996 have involved parties that had dominant or near-dominant shares of a highly concentrated market.[22] A majority of these involved mergers that were not purely vertical, but in which there was also a direct horizontal overlap between the two parties.

One of the few exceptions is AT&T/Time Warner, a challenge the DOJ lost in both the district court and the D.C. Circuit.[23] The outcome of that case illustrates the difficulty the agencies face in trying to prove a raising rivals’ cost theory of vertical foreclosure where the merging firms do not have a dominant or near-dominant share in either of the affected markets.

Given these court decisions and the agencies’ historical practice of challenging vertical mergers only between companies with dominant or near-dominant shares in highly concentrated markets, we would urge the DOJ and FTC to consider raising the market share threshold below which it is unlikely to challenge a vertical merger to at least 30 percent, in keeping with the EU guidelines, or to 40 percent in order to make the vertical merger guidelines more consistent with the U.S. case law on exclusive dealing.[24] We would also urge the agencies to consider adding a market concentration HHI threshold of 2,000 or higher, again in keeping with the EU guidelines.

3. Standards for applying a raising rivals’ cost theory of vertical foreclosure

Another way in which the EU guidelines are markedly better than the DOJ/FTC draft guidelines is in explaining the factors taken into consideration in evaluating whether a vertical merger will give the parties both the ability and incentive to raise their rivals’ costs in a way that will enable the merged entity to increase prices to consumers. Most importantly, the EU guidelines distinguish clearly between input foreclosure and customer foreclosure, and devote an entire section to each. For brevity, we will focus only on input foreclosure to show why we believe the more detailed approach the EU guidelines take is preferable to the more cursory discussion in the DOJ/FTC draft guidelines.

In discussing input foreclosure, the EU guidelines correctly distinguish between whether a vertical merger will give the merged firm the ability to raise rivals’ costs in a way that may substantially lessen competition and, if so, whether it will give the merged firm an incentive to do so. These are two quite distinct questions, which the DOJ/FTC draft guidelines unfortunately seem to lump together.

The ability to raise rivals’ costs

The EU guidelines identify four important conditions that must exist for a vertical merger to give the merged firm the ability to raise its rivals’ costs. First, the alleged foreclosure must concern an important input for the downstream product, such as one that represents a significant cost factor relative to the price of the downstream product. Second, the merged entity must have a significant degree of market power in the upstream market. Third, the merged entity must be able, by reducing access to its own upstream products or services, to affect negatively the overall availability of inputs for rivals in the downstream market in terms of price or quality. Fourth, the agency must examine the degree to which the merger may free up capacity of other potential input suppliers. If that capacity becomes available to downstream competitors, the merger may simply realign purchase patterns among competing firms, as the Second Circuit recognized in Fruehauf.

The incentive to foreclose access to inputs: 

The EU guidelines recognize that the incentive to foreclose depends on the degree to which foreclosure would be profitable. In making this determination, the vertically integrated firm will take into account how its supplies of inputs to competitors downstream will affect not only the profits of its upstream division, but also of its downstream division. Essentially, the merged entity faces a trade-off between the profit lost in the upstream market due to a reduction of input sales to (actual or potential) rivals and the profit gained from expanding sales downstream or, as the case may be, raising prices to consumers. This trade-off is likely to depend on the margins the merged entity obtains on upstream and downstream sales. Other things constant, the lower the margins upstream, the lower the loss from restricting input sales. Similarly, the higher the downstream margins, the higher the profit gain from increasing market share downstream at the expense of foreclosed rivals.
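The trade-off can be summarized in a back-of-the-envelope inequality (our own formulation, not text from the EU guidelines): foreclosure is profitable, roughly, only when

m_D × ΔQ_D > m_U × ΔQ_U

where m_U and m_D are the merged firm’s upstream and downstream margins, ΔQ_U is the volume of input sales to rivals lost by restricting supply, and ΔQ_D is the additional downstream volume (or its price-increase equivalent) that the downstream division captures as a result. This simply restates the point in the preceding paragraph: thin upstream margins and fat downstream margins make foreclosure more attractive, and vice versa.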

The EU guidelines recognize that the incentive for the integrated firm to raise rivals’ costs further depends on the extent to which downstream demand is likely to be diverted away from foreclosed rivals and the share of that diverted demand the downstream division of the integrated firm can capture. This share will normally be higher the less capacity constrained the merged entity will be relative to non-foreclosed downstream rivals and the more the products of the merged entity and foreclosed competitors are close substitutes. The effect on downstream demand will also be higher if the affected input represents a significant proportion of downstream rivals’ costs or if it otherwise represents a critical component of the downstream product.

The EU guidelines recognize that the incentive to foreclose actual or potential rivals may also depend on the extent to which the downstream division of the integrated firm can be expected to benefit from higher price levels downstream as a result of a strategy to raise rivals’ costs. The greater the market shares of the merged entity downstream, the greater the base of sales on which to enjoy increased margins. However, an upstream monopolist that is already able to fully extract all available profits in vertically related markets may not have any incentive to foreclose rivals following a vertical merger. Therefore, the ability to extract available profits from consumers does not follow immediately from a very high market share; to come to that conclusion requires a more thorough analysis of the actual and future constraints under which the monopolist operates.

Finally, the EU guidelines require the Commission to examine not only the incentives to adopt such conduct, but also the factors liable to reduce, or even eliminate, those incentives, including the possibility that the conduct is unlawful. In this regard, the Commission will consider, on the basis of a summary analysis: (i) the likelihood that this conduct would clearly be unlawful under Community law, (ii) the likelihood that this illegal conduct could be detected, and (iii) the penalties that could be imposed.

Overall likely impact on effective competition: 

Finally, the EU guidelines recognize that a vertical merger will raise foreclosure concerns only when it would lead to increased prices in the downstream market. This normally requires that the foreclosed suppliers play a sufficiently important role in the competitive process in the downstream market. In general, the higher the proportion of rivals that would be foreclosed in the downstream market, the more likely the merger can be expected to result in a significant price increase in the downstream market and, therefore, to significantly impede effective competition. 

In making these determinations, the Commission must under the EU guidelines also assess the extent to which a vertical merger may raise barriers to entry, a criterion that is also found in the 1984 DOJ non-horizontal merger guidelines but is strangely missing from the DOJ/FTC draft guidelines. As the 1984 guidelines recognize, a vertical merger can raise entry barriers if the anticipated input foreclosure would create a need to enter at both the downstream and the upstream level in order to compete effectively in either market.

* * * * *

Rather than issue a set of incomplete vertical merger guidelines, we would urge the DOJ and FTC to follow the lead of the European Commission and develop a set of guidelines setting out in more detail the factors the agencies will consider and the standards they will use in evaluating vertical mergers. The EU non-horizontal merger guidelines provide an excellent model for doing so.


[1] U.S. Department of Justice & Federal Trade Commission, Draft Vertical Merger Guidelines, available at https://www.justice.gov/opa/press-release/file/1233741/download (hereinafter cited as “DOJ/FTC draft guidelines”).

[2] U.S. Department of Justice, Office of Public Affairs, “DOJ and FTC Announce Draft Vertical Merger Guidelines for Public Comment,” Jan. 10, 2020, available at https://www.justice.gov/opa/pr/doj-and-ftc-announce-draft-vertical-merger-guidelines-public-comment.

[3] See European Commission, Guidelines on the assessment of non-horizontal mergers under the Council Regulation on the control of concentrations between undertakings (2008) (hereinafter cited as “EU guidelines”), available at https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52008XC1018(03)&from=EN.

[4] Id. at § 12.

[5] Id.

[6] Id. at § 13.

[7] Id. at § 14. The insight that transactions costs are an explanation for both horizontal and vertical integration in firms first occurred to Ronald Coase in 1932, while he was a student at the London School of Economics. See Ronald H. Coase, Essays on Economics and Economists 7 (1994). Coase took five years to flesh out his initial insight, which he then published in 1937 in a now-famous article, The Nature of the Firm. See Ronald H. Coase, The Nature of the Firm, Economica 4 (1937). The implications of transactions costs for antitrust analysis were explained in more detail four decades later by Oliver Williamson in a book he published in 1975. See Oliver E. Williamson, Markets and Hierarchies: Analysis and Antitrust Implications (1975) (explaining how vertical integration, either by ownership or contract, can, for example, protect a firm from free riding and other opportunistic behavior by its suppliers and customers). Both Coase and Williamson later received Nobel Prizes for Economics for their work recognizing the importance of transactions costs, not only in explaining the structure of firms, but in other areas of the economy as well. See, e.g., Ronald H. Coase, The Problem of Social Cost, J. Law & Econ. 3 (1960) (using transactions costs to explain when bargaining between private parties will, and will not, lead them to internalize the costs their conduct imposes on others).

[8] U.S. Department of Justice, Antitrust Division, 1984 Merger Guidelines, § 4, available at https://www.justice.gov/archives/atr/1984-merger-guidelines.

[9] 1984 Merger Guidelines, at § 4.24.

[10] Fruehauf Corp. v. FTC, 603 F.2d 345 (2d Cir. 1979).

[11] United States v. AT&T, Inc., 916 F.3d 1029 (D.C. Cir. 2019).

[12] Id. at 1032; accord, Fruehauf, 603 F.2d, at 351 (“A vertical merger, unlike a horizontal one, does not eliminate a competing buyer or seller from the market . . . . It does not, therefore, automatically have an anticompetitive effect.”) (emphasis in original) (internal citations omitted).

[13] AT&T, 916 F.3d at 1032 (internal citations omitted).

[14] DOJ/FTC draft guidelines, at 3.

[15] EU guidelines, at § 25.

[16] See Steven C. Salop & David T. Scheffman, Raising Rivals’ Costs, 73 AM. ECON. REV. 267 (1983).

[17] Fruehauf, supra note 10, 603 F.2d at 353 n.9 (emphasis added).

[18] 56 F.T.C. 1125 (1960).

[19] Robert H. Bork, The Antitrust Paradox: A Policy at War with Itself 232 (1978).

[20] See, e.g., Alan J. Meese, Exclusive Dealing, the Theory of the Firm, and Raising Rivals’ Costs: Toward a New Synthesis, 50 Antitrust Bull. 371 (2005); David T. Scheffman and Richard S. Higgins, Twenty Years of Raising Rivals’ Costs: History, Assessment, and Future, 12 George Mason L. Rev. 371 (2003); David Reiffen & Michael Vita, Comment: Is There New Thinking on Vertical Mergers, 63 Antitrust L.J. 917 (1995); Thomas G. Krattenmaker & Steven Salop, Anticompetitive Exclusion: Raising Rivals’ Costs to Achieve Power Over Price, 96 Yale L. J. 209, 219-25 (1986).

[21] See, e.g., United States v. Microsoft, 87 F. Supp. 2d 30, 50-53 (D.D.C. 2000) (summarizing law on exclusive dealing under section 1 of the Sherman Act); id. at 52 (concluding that modern case law requires finding that exclusive dealing contracts foreclose rivals from 40% of the marketplace); Omega Envtl, Inc. v. Gilbarco, Inc., 127 F.3d 1157, 1162-63 (9th Cir. 1997) (finding 38% foreclosure insufficient to make out prima facie case that exclusive dealing agreement violated the Sherman and Clayton Acts, at least where there appeared to be alternate channels of distribution).

[22] See, e.g., United States, et al. v. Comcast, 1:11-cv-00106 (D.D.C. Jan. 18, 2011) (Comcast had over 50% of MVPD market), available at https://www.justice.gov/atr/case-document/competitive-impact-statement-72; United States v. Premdor, Civil No.: 1-01696 (GK) (D.D.C. Aug. 3, 2002) (Masonite manufactured more than 50% of all doorskins sold in the U.S.; Premdor sold 40% of all molded doors made in the U.S.), available at https://www.justice.gov/atr/case-document/final-judgment-151.

[23] See United States v. AT&T, Inc., 916 F.3d 1029 (D.C. Cir. 2019).

[24] See Brown Shoe Co. v. United States, 370 U.S. 294 (1962) (relying on earlier Supreme Court decisions involving exclusive dealing and tying claims under section 3 of the Clayton Act for guidance as to what share of a market must be foreclosed before a vertical merger can be found unlawful under section 7).