Faithful and even occasional readers of this roundup might have noticed a certain temporal discontinuity between the last post and this one. The inimitable Gus Hurwitz has passed the scrivener’s pen to me, a recent refugee from the Federal Trade Commission (FTC), and the roundup is back in business. Any errors going forward are mine. Going back, blame Gus.
Commissioner Noah Phillips departed the FTC last Friday, leaving the Commission down a much-needed advocate for consumer welfare and the antitrust laws as they are, if not as some wish they were. I recommend the reflections posted by Commissioner Christine S. Wilson and my fellow former FTC Attorney Advisor Alex Okuliar. Phillips collaborated with his fellow commissioners on matters grounded in the law and evidence, but he wasn’t shy about crying frolic and detour when appropriate.
The FTC without Noah is a lesser place. Still, while it’s not always obvious, many able people remain at the Commission and some good solid work continues. For example, FTC staff filed comments urging New York State to reject a Certificate of Public Advantage (“COPA”) application submitted by SUNY Upstate Health System and Crouse Medical. The staff’s thorough comments reflect investigation of the proposed merger, recent research, and the FTC’s long experience with COPAs. In brief, the staff identified anticompetitive rent-seeking for what it is. Antitrust exemptions for health-care providers tend to make health care worse, but more expensive. Which is a corollary to the evergreen truth that antitrust exemptions help the special interests receiving them but not a living soul besides those special interests. That’s it, full stop.
More Good News from the Commission
On Sept. 30, a unanimous Commission announced that an independent physician association in New Mexico had settled allegations that it violated a 2005 consent order. The allegations? Roughly 400 physicians—independent competitors—had engaged in price fixing, violating both the 2005 order and the Sherman Act. As the concurring statement of Commissioners Phillips and Wilson put it, the new order “will prevent a group of doctors from allegedly getting together to negotiate… higher incomes for themselves and higher costs for their patients.” Oddly, some have chastised the FTC for bringing the action, calling it anti-labor. But the IPA is a regional “must-have” for health plans and a dominant provider to consumers, including patients who might face tighter budget constraints than the median physician.
Now comes October and an amended complaint in the FTC’s challenge to Meta’s proposed acquisition of Within Unlimited, developer of the VR fitness app Supernatural. The amended complaint is even weaker than the opening salvo. Now, the FTC alleges that the acquisition would eliminate potential competition from Meta in a narrower market, VR-dedicated fitness apps, by “eliminating any probability that Meta would enter the market through alternative means absent the Proposed Acquisition, as well as eliminating the likely and actual beneficial influence on existing competition that results from Meta’s current position, poised on the edge of the market.”
So what if Meta were to abandon the deal—as the FTC wants—but not enter on its own? Same effect, but the FTC cannot seriously suggest that Meta has a positive duty to enter the market. Is there a jurisdiction (or a planet) where a decision to delay or abandon entry would be unlawful unilateral conduct? Suppose instead that Meta enters, with virtual-exercise guns blazing, much to the consternation of firms actually in the market, which might complain about it. Then what? Would the Commission cheer or would it allege harm to nascent competition, or perhaps a novel vertical theory? And by the way, how poised is Meta, given no competing product in late-stage development? Would the FTC prefer that Meta buy a different competitor? Should the overworked staff commence Meta’s due diligence?
Potential competition cases can be viable given the right facts, in areas where there are well-established grounds to predict significant entry. But this is a nascent market in a large, highly dynamic, and innovative industry. The competitive landscape a few years down the road is anyone’s guess. One more piece of speculation: perhaps the staff was right all along. For more, see Dirk Auer’s or Geoffrey Manne’s threads on the amended complaint.
When It Rains It Pours Regulations
On Aug. 22, the FTC published an advance notice of proposed rulemaking (ANPR) to consider the potential regulation of “commercial surveillance and data security” under its Section 18 authority. Shortly thereafter, the Commission announced an Oct. 20 open meeting with three more ANPRs on the agenda.
First, on the advance notice: I’m not sure what the Commission means by “commercial surveillance.” The term appears neither in statutory law nor in prior FTC enforcement actions. It sounds sinister and, surely, it’s an intentional nod to Shoshana Zuboff’s anti-tech polemic “The Age of Surveillance Capitalism.” One thing is plain enough: the proffered definition is as dramatically sweeping as it is hopelessly vague. The Commission seems to be contemplating a general data regulation of some sort, but we don’t know what sort; the ANPR neither states nor even sketches a possible rule. That’s a problem for the FTC, because the law requires the Commission to state its regulatory objectives, along with the regulatory alternatives under consideration, in the ANPR itself. And if the Commission proceeds to a notice of proposed rulemaking (NPRM), it must describe the proposed rule with specificity.
What’s clear is that the ANPR takes a dim view of much of the digital economy. And while the Commission has considerable experience in certain sorts of privacy and data security matters, the ANPR hints at a project extending well past that experience. Commissioners Phillips and Wilson dissented for good and overlapping reasons. Here’s a bit from the Phillips dissent:
When adopting regulations, clarity is a virtue. But the only thing clear in the ANPR is a rather dystopic view of modern commerce…. I cannot support an ANPR that is the first step in a plan to go beyond the Commission’s remit and outside its experience to issue rules that fundamentally alter the internet economy without a clear congressional mandate…. It’s a naked power grab.
Be sure to read the bonus material in the Federal Register—supporting statements from Chair Lina Khan and Commissioners Rebecca Kelly Slaughter and Alvaro Bedoya, and dissenting statements from Commissioners Phillips and Wilson. Chair Khan breezily states that “the questions we ask in the ANPR and the rules we are empowered to issue may be consequential, but they do not implicate the ‘major questions doctrine.’” She’s probably half right: asking questions implicates nothing. But issuing the sort of sweeping rules the ANPR contemplates very well might.
But wait, there’s more! Three additional ANPRs were on the Commission’s Oct. 20 agenda, so that’s four and counting. Will there be a proposed rule on non-competes? Gig workers? Stay tuned. For now, note that rules are not self-enforcing, and that the chair has testified to Congress that the Commission is strapped for resources and struggling to keep up with its statutory mission. Promulgating more regulations seems an odd way to ask Congress for more money. Thus far, there’s no proposed rule on gig workers, but there was a Policy Statement on Enforcement Related to Gig Workers. For more on that story, see Alden Abbott’s TOTM post.
Laws, Like People, Have Their Limits
Read Phillips’s parting dissent in Passport Auto Group, where the Commission combined legitimate allegations with an unhealthy dose of overreach:
The language of the unfairness standard has given the FTC the flexibility to combat new threats to consumers that accompany the development of new industries and technologies. Still, there are limits to the Commission’s unfairness authority. Because this complaint includes an unfairness count that aims to transform Section 5 into an undefined discrimination statute, I respectfully dissent.
Right. Three cheers for effective enforcement of the focused antidiscrimination laws enacted by Congress, by the agencies actually charged with enforcing them. Three cheers for equal protection. And three more, at least, for a little regulatory humility, if we can find it.
Questions concerning the role that economic theory should play in policy decisions are nothing new. Milton Friedman famously drew a distinction between “positive” and “normative” economics, notably arguing that theoretical models were valuable despite their unrealistic assumptions. Kenneth Arrow and Gerard Debreu’s highly theoretical work on General Equilibrium Theory is widely acknowledged as one of the most important achievements of modern economics.
But for all their intellectual value and academic merit, the use of models to inform policy decisions is not uncontroversial. There is indeed a long and unfortunate history of influential economic models turning out to be poor depictions (and predictors) of real-world outcomes.
This raises a key question: should policymakers use economic models to inform their decisions and, if so, how? This post uses the economics of externalities to illustrate both the virtues and pitfalls of economic modeling. Throughout economic history, externalities have routinely been cited to support claims of market failure and calls for government intervention. However, as explained below, these fears have frequently failed to withstand empirical scrutiny.
Today, similar models are touted to support government intervention in digital industries. Externalities are notably said to prevent consumers from switching between platforms, allegedly leading to unassailable barriers to entry and deficient venture-capital investment. Unfortunately, as explained below, the models that underpin these fears are highly abstracted and far removed from underlying market realities.
Ultimately, this post argues that, while models provide a powerful way of thinking about the world, naïvely transposing them to real-world settings is misguided. This is not to say that models are useless—quite the contrary. Indeed, “falsified” models can shed powerful light on economic behavior that would otherwise prove hard to understand.
Bees
Fears surrounding economic externalities are as old as modern economics. For example, in the 1950s, economists routinely cited bee pollination as a source of externalities and, ultimately, market failure.
The basic argument was straightforward: Bees and orchards provide each other with positive externalities. Bees cross-pollinate flowers, and orchards contain vast amounts of nectar upon which bees feed, thus improving honey yields. Accordingly, several famous economists argued that there was a market failure: bees fly where they please, and farmers cannot prevent bees from feeding on their blossoming flowers—allegedly causing underinvestment in both orchards and apiaries. This led James Meade to conclude:
[T]he apple-farmer provides to the beekeeper some of his factors free of charge. The apple-farmer is paid less than the value of his marginal social net product, and the beekeeper receives more than the value of his marginal social net product.
Francis Bator made the same point in “The Anatomy of Market Failure”:

If, then, apple producers are unable to protect their equity in apple-nectar and markets do not impute to apple blossoms their correct shadow value, profit-maximizing decisions will fail correctly to allocate resources at the margin. There will be failure “by enforcement.” This is what I would call an ownership externality. It is essentially Meade’s “unpaid factor” case.
It took more than 20 years and painstaking research by Steven Cheung to conclusively debunk these assertions. So how did economic agents overcome this “insurmountable” market failure?
The answer, it turns out, was extremely simple. While bees do fly where they please, the relative placement of beehives and orchards has a tremendous impact on both fruit and honey yields, partly because bees have a very limited mean foraging range (roughly 2-3 km). This left economic agents with ample scope to prevent free-riding.
Using these natural sources of excludability, they built a web of complex agreements that internalize the symbiotic virtues of beehives and fruit orchards. To cite Steven Cheung’s research:
Pollination contracts usually include stipulations regarding the number and strength of the colonies, the rental fee per hive, the time of delivery and removal of hives, the protection of bees from pesticide sprays, and the strategic placing of hives. Apiary lease contracts differ from pollination contracts in two essential aspects. One is, predictably, that the amount of apiary rent seldom depends on the number of colonies, since the farmer is interested only in obtaining the rent per apiary offered by the highest bidder. Second, the amount of apiary rent is not necessarily fixed. Paid mostly in honey, it may vary according to either the current honey yield or the honey yield of the preceding year.
But what of neighboring orchards? Wouldn’t these entail a more complex externality (i.e., could one orchard free-ride on agreements concluded between other orchards and neighboring apiaries)? Apparently not:
Acknowledging the complication, beekeepers and farmers are quick to point out that a social rule, or custom of the orchards, takes the place of explicit contracting: during the pollination period the owner of an orchard either keeps bees himself or hires as many hives per area as are employed in neighboring orchards of the same type. One failing to comply would be rated as a “bad neighbor,” it is said, and could expect a number of inconveniences imposed on him by other orchard owners. This customary matching of hive densities involves the exchange of gifts of the same kind, which apparently entails lower transaction costs than would be incurred under explicit contracting, where farmers would have to negotiate and make money payments to one another for the bee spillover.
In short, not only did the bee/orchard externality model fail, but it failed in the face of extremely obvious counter-evidence. Even a rapid flip through the Yellow Pages (or, today, a quick Google search) would have revealed a vibrant market for bee-pollination services. The bee externalities, at least as presented in economics textbooks, were merely an economic “fable.” Unfortunately, they would not be the last.
The Lighthouse
Lighthouses provide another cautionary tale. Indeed, Henry Sidgwick, A.C. Pigou, John Stuart Mill, and Paul Samuelson all cited the externalities involved in the provision of lighthouse services as a source of market failure.
Here, too, the problem was allegedly straightforward. A lighthouse cannot prevent ships from free-riding on its services when they sail by it (i.e., it is mostly impossible to determine whether a ship has paid fees and to turn off the lighthouse if that is not the case). Hence there can be no efficient market for light dues (lighthouses were seen as a “public good”). As Paul Samuelson famously put it:
Take our earlier case of a lighthouse to warn against rocks. Its beam helps everyone in sight. A businessman could not build it for a profit, since he cannot claim a price from each user. This certainly is the kind of activity that governments would naturally undertake.
He added that:
[E]ven if the operators were able—say, by radar reconnaissance—to claim a toll from every nearby user, that fact would not necessarily make it socially optimal for this service to be provided like a private good at a market-determined individual price. Why not? Because it costs society zero extra cost to let one extra ship use the service; hence any ships discouraged from those waters by the requirement to pay a positive price will represent a social economic loss—even if the price charged to all is no more than enough to pay the long-run expenses of the lighthouse.
More than a century after it was first mentioned in economics textbooks, Ronald Coase finally laid the lighthouse myth to rest—rebutting Samuelson’s second claim in the process.
What piece of evidence had eluded economists for all those years? As Coase observed, contemporary economists had somehow overlooked the fact that large parts of the British lighthouse system were privately operated, and had been for centuries:
[T]he right to operate a lighthouse and to levy tolls was granted to individuals by Acts of Parliament. The tolls were collected at the ports by agents (who might act for several lighthouses), who might be private individuals but were commonly customs officials. The toll varied with the lighthouse and ships paid a toll, varying with the size of the vessel, for each lighthouse passed. It was normally a rate per ton (say 1/4d or 1/2d) for each voyage. Later, books were published setting out the lighthouses passed on different voyages and the charges that would be made.
In other words, lighthouses used a simple physical feature to create “excludability” and prevent free-riding. The main reason ships require lighthouses is to avoid hitting rocks when they make their way to a port. By tying port fees and light dues, lighthouse owners—aided by mild government-enforced property rights—could easily earn a return on their investments, thus disproving the lighthouse free-riding myth.
Ultimately, this meant that a large share of the British lighthouse system was privately operated throughout the 19th century, and this share would presumably have been more pronounced if government-run “Trinity House” lighthouses had not crowded out private investment:
The position in 1820 was that there were 24 lighthouses operated by Trinity House and 22 by private individuals or organizations. But many of the Trinity House lighthouses had not been built originally by them but had been acquired by purchase or as the result of the expiration of a lease.
Of course, this system was not perfect. Some ships (notably foreign ones that did not dock in the United Kingdom) might free-ride on the arrangement, and it entailed some level of market power: the ability to charge light dues meant that prices were higher than the “socially optimal” baseline of zero (the marginal cost of serving an additional ship is close to zero). That said, tying port fees and light dues might also have reduced double marginalization, to the benefit of sailors.
Samuelson was particularly wary of the market power that went hand in hand with the private provision of public goods, including lighthouses:
Being able to limit a public good’s consumption does not make it a true-blue private good. For what, after all, are the true marginal costs of having one extra family tune in on the program? They are literally zero. Why then prevent any family which would receive positive pleasure from tuning in on the program from doing so?
However, as Coase explained, light fees represented only a tiny fraction of a ship’s costs. In practice, they were thus unlikely to affect market output meaningfully:
[W]hat is the gain which Samuelson sees as coming from this change in the way in which the lighthouse service is financed? It is that some ships which are now discouraged from making a voyage to Britain because of the light dues would in future do so. As it happens, the form of the toll and the exemptions mean that for most ships the number of voyages will not be affected by the fact that light dues are paid. There may be some ships somewhere which are laid up or broken up because of the light dues, but the number cannot be great, if indeed there are any ships in this category.
Samuelson’s critique also falls prey to the nirvana fallacy pointed out by Harold Demsetz: markets might not be perfect, but neither is government intervention. Market power and imperfect appropriability are the two (paradoxical) pitfalls of the former; “white elephants,” underinvestment, and a lack of competition (and the information it generates) tend to stem from the latter.
Which of these solutions is superior, in each case, is an empirical question that early economists had simply failed to consider—assuming instead that market failure was systematic in markets that present prima facie externalities. In other words, models were taken as gospel without any circumspection about their relevance to real-world settings.
The Tragedy of the Commons
Externalities were also said to undermine the efficient use of “common pool resources,” such as grazing lands, common irrigation systems, and fisheries—resources where one agent’s use diminishes that of others, and where exclusion is either difficult or impossible.
The most famous formulation of this problem is Garrett Hardin’s highly influential (over 47,000 citations) “tragedy of the commons.” Hardin cited the example of multiple herdsmen occupying the same grazing ground:
The rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another … But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.
In more technical terms, each economic agent purportedly exerts an unpriced negative externality on the others, thus leading to the premature depletion of common pool resources. Hardin extended this reasoning to other problems, such as pollution and allegations of global overpopulation.
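The divergence at the heart of Hardin’s story can be sketched in a few lines of code. The functions and numbers below are my own illustrative assumptions, not Hardin’s formal model; the point is only that the privately perceived payoff and the social payoff pull apart once costs are shared across the group:

```python
# Toy sketch of the commons logic: each herdsman captures the full gain
# from adding an animal, but bears only his share of the grazing cost.

def private_payoff_of_adding(n_herdsmen, gain_per_animal, cost_per_animal):
    """Net payoff one herdsman perceives from adding an animal."""
    shared_cost = cost_per_animal / n_herdsmen  # cost is spread over everyone
    return gain_per_animal - shared_cost

def social_payoff_of_adding(gain_per_animal, cost_per_animal):
    """Net payoff to the group as a whole from the same extra animal."""
    return gain_per_animal - cost_per_animal

# With 10 herdsmen, a gain of 1.0 and a true grazing cost of 3.0 per animal:
private = private_payoff_of_adding(10, 1.0, 3.0)  # 1.0 - 0.3 = 0.7 > 0
social = social_payoff_of_adding(1.0, 3.0)        # 1.0 - 3.0 = -2.0 < 0

# Each herdsman rationally adds the animal (private payoff is positive),
# even though the group as a whole is made worse off.
print(private, social)
```

Note that with a single herdsman (who internalizes the full cost) the two payoffs coincide, which is exactly why propertization is one of the candidate fixes discussed below.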
Although Hardin hardly documented any real-world occurrences of this so-called tragedy, his policy prescriptions were unequivocal:
The most important aspect of necessity that we must now recognize, is the necessity of abandoning the commons in breeding. No technical solution can rescue us from the misery of overpopulation. Freedom to breed will bring ruin to all.
As with many other theoretical externalities, empirical scrutiny revealed that these fears were greatly overblown. In her Nobel-winning work, Elinor Ostrom showed that economic agents often find ways to markedly mitigate such potential externalities. For example, mountain villages often implement rules and norms that limit the use of grazing grounds and wooded areas. Likewise, landowners across the world often set up “irrigation communities” that prevent agents from overusing water.
Along similar lines, Julian Morris and I conjecture that informal arrangements and reputational effects might mitigate opportunistic behavior in the standard essential patent industry.
These bottom-up solutions are certainly not perfect. Many commons institutions fail; Elinor Ostrom, for example, documents several problematic fisheries, groundwater basins, and forests, although it is worth noting that government intervention was sometimes behind these failures. To cite but one example:
Several scholars have documented what occurred when the Government of Nepal passed the “Private Forest Nationalization Act” […]. Whereas the law was officially proclaimed to “protect, manage and conserve the forest for the benefit of the entire country”, it actually disrupted previously established communal control over the local forests. Messerschmidt (1986, p.458) reports what happened immediately after the law came into effect:
Nepalese villagers began freeriding — systematically overexploiting their forest resources on a large scale.
In any case, the question is not so much whether private institutions fail, but whether they do so more often than government intervention, be it regulation or propertization. In short, the “tragedy of the commons” poses an empirical question: what works better in each case—government intervention, propertization, or emergent rules and norms?
More broadly, the key lesson is that it is wrong to blindly apply models while ignoring real-world outcomes. As Elinor Ostrom herself put it:
The intellectual trap in relying entirely on models to provide the foundation for policy analysis is that scholars then presume that they are omniscient observers able to comprehend the essentials of how complex, dynamic systems work by creating stylized descriptions of some aspects of those systems.
Dvorak Keyboards
In 1985, Paul David published an influential paper arguing that market failures undermined competition between the QWERTY and Dvorak keyboard layouts. This version of history became a dominant narrative in the field of network economics, informing work by Joseph Farrell and Garth Saloner, as well as Jean Tirole.
The basic claim was that QWERTY users’ reluctance to switch toward the putatively superior Dvorak layout exerted a negative externality on the rest of the ecosystem (and a positive externality on other QWERTY users), thus preventing the adoption of a more efficient standard. As Paul David put it:
Although the initial lead acquired by QWERTY through its association with the Remington was quantitatively very slender, when magnified by expectations it may well have been quite sufficient to guarantee that the industry eventually would lock in to a de facto QWERTY standard. […]
Competition in the absence of perfect futures markets drove the industry prematurely into standardization on the wrong system — where decentralized decision making subsequently has sufficed to hold it.
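The lock-in story is, at bottom, a model of increasing returns to adoption. The toy simulation below is my own sketch in the spirit of that literature, not David’s actual model: each new typist simply follows the installed base, so small, random early leads snowball into dominance.

```python
import random

def simulate_adoption(seed, steps=1000):
    """Each new adopter picks a layout with probability proportional to its
    current installed base, so early leads are self-reinforcing."""
    rng = random.Random(seed)
    counts = {"QWERTY": 1, "Dvorak": 1}  # start from a tiny installed base
    for _ in range(steps):
        total = counts["QWERTY"] + counts["Dvorak"]
        if rng.random() < counts["QWERTY"] / total:
            counts["QWERTY"] += 1
        else:
            counts["Dvorak"] += 1
    return counts

# Different random histories lock in to very different market shares,
# even though the two layouts are identical by construction here.
for seed in (1, 2, 3):
    print(simulate_adoption(seed))
```

Because the layouts are equally good by construction, lock-in in this sketch says nothing about which standard is actually better—which is precisely the empirical question the model alone cannot answer.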
Unfortunately, many of the above papers paid little to no attention to actual market conditions in the typewriter and keyboard-layout industries. Years later, Stan Liebowitz and Stephen Margolis undertook a detailed analysis of that market and almost entirely rejected the notion that QWERTY prevailed despite being the inferior standard:
Yet there are many aspects of the QWERTY-versus-Dvorak fable that do not survive scrutiny. First, the claim that Dvorak is a better keyboard is supported only by evidence that is both scant and suspect. Second, studies in the ergonomics literature find no significant advantage for Dvorak that can be deemed scientifically reliable. Third, the competition among producers of typewriters, out of which the standard emerged, was far more vigorous than is commonly reported. Fourth, there were far more typing contests than just the single Cincinnati contest. These contests provided ample opportunity to demonstrate the superiority of alternative keyboard arrangements. That QWERTY survived significant challenges early in the history of typewriting demonstrates that it is at least among the reasonably fit, even if not the fittest that can be imagined.
In short, there was little to no evidence supporting the view that QWERTY inefficiently prevailed because of network effects. The falsification of this narrative also weakens broader claims that network effects systematically lead to either excess momentum or excess inertia in standardization. Indeed, it is tempting to characterize all network industries with heavily skewed market shares as resulting from market failure. Yet the QWERTY/Dvorak story suggests that such a conclusion would be premature.
Killzones, Zoom, and TikTok
If you are still reading at this point, you might think that contemporary scholars would know better than to base calls for policy intervention on theoretical externalities. Alas, nothing could be further from the truth.
For instance, a recent paper by Sai Kamepalli, Raghuram Rajan and Luigi Zingales conjectures that the interplay between mergers and network externalities discourages the adoption of superior independent platforms:
If techies expect two platforms to merge, they will be reluctant to pay the switching costs and adopt the new platform early on, unless the new platform significantly outperforms the incumbent one. After all, they know that if the entering platform’s technology is a net improvement over the existing technology, it will be adopted by the incumbent after merger, with new features melded with old features so that the techies’ adjustment costs are minimized. Thus, the prospect of a merger will dissuade many techies from trying the new technology.
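The quoted mechanism boils down to a simple expected-value comparison. The sketch below is my own stylized rendering of that assumption (the parameter names are illustrative, not the paper’s): a user switches early only if the quality gain, discounted by the probability that a merger would deliver the improvement anyway, exceeds the switching cost.

```python
def adopts_early(quality_gain, switching_cost, merger_prob):
    """Under the assumed behavior, switching pays off only in the
    no-merger state, so the expected gain shrinks as merger_prob rises."""
    expected_gain = (1 - merger_prob) * quality_gain
    return expected_gain > switching_cost

# With a quality gain of 2 and a switching cost of 1:
print(adopts_early(2.0, 1.0, merger_prob=0.0))  # no merger expected: adopt
print(adopts_early(2.0, 1.0, merger_prob=0.8))  # merger likely: stay put
```

The model’s results follow mechanically once this behavior is assumed; the open question, as discussed below, is whether real users actually behave this way.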
Although this key behavioral assumption drives the results of the theoretical model, the paper presents no evidence that it holds in real-world settings. Admittedly, the paper does present evidence of reduced venture-capital investment after mergers involving large tech firms. But even on its own terms, that evidence simply does not support the authors’ behavioral assumption.
And this is no isolated example. Over the past couple of years, several scholars have called for more muscular antitrust intervention in networked industries. A common theme is that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in, thus raising barriers to entry for potential rivals (here, here, here).
But there are also countless counterexamples, where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.
Zoom is one of the most salient instances. As I have written previously:
To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.
Along similar lines, Geoffrey Manne and Alec Stapp have put forward a multitude of other examples, including the demise of Yahoo, the disruption of early instant-messaging applications and websites, and MySpace’s rapid decline. In all these cases, outcomes do not match the predictions of theoretical models.
More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and its powerful algorithm are the most likely explanations for its success.
While these developments certainly do not disprove network effects theory, they eviscerate the common belief in antitrust circles that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. As in the previous examples, the question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet this question is systematically omitted from most policy discussions.
In Conclusion
My argument is not that models are without value. To the contrary, framing problems in economic terms—and simplifying them in ways that make them cognizable—enables scholars and policymakers to better understand where market failures might arise, and how these problems can be anticipated and solved by private actors. In other words, models alone cannot tell us that markets will fail, but they can direct inquiries and help us to understand why firms behave the way they do, and why markets (including digital ones) are organized in a given way.
In that respect, both the theoretical and empirical research cited throughout this post offer valuable insights for today’s policymakers.
For a start, as Ronald Coase argued in what is perhaps his most famous work, externalities (and market failure more generally) are a function of transaction costs. When these are low relative to the value of a good, market failures are unlikely. This is perhaps clearest in the “Fable of the Bees” example: given bees’ short foraging range, there were ultimately few real-world obstacles to writing contracts that internalized the mutual benefits of bees and orchards.
Perhaps more importantly, economic research sheds light on behavior that might otherwise be seen as anticompetitive. The rules and norms that bind farming/beekeeping communities, as well as users of common pool resources, could easily be analyzed as a cartel by naïve antitrust authorities. Yet externality theory suggests they play a key role in preventing market failure.
Along similar lines, mergers and acquisitions (as well as vertical integration, more generally) can reduce opportunism and other externalities that might otherwise undermine collaboration between firms (here, here and here). And much of the same is true for certain types of unilateral behavior. Tying video games to consoles (and pricing the console below cost) can help entrants overcome network externalities that might otherwise shield incumbents. Likewise, Google tying its proprietary apps to the open source Android operating system arguably enabled it to earn a return on its investments, thus overcoming the externality problem that plagues open source software.
All of this raises a tantalizing prospect that deserves far more attention than it is currently given in policy circles: authorities around the world are seeking to regulate the tech space. Draft legislation has notably been tabled in the United States, European Union and the United Kingdom. These draft bills would all make it harder for large tech firms to implement various economic hierarchies, including mergers and certain contractual arrangements.
This is highly paradoxical. If digital markets are indeed plagued by network externalities and high transaction costs, as critics allege, then preventing firms from adopting complex hierarchies—which have traditionally been seen as a way to solve externalities—is at least as likely to exacerbate those problems as to solve them. In other words, like the economists of old cited above, today’s policymakers appear to be focusing too heavily on simple models that predict market failure, and far too little on the mechanisms that firms have put in place to thrive within this complex environment.
The bigger picture is that far more circumspection is required when using theoretical models in real-world policy settings. Indeed, as Harold Demsetz famously put it, the purpose of normative economics is not so much to identify market failures, but to help policymakers determine which of several alternative institutions will deliver the best outcomes for consumers:
This nirvana approach differs considerably from a comparative institution approach in which the relevant choice is between alternative real institutional arrangements. In practice, those who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient. Users of the comparative institution approach attempt to assess which alternative real institutional arrangement seems best able to cope with the economic problem […].
Amazingly enough, at a time when legislative proposals for new antitrust restrictions are rapidly multiplying—see the Competition and Antitrust Law Enforcement Reform Act (CALERA), for example—Congress is also seriously considering granting antitrust immunity to a price-fixing cartel among members of the news media, thereby authorizing what the late Justice Antonin Scalia termed “the supreme evil of antitrust: collusion.” What accounts for this bizarre development?
Discussion
The antitrust exemption in question, embodied in the Journalism Competition and Preservation Act of 2021, was introduced March 10 simultaneously in the U.S. House and Senate. The press release announcing the bill’s introduction portrayed it as a “good government” effort to help struggling newspapers in their negotiations with large digital platforms, and thereby strengthen American democracy:
We must enable news organizations to negotiate on a level playing field with the big tech companies if we want to preserve a strong and independent press[.] …
A strong, diverse, free press is critical for any successful democracy. …
Nearly 90 percent of Americans now get news while on a smartphone, computer, or tablet, according to a Pew Research Center survey conducted last year, dwarfing the number of Americans who get news via television, radio, or print media. Facebook and Google now account for the vast majority of online referrals to news sources, with the two companies also enjoying control of a majority of the online advertising market. This digital ad duopoly has directly contributed to layoffs and consolidation in the news industry, particularly for local news.
This legislation would address this imbalance by providing a safe harbor from antitrust laws so publishers can band together to negotiate with large platforms. It provides a 48-month window for companies to negotiate fair terms that would flow subscription and advertising dollars back to publishers, while protecting and preserving Americans’ right to access quality news. These negotiations would strictly benefit Americans and news publishers at-large; not just one or a few publishers.
The Journalism Competition and Preservation Act only allows coordination by news publishers if it (1) directly relates to the quality, accuracy, attribution or branding, and interoperability of news; (2) benefits the entire industry, rather than just a few publishers, and are non-discriminatory to other news publishers; and (3) is directly related to and reasonably necessary for these negotiations.
Lurking behind this public-spirited rhetoric, however, is the specter of special interest rent seeking by powerful media groups, as discussed in an insightful article by Thom Lambert. The newspaper industry is indeed struggling, but that is true overseas as well as in the United States. Competition from internet websites has greatly reduced revenues from classified and non-classified advertising. As Lambert notes, in “light of the challenges the internet has created for their advertising-focused funding model, newspapers have sought to employ the government’s coercive power to increase their revenues.”
In particular, media groups have successfully lobbied various foreign governments to impose rules requiring that Google and Facebook pay newspapers licensing fees to display content. The Australian government went even further by mandating that digital platforms share their advertising revenue with news publishers and give the publishers advance notice of any algorithm changes that could affect page rankings and displays. Media rent-seeking efforts took a different form in the United States, as Lambert explains (citations omitted):
In the United States, news publishers have sought to extract rents from digital platforms by lobbying for an exemption from the antitrust laws. Their efforts culminated in the introduction of the Journalism Competition and Preservation Act of 2018. According to a press release announcing the bill, it would allow “small publishers to band together to negotiate with dominant online platforms to improve the access to and the quality of news online.” In reality, the bill would create a four-year safe harbor for “any print or digital news organization” to jointly negotiate terms of trade with Google and Facebook. It would not apply merely to “small publishers” but would instead immunize collusive conduct by such major conglomerates as Murdoch’s News Corporation, the Walt Disney Corporation, the New York Times, Gannet Company, Bloomberg, Viacom, AT&T, and the Fox Corporation. The bill would permit news organizations to fix prices charged to digital platforms as long as negotiations with the platforms were not limited to price, were not discriminatory toward similarly situated news organizations, and somehow related to “the quality, accuracy, attribution or branding, and interoperability of news.” Given the ease of meeting that test—since news organizations could always claim that higher payments were necessary to ensure journalistic quality—the bill would enable news publishers in the United States to extract rents via collusion rather than via direct government coercion, as in Australia.
The 2021 version of the JCPA is nearly identical to the 2018 version discussed by Thom. The only substantive change is that the 2021 version strengthens the pro-cartel coalition by adding broadcasters (it applies to “any print, broadcast, or digital news organization”). While the JCPA plainly targets Facebook and Google (“online content distributors” with “not fewer than 1,000,000,000 monthly active users, in the aggregate, on its website”), Microsoft President Brad Smith noted in a March 12 House Antitrust Subcommittee hearing on the bill that his company would also come under its collective-bargaining terms. Other online distributors could eventually become subject to the proposed law as well.
Purported justifications for the proposal were skillfully skewered by John Yun in a 2019 article on the substantively identical 2018 JCPA. Yun makes several salient points. First, the bill clearly shields price fixing. Second, the claim that all news organizations (in particular, small newspapers) would receive the same benefit from the bill rings hollow. The bill’s requirement that negotiations be “nondiscriminatory as to similarly situated news content creators” (emphasis added) would allow the cartel to negotiate different terms of trade for different “tiers” of organizations. Thus The New York Times and The Washington Post, say, might be part of a top tier getting the most favorable terms of trade. Third, the evidence does not support the assertion that Facebook and Google are monopolistic gateways for news outlets.
Yun concludes by summarizing the case against this legislation (citations omitted):
Put simply, the impact of the bill is to legalize a media cartel. The bill expressly allows the cartel to fix the price and set the terms of trade for all market participants. The clear goal is to transfer surplus from online platforms to news organizations, which will likely result in higher content costs for these platforms, as well as provisions that will stifle the ability to innovate. In turn, this could negatively impact quality for the users of these platforms.
Furthermore, a stated goal of the bill is to promote “quality” news and to “highlight trusted brands.” These are usually antitrust code words for favoring one group, e.g., those that are part of the News Media Alliance, while foreclosing others who are not “similarly situated.” What about the non-discrimination clause? Will it protect non-members from foreclosure? Again, a careful reading of the bill raises serious questions as to whether it will actually offer protection. The bill only ensures that the terms of the negotiations are available to all “similarly situated” news organizations. It is very easy to carve out provisions that would favor top tier members of the media cartel.
Additionally, an unintended consequence of antitrust exemptions can be that it makes the beneficiaries lax by insulating them from market competition and, ultimately, can harm the industry by delaying inevitable and difficult, but necessary, choices. There is evidence that this is what occurred with the Newspaper Preservation Act of 1970, which provided antitrust exemption to geographically proximate newspapers for joint operations.
There are very good reasons why antitrust jurisprudence reserves per se condemnation to the most egregious anticompetitive acts including the formation of cartels. Legislative attempts to circumvent the federal antitrust laws should be reserved solely for the most compelling justifications. There is little evidence that this level of justification has been met in this present circumstance.
Conclusion
Statutory exemptions to the antitrust laws have long been disfavored, and with good reason. As I explained in my 2005 testimony before the Antitrust Modernization Commission, such exemptions tend to foster welfare-reducing output restrictions. Also, empirical research suggests that industries sheltered from competition perform less well than those subject to competitive forces. In short, both economic theory and real-world data support a standard that requires proponents of an exemption to bear the burden of demonstrating that the exemption will benefit consumers.
This conclusion applies most strongly when an exemption would specifically authorize hard-core price fixing, as is the case with the JCPA. What’s more, the bill’s proponents have not borne the burden of justifying their pro-cartel proposal in economic welfare terms—quite the opposite. Lambert’s analysis exposes this legislation as the product of special interest rent seeking that has nothing to do with consumer welfare. And Yun’s evaluation of the bill clarifies that, not only would the JCPA foster harmful collusive pricing, but it would also harm its beneficiaries by allowing them to avoid taking steps to modernize and render themselves more efficient competitors.
In sum, though the JCPA claims to fly a “public interest” flag, it is just another private interest bill promoted by well-organized rent seekers that would harm consumer welfare and undermine innovation.
The U.S. Supreme Court will hear a challenge next month to the 9th U.S. Circuit Court of Appeals’ 2020 decision in NCAA v. Alston. Alston affirmed a district court decision that enjoined the National Collegiate Athletic Association (NCAA) from enforcing rules that restrict the education-related benefits its member institutions may offer students who play Football Bowl Subdivision football and Division I basketball.
This will be the first Supreme Court review of NCAA practices since NCAA v. Board of Regents in 1984, which applied the antitrust rule of reason in striking down the NCAA’s “artificial limit” on the quantity of televised college football games, but also recognized that “this case involves an industry in which horizontal restraints on competition are essential if the product [intercollegiate athletic contests] is to be available at all.” Significantly, in commenting on the nature of appropriate, competition-enhancing NCAA restrictions, the court in Board of Regents stated that:
[I]n order to preserve the character and quality of the [NCAA] ‘product,’ athletes must not be paid, must be required to attend class, and the like. And the integrity of the ‘product’ cannot be preserved except by mutual agreement; if an institution adopted such restrictions unilaterally, its effectiveness as a competitor on the playing field might soon be destroyed. Thus, the NCAA plays a vital role in enabling college football to preserve its character, and as a result enables a product to be marketed which might otherwise be unavailable. In performing this role, its actions widen consumer choice – not only the choices available to sports fans but also those available to athletes – and hence can be viewed as procompetitive. [footnote citation omitted]
One’s view of the Alston case may be shaped by one’s priors regarding the true nature of the NCAA. Is the NCAA a benevolent Dr. Jekyll, which seeks to promote amateurism and fairness in college sports to the benefit of student athletes and the general public? Or is its benevolent façade a charade? Although perhaps a force for good in its early years, has the NCAA transformed itself into an evil Mr. Hyde, using restrictive rules to maintain welfare-inimical monopoly power as a seller cartel of athletic events and a monopsony employer cartel that suppresses athletes’ wages? I will return to this question—and its bearing on the appropriate resolution of this legal dispute—after addressing key contentions by both sides in Alston.
Summarizing the Arguments in NCAA v. Alston
The Alston class-action case followed in the wake of the 9th Circuit’s decision in O’Bannon v. NCAA (2015). O’Bannon affirmed in large part a district court’s ruling that the NCAA illegally restrained trade, in violation of Section 1 of the Sherman Act, by preventing football and men’s basketball players from receiving compensation for the use of their names, images, and likenesses. It also affirmed the district court’s injunction insofar as it required the NCAA to implement the less restrictive alternative of permitting athletic scholarships for the full cost of attendance. (I commented approvingly on the 9th Circuit’s decision in a previous TOTM post.)
Subsequent antitrust actions by student-athletes were consolidated in the district court. After a bench trial, the district court entered judgment for the student-athletes, concluding in part that NCAA limits on education-related benefits were unreasonable restraints of trade. It enjoined those limits but declined to hold that other NCAA limits on compensation unrelated to education likewise violated Section 1.
In May 2020, a 9th Circuit panel held that the district court properly applied the three-step Sherman Act Section 1 rule of reason analysis in determining that the enjoined rules were unlawful restraints of trade.
First, the panel concluded that the student-athletes carried their burden at step one by showing that the restraints produced significant anticompetitive effects within the relevant market for student-athletes’ labor.
At step two, the NCAA was required to come forward with evidence of the restraints’ procompetitive effects. The panel endorsed the district court’s conclusion that only some of the challenged NCAA rules served the procompetitive purpose of preserving amateurism and thus improving consumer choice by maintaining a distinction between college and professional sports. Those rules were limits on above-cost-of-attendance payments unrelated to education, the cost-of-attendance cap on athletic scholarships, and certain restrictions on cash academic or graduation awards and incentives. The panel affirmed the district court’s conclusion that the remaining rules—restricting non-cash education-related benefits—did nothing to foster or preserve consumer demand. The panel held that the record amply supported the findings of the district court, which relied on demand analysis, survey evidence, and NCAA testimony.
The panel also affirmed the district court’s conclusion that, at step three, the student-athletes showed that any legitimate objectives could be achieved in a substantially less restrictive manner. The district court identified a less restrictive alternative of prohibiting the NCAA from capping certain education-related benefits and limiting academic or graduation awards or incentives below the maximum amount that an individual athlete may receive in athletic participation awards, while permitting individual conferences to set limits on education-related benefits. The panel held that the district court did not clearly err in determining that this alternative would be virtually as effective in serving the procompetitive purposes of the NCAA’s current rules and could be implemented without significantly increased cost.
Finally, the panel held that the district court’s injunction was not impermissibly vague and did not usurp the NCAA’s role as the superintendent of college sports. The panel also declined to broaden the injunction to include all NCAA compensation limits, including those on payments untethered to education. The panel concluded that the district court struck the right balance in crafting a remedy that prevented anticompetitive harm to student-athletes while serving the procompetitive purpose of preserving the popularity of college sports.
The NCAA appealed to the Supreme Court, which granted the NCAA’s petition for certiorari Dec. 16, 2020. The NCAA contends that under Board of Regents, the NCAA rules regarding student-athlete compensation are reasonably related to preserving amateurism in college sports, are procompetitive, and should have been upheld after a short deferential review, rather than the full three-step rule of reason. According to the NCAA’s petition for certiorari, even under the detailed rule of reason, the 9th Circuit’s decision was defective. Specifically:
The Ninth Circuit … relieved plaintiffs of their burden to prove that the challenged rules unreasonably restrain trade, instead placing a “heavy burden” on the NCAA … to prove that each category of its rules is procompetitive and that an alternative compensation regime created by the district court could not preserve the procompetitive distinction between college and professional sports. That alternative regime—under which the NCAA must permit student-athletes to receive unlimited “education-related benefits,” including post-eligibility internships that pay unlimited amounts in cash and can be used for recruiting or retention—will vitiate the distinction between college and professional sports. And via the permanent injunction the Ninth Circuit upheld, the alternative regime will also effectively make a single judge in California the superintendent of a significant component of college sports. The Ninth Circuit’s approval of this judicial micromanagement of the NCAA denies the NCAA the latitude this Court has said it needs, and endorses unduly stringent scrutiny of agreements that define the central features of sports leagues’ and other joint ventures’ products. The decision thus twists the rule of reason into a tool to punish (and thereby deter) procompetitive activity.
Two amicus briefs support the NCAA’s position. One, filed on behalf of “antitrust law and business school professors,” stresses that the 9th Circuit’s decision misapplied the third step of the rule of reason by requiring defendants to show that their conduct was the least restrictive means available (instead of requiring plaintiff to prove the existence of an equally effective but less restrictive rule). More broadly:
[This approach] permits antitrust plaintiffs to commandeer the judiciary and use it to regulate and modify routine business conduct, so long as that conduct is not the least restrictive conduct imaginable by a plaintiff’s attorney or district judge. In turn, the risk that procompetitive ventures may be deemed unlawful and subject to treble damages liability simply because they could have operated in a marginally less restrictive manner is likely to chill beneficial business conduct.
A second brief, filed on behalf of “antitrust economists,” emphasizes that the NCAA has adapted the rules governing design of its product (college amateur sports) over time to meet consumer demand and to prevent colleges from pursuing their own interests (such as “pay to play”) in ways that would conflict with the overall procompetitive aims of the collaboration. While acknowledging that antitrust courts are free to scrutinize collaborations’ rules that go beyond the design of the product itself (such as the NCAA’s broadcast restrictions), the brief cites key Supreme Court decisions (NCAA v. Board of Regents and Texaco Inc. v. Dagher) for the proposition that courts should stay out of restrictions on the core activity of the joint venture itself. It then summarizes the policy justification for such judicial non-interference:
Permitting judges and juries to apply the Sherman Act to such decisions [regarding core joint venture activity] will inevitably create uncertainty that undermines innovation and investment incentives across any number of industries and collaborative ventures. In these circumstances, antitrust courts would be making public policy regarding the desirability of a product with particular features, as opposed to ferreting out agreements or unilateral conduct that restricts output, raises prices, or reduces innovation to the detriment of consumers.
In their brief opposing certiorari, counsel for Alston take the position that, in reality, the NCAA is seeking a special antitrust exemption for its competitively restrictive conduct—an issue that should be determined by Congress, not courts. Their brief notes that the concept of “amateurism” has changed over the years and that some increases in athletes’ compensation have been allowed over time. Thus, in the context of big-time college football and basketball:
[A]mateurism is little more than a pretext. It is certainly not a Sherman Act concept, much less a get-out-of-jail-free card that insulates any particular set of NCAA restraints from scrutiny.
Who Has the Better Case?
The NCAA’s position is a strong one. Association rules touching on compensation for college athletes are part of the core nature of the NCAA’s “amateur sports” product, as the Supreme Court stated (albeit in dictum) in Board of Regents. Furthermore, subsequent Supreme Court jurisprudence (see 2010’s American Needle Inc. v. NFL) has eschewed second-guessing of joint-venture product design decisions—which, in the case of the NCAA, involve formulating the restrictions (such as whether and how to compensate athletes) that are deemed key to defining amateurism.
The Alston amicus curiae briefs ably set forth the strong policy considerations that support this approach, centered on preserving incentives for the development of efficient welfare-generating joint ventures. Requiring joint venturers to provide “least restrictive means” justifications for design decisions discourages innovative activity and generates costly uncertainty for joint-venture planners, to the detriment of producers and consumers (who benefit from joint-venture innovations) alike. Claims by respondent Alston that the NCAA is in effect seeking to obtain a judicial antitrust exemption miss the mark; rather, the NCAA merely appears to be arguing that antitrust should be limited to evaluating restrictions that fall outside the scope of the association’s core mission. Significantly, as discussed in the NCAA’s brief petitioning for certiorari, decisions by the 3rd, 5th, and 7th U.S. Circuit Courts of Appeals have treated NCAA bylaws going to the definition of amateurism in college sports as presumptively procompetitive and not subject to close scrutiny. Thus, based on the arguments set forth by litigants, a Supreme Court victory for the NCAA in Alston would appear sound as a matter of law and economics.
There may, however, be a catch. Some popular commentary has portrayed the NCAA as a malign organization that benefits affluent universities (and their well-compensated coaches) while allowing member colleges to exploit athletes by denying them fair pay—in effect, an institutional Mr. Hyde.
What’s more, consistent with the Mr. Hyde story, a number of major free-market economists (including, among others, Nobel laureate Gary Becker) have portrayed the NCAA as an anticompetitive monopsony employer cartel that has suppressed the labor market demand for student athletes, thereby limiting their wages, fringe benefits, and employment opportunities. (In a similar vein, the NCAA is seen as a monopolist seller cartel in the market for athletic events.) On this view, promoting the public good of amateurism (the Dr. Jekyll story) is merely a pretextual façade (a cover story, if you will) for welfare-inimical naked cartel conduct. If one buys this alternative story, all core product restrictions adopted by the NCAA should be fair game for close antitrust scrutiny—and thus, the 9th Circuit’s decision in Alston merits affirmation as a matter of antitrust policy.
There is, however, a persuasive response to the cartel story, set forth in Richard McKenzie and Dwight Lee’s essay “The NCAA: A Case Study of the Misuse of the Monopsony and Monopoly Models” (Chapter 8 of their 2008 book “In Defense of Monopoly: How Market Power Fosters Creative Production”). McKenzie and Lee examine the evidence bearing on economists’ monopsony cartel assertions (and, in particular, the evidence presented in a 1992 study by Arthur Fleischer, Brian Goff, and Richard Tollison) and find it wanting:
Our analysis leads inexorably to the conclusion that the conventional economic wisdom regarding the intent and consequences of NCAA restrictions is hardly as solid, on conceptual grounds, as the NCAA critics assert, often without citing relevant court cases. We have argued that the conventional wisdom is wrong in suggesting that, as a general proposition,
• college athletes are materially “underpaid” and are “exploited”;
• cheating on NCAA rules is prima facie evidence of a cartel intending to restrict employment and suppress athletes’ wages;
• barriers to entry ensure the continuance of the NCAA’s monopsony powers over athletes.
No such entry barriers (other than normal organizational costs, which need to be covered to meet any known efficiency test for new entrants) exist. In addition, the Supreme Court’s decision in NCAA indicates that the NCAA would be unable to prevent through the courts the emergence of competing athletic associations. The actual existence of other athletic associations indicates that entry would be not only possible but also practical if athletes’ wages were materially suppressed.
Conventional economic analysis of NCAA rules that we have challenged also is misleading in suggesting that collegiate sports would necessarily be improved if the NCAA were denied the authority to regulate the payment of athletes. Given the absence of legal barriers to entry into the athletic association market, it appears that if athletes’ wages were materially suppressed (or as grossly suppressed as the critics claim), alternative sports associations would form or expand, and the NCAA would be unable to maintain its presumed monopsony market position. The incentive for colleges and universities to break with the NCAA would be overwhelming.
From our interpretation of NCAA rules, it does not follow necessarily that athletes should not receive any more compensation than they do currently. Clearly, market conditions change, and NCAA rules often must be adjusted to accommodate those changes. In the absence of entry barriers, we can expect the NCAA to adjust, as it has adjusted, in a competitive manner its rules of play, recruitment, and retention of athletes. Our central point is that contrary to the proponents of the monopsony thesis, the collegiate athletic market is subject to the self-correcting mechanism of market pressures. We have reason to believe that the proposed extension of the antitrust enforcement to the NCAA rules or proposed changes in sports law explicitly or implicitly recommended by the proponents of the cartel thesis would be not only unnecessary but also counterproductive.
Although a closer examination of McKenzie and Lee’s critique of the economists’ cartel story is beyond the scope of this comment, I find it compelling.
Conclusion
In sum, the claim that antitrust may properly be applied to combat the alleged “exploitation” of college athletes by NCAA compensation regulations does not stand up to scrutiny. The NCAA’s rules that define the scope of amateurism may be imperfect, but there is no reason to think that empowering federal judges to second guess and reformulate NCAA athletic compensation rules would yield a more socially beneficial (let alone optimal) outcome. (Believing that the federal judiciary can optimally reengineer core NCAA amateurism rules is a prime example of the Nirvana fallacy at work.) Furthermore, a Supreme Court decision affirming the 9th Circuit could do broad mischief by undermining case law that has accorded joint venturers substantial latitude to design the core features of their collective enterprise without judicial second-guessing. It is to be hoped that the Supreme Court will do the right thing and strongly reaffirm the NCAA’s authority to design and reformulate its core athletic amateurism product as it sees fit.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by John Newman, Associate Professor, University of Miami School of Law; Advisory Board Member, American Antitrust Institute; Affiliated Fellow, Thurman Arnold Project, Yale; Former Trial Attorney, DOJ Antitrust Division.]
Cooperation is the basis of productivity. The war of all against all is not a good model for any economy.
Who said it—a rose-emoji Twitter Marxist, or a card-carrying member of the laissez faire Chicago School of economics? If you guessed the latter, you’d be right. Frank Easterbrook penned these words in an antitrust decision written shortly after he left the University of Chicago to become a federal judge. Easterbrook’s opinion, now a textbook staple, wholeheartedly endorsed a cooperative agreement between two business owners not to compete with one another.
But other enforcers and judges have taken a far less favorable view of cooperation—particularly when workers are the ones cooperating. A few years ago, in an increasingly rare example of interagency agreement, the DOJ and FTC teamed up to argue against a Seattle ordinance that would have permitted drivers to cooperatively bargain with Uber and Lyft. Why the hostility from enforcers? “Competition is the lynchpin of the U.S. economy,” explained Acting FTC Chairman Maureen Ohlhausen.
Should workers be able to cooperate to counter concentrated corporate power? Or is bellum omnium contra omnes truly the “lynchpin” of our industrial policy?
The coronavirus pandemic has thrown this question into sharper relief than ever before. Low-income workers—many of them classified as independent contractors—have launched multiple coordinated boycotts in an effort to improve working conditions. The antitrust agencies, once quick to condemn similar actions by Uber and Lyft drivers, have fallen conspicuously silent.
Why? Why should workers be allowed to negotiate cooperatively for a healthier workplace, yet not for a living wage? In a society largely organized around paying for basic social services, money is health—and even life itself.
Unraveling the Double Standard
Antitrust law, like the rest of industrial policy, involves difficult questions over which members of society can cooperate with one another. These laws allocate “coordination rights.” Before the coronavirus pandemic, industrial policy seemed generally to favor allocating these rights to corporations, while simultaneously denying them to workers and class-action plaintiffs. But, as the antitrust agencies’ apparent about-face on workplace organizing suggests, the times may be a-changing.
Some of today’s most existential threats to societal welfare—pandemics, climate change, pollution—will best be addressed via cooperation, not atomistic rivalry. On-the-ground stakeholders certainly seem to think so. Absent a coherent, unified federal policy to deal with the coronavirus pandemic, state governors have reportedly begun to consider cooperating to provide a coordinated regional response. Last year, a group of auto manufacturers voluntarily agreed to increase fuel-efficiency standards and reduce emissions. They did attract an antitrust investigation, but it was subsequently dropped—a triumph for pro-social cooperation. It was perhaps also a reminder that corporations, each of which is itself a cooperative enterprise, can still play the role they were historically assigned: serving the public interest.
Going forward, policy-makers should give careful thought to how their actions and inactions encourage or stifle cooperation. Judge Easterbrook praised an agreement between business owners because it “promoted enterprise.” What counts as legitimate “enterprise,” though, is an eminently contestable proposition.
The federal antitrust agencies’ anti-worker stance in particular seems ripe for revisiting. Its modern origins date back to the 1980s, when President Reagan’s FTC challenged a coordinated boycott among D.C.-area criminal-defense attorneys. The boycott was a strike of sorts, intended to pressure the city into increasing court-appointed fees to a level that would allow for adequate representation. (The mayor’s office, despite being responsible for paying the fees, actually encouraged the boycott.) As the sole buyer of this particular type of service, the government wielded substantial power in the marketplace. A coordinated front was needed to counter it. Nonetheless, the FTC condemned the attorneys’ strike as per se illegal—a label supposedly reserved for the worst possible anticompetitive behavior—and the U.S. Supreme Court ultimately agreed.
Reviving Cooperation
In the short run, the federal antitrust agencies should formally reverse this anti-labor course. When workers cooperate in an attempt to counter employers’ power, antitrust intervention is, at best, a misallocation of scarce agency resources. Surely there are (much) bigger fish to fry. At worst, hostility to such cooperation directly contravenes Congress’ vision for the antitrust laws. These laws were intended to protect workers from concentrated downstream power, not to force their exposure to it—as the federal agencies themselves have recognized elsewhere.
In the longer run, congressional action may be needed. Supreme Court antitrust case law condemning worker coordination should be legislatively overruled. And, in a sharp departure from the current trend, we should be making it easier, not harder, for workers to form cooperative unions. Capital can be combined into a legal corporation in just a few hours, while it takes more than a month to create an effective labor union. None of this is to say that competition should be abandoned—much the opposite, in fact. A market that pits individual workers against highly concentrated cooperative entities is hardly “competitive”.
Thinking more broadly, antitrust and industrial policy may need to allow—or even encourage—cooperation in a number of sectors. Automakers’ and other manufacturers’ voluntary efforts to fight climate change should be lauded and protected, not investigated. Where cooperation is already shielded and even incentivized, as is the case with corporations, affirmative steps may be needed to ensure that the public interest is being furthered.
The current moment is without precedent. Industrial policy is destined, and has already begun, to change. Although competition has its place, it cannot serve as the sole lynchpin for a just economy. Now more than ever, a revival of cooperation is needed.
Last Thursday and Friday, Truth on the Market hosted a symposium analyzing the Draft Vertical Merger Guidelines from the FTC and DOJ. The relatively short draft guidelines provided ample opportunity for discussion, as evidenced by the stellar roster of authors thoughtfully weighing in on the topic.
We want to thank all of the participants for their excellent contributions. All of the posts are collected here, and below I briefly summarize each in turn.
Hovenkamp views the draft guidelines as a largely positive development for the state of antitrust enforcement. Beginning with an observation — as was common among participants in the symposium — that the existing guidelines are outdated, Hovenkamp believes that the inclusion of 20% thresholds for market share and related product use represent a reasonable middle position between the extremes of zealous antitrust enforcement and non-enforcement.
Hovenkamp also observes that, despite their relative brevity, the draft guidelines contain much by way of reference to the 2010 Horizontal Merger Guidelines. Ultimately Hovenkamp believes that, despite the relative lack of detail in some respects, the draft guidelines are an important step in elaborating the “economic approaches that the agencies take toward merger analysis, one in which direct estimates play a larger role, with a comparatively reduced role for more traditional approaches depending on market definition and market share.”
Finally, he notes that, while the draft guidelines leave the current burden of proof in the hands of challengers, the presumption that vertical mergers are “invariably benign, particularly in highly concentrated markets or where the products in question are differentiated” has been weakened.
Nuechterlein finds it hard to square elements of the draft vertical merger guidelines with both the past forty years of US enforcement policy and the empirical work confirming the largely beneficial nature of vertical mergers. Relatedly, the draft guidelines lack genuine limiting principles when describing speculative theories of harm. Without better specificity, the draft guidelines will do little as a source of practical guidance.
One criticism from Nuechterlein is that the draft guidelines blur the distinction between “harm to competition” and “harm to competitors” by, for example, focusing on changes to rivals’ access to inputs and lost sales.
Nuechterlein also takes issue with what he characterizes as the “arbitrarily low” 20 percent thresholds. In particular, he argues that linking the two separate thresholds (relevant market and related product) yields too small a set of situations in which firms might qualify for the safe harbor. As drafted, the linkage does more to preserve the agencies’ discretion than to provide clarity to firms and consumers.
Kolasky and Giordano agree that the 1984 guidelines are badly outdated, but they argue that the draft guidelines fail to recognize important efficiencies and fail to set sufficiently clear standards for challenging vertical mergers.
By contrast, they find that the 2008 EU vertical merger guidelines provide much greater specificity; in some respects, the 1984 guidelines were better aligned with the EU approach than the new draft is. Losing that specificity is a step backward, and they accordingly recommend that the DOJ and FTC adopt the EU vertical merger guidelines as a model for the US.
To take one example, the draft guidelines lose some of the important economic distinctions between vertical and horizontal mergers and need clarification, particularly with respect to the burdens of proof for efficiencies. The EU guidelines also provide superior guidance on how to distinguish between a firm’s ability and its incentive to raise rivals’ costs.
Slade welcomes the new draft guidelines and finds them to be a good effort, if in need of some refinement. She believes the agencies were correct to defer to the 2010 Horizontal Merger Guidelines for the conceptual foundations of market definition and concentration, but argues that the 20 percent thresholds don’t reveal enough information. It would be helpful, she writes, “to have a list of factors that could be used to determine which mergers that fall below those thresholds are more likely to be investigated, and vice versa.”
Slade also takes issue with how the draft guidelines deal with the elimination of double marginalization (EDM). Although she does not believe that EDM should always be automatically assumed, the guidelines do not offer enough detail to determine the cases in which it should not be.
For Slade, the guidelines also fail to include a wide range of efficiencies that can arise from vertical integration. For instance “organizational efficiencies, such as mitigating contracting, holdup, and renegotiation costs, facilitating specific investments in physical and human capital, and providing appropriate incentives within firms” are important considerations that the draft guidelines should acknowledge.
Slade also advises caution when simulating vertical mergers. Such simulations are much more complex than horizontal ones, which means that “vertical merger simulations have to be carefully crafted to fit the markets that are susceptible to foreclosure and that a one-size-fits-all model can be very misleading.”
Wright et al. commend the agencies for highlighting important analytical factors while avoiding “untested merger assessment tools or theories of harm.”
They do, however, offer some points for improvement. First, EDM should be clearly incorporated into the unilateral effects analysis. The way the draft guidelines are currently structured improperly leaves the role of EDM in a sort of “limbo” between effects analysis and efficiencies analysis that could confuse courts and lead to an incomplete and unbalanced assessment of unilateral effects.
Second, Wright et al. also argue that the 20 percent thresholds in the draft guidelines do not have any basis in evidence or theory, nor are they of “any particular importance to predicting competitive effects.”
Third, by abandoning the 1984 guidelines’ acknowledgement of the generally beneficial effects of vertical mergers, the draft guidelines reject the weight of modern antitrust literature and fail to recognize “the empirical reality that vertical relationships are generally procompetitive or neutral.”
Finally, the draft guidelines should be more specific in recognizing that there are transaction costs associated with integration via contract. Properly conceived, the guidelines should more readily recognize that efficiencies arising from integration via merger are cognizable and merger specific.
A key criticism offered by Werden and Froeb in their post is that “the proposed Guidelines do not set out conditions necessary or sufficient for the agencies to conclude that a merger likely would substantially lessen competition.” The draft guidelines refer to factors the agencies may consider as part of their deliberation, but ultimately do not give an indication as to how those different factors will be weighed.
Further, Werden and Froeb believe that the draft guidelines fail even to communicate how the agencies generally view the competitive process — in particular, how the agencies view the critical differences between horizontal and vertical mergers.
Jacobson and Edelson begin with an acknowledgement that the guidelines are outdated and that there is a dearth of useful case law, thus leading to a need for clarified rules. Unfortunately, they do not feel that the current draft guidelines do nearly enough to satisfy this need for clarification.
Generally positive about the 20% thresholds in the draft guidelines, Jacobson and Edelson nonetheless feel that this “loose safe harbor” leaves some problematic ambiguity. For example, the draft guidelines endorse a unilateral foreclosure theory of harm but leave unspecified what actually qualifies as a harm. And while the Baker Hughes burden-shifting framework is widely accepted, the guidelines fail to specify how burdens should be allocated in vertical merger cases.
The draft guidelines also miss an important opportunity to specify whether or not EDM should be presumed to exist in vertical mergers, and whether it should be presumptively credited as merger-specific.
Brennan’s post focused on what he referred to as “pure” vertical mergers that do not include concerns about expansion into upstream or downstream markets. Brennan notes the highly complex nature of speculative theories of vertical harms that can arise from vertical mergers. Consequently, he concludes that, with respect to blocking pure vertical mergers,
“[I]t is not clear that we are better off expending the resources to see whether something is bad, rather than accepting the cost of error from adopting imperfect rules — even rules that imply strict enforcement. Pure vertical merger may be an example of something that we might just want to leave be.”
Cernak’s post examines the absences and ambiguities in the draft guidelines as compared to the 1984 guidelines. He notes the absence of some theories of harm — for instance, the threat of regulatory evasion. He then points out the ambiguity in how the draft guidelines deal with pleading and proving EDM.
Specifically, the draft guidelines are unclear as to how EDM should be treated. Is EDM an affirmative defense, or is it a factor that agencies are required to include as part of their own analysis? In Cernak’s opinion, the agencies should be clearer on the point.
Fruits observes that the draft guidelines’ attempt to clarify how the Agencies think about mergers and competition actually demonstrates just how complex markets, related products, and dynamic competition are.
Fruits goes on to describe how the assumptions necessary to support the speculative theories of harm on which the draft guidelines may rely are vulnerable to change. Ultimately, relying on such theories and strong assumptions may make market definition of even “obvious” markets and products a fraught exercise that devolves into a battle of experts.
Pozen et al. believe that the draft guidelines inadvisably move the US away from accepted international standards. The 20 percent threshold in the draft guidelines is “arbitrarily low” given the generally procompetitive nature of vertical combinations.
Instead, DOJ and the FTC should consider following the approaches taken by the EU, Japan and Chile by favoring a 30 percent threshold for challenges along with a post-merger HHI measure below 2000.
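(For readers who don’t track these metrics daily: the Herfindahl-Hirschman Index is simply the sum of the squared market shares of every firm in the market, with shares expressed in percentage points, so it runs from near zero in a fragmented market up to 10,000 for a pure monopoly. A minimal illustration:

```python
# HHI: sum of squared market shares, with shares in percentage points.
# A pure monopoly (one firm at 100%) scores 100**2 = 10,000.
def hhi(shares_pct):
    return sum(s ** 2 for s in shares_pct)

# Five firms at 30/25/20/15/10 percent:
print(hhi([30, 25, 20, 15, 10]))  # 2250, just above the 2000 level cited above
```

For context, the 2010 Horizontal Merger Guidelines treat markets below 1,500 as unconcentrated and above 2,500 as highly concentrated, so the 2,000 figure favored by Pozen et al. sits between those bands.)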
Sher and McDonald describe how the draft vertical guidelines miss a valuable opportunity to clarify speculative theories of harm based on “potential competition.”
In particular, the draft guidelines should address the literature demonstrating that vertical acquisition of small tech firms by large tech firms is largely complementary and procompetitive. Large tech firms are good at process innovation, while smaller firms are good at product innovation, leading to specialization and the realization of efficiencies through acquisition.
Further, innovation in tech markets is driven by commercialization and exit strategy. Acquisition has become an important way for investors and startups to profit from their innovation. Vertical merger policy that is biased against vertical acquisition threatens this ecosystem and the draft guidelines should be updated to reflect this reality.
Rybnicek notes the common calls to withdraw the 1984 Non-Horizontal Merger Guidelines, but is skeptical that replacing them will be beneficial. In particular, he believes there are major flaws in the draft guidelines that would lead to suboptimal merger policy at the Agencies.
One concern is that the draft guidelines could easily lead to the impression that vertical mergers are as likely to lead to harm as horizontal mergers. But that is false and easily refuted by economic evidence and logic. By focusing on vertical transactions more than the evidence suggests is necessary, the Agencies will waste resources and spend less time pursuing enforcement of actually anticompetitive transactions.
Rybnicek also notes that, in addition to being economically unsound, the 20 percent threshold “safe harbor” will likely create a problematic “sufficient condition” for enforcement.
Rybnicek believes that the draft guidelines minimize the significant role of EDM and efficiencies by pointing to the 2010 Horizontal Merger Guidelines for analytical guidance. In the horizontal context, efficiencies are exceedingly difficult to prove, and it is unwarranted to apply the same skeptical treatment to efficiencies in the vertical merger context.
Ultimately, Rybnicek concludes that the draft guidelines do little to advance an understanding of how the agencies will look at a vertical transaction, while also undermining the economics and theory that have guided antitrust law.
White sees a gaping absence in the draft guidelines: they lack an adequate market definition paradigm. White notes that markets need to be defined in a way that permits a determination of market power (or not) post-merger, but the guidelines refrain from recommending a vertical-specific method for defining markets.
Instead, the draft guidelines point to the 2010 Horizontal Merger Guidelines for a market definition paradigm. Unfortunately, that paradigm is inapplicable in the vertical merger context. The way that markets are defined in the horizontal and vertical contexts is very different. There is a significant chance that an improperly drawn market definition based on the Horizontal Guidelines could understate the risk of harm from a given vertical merger.
Manne & Stout believe that there is a great deal of ambiguity in the proposed guidelines that could lead either to uncertainty as to how the agencies will exercise their discretion, or, more troublingly, could lead courts to take seriously speculative theories of harm.
Among these, Manne & Stout believe that the Agencies should specifically address the alleged equivalence of integration via contract and integration via merger. They need to either repudiate this theory, or else more fully explain the extremely complex considerations that factor into different integration decisions for different firms.
In particular, there is no reason to presume in any given situation that the outcome from contracting would be the same as from merging, even where both are notionally feasible. It would be a categorical mistake for the draft guidelines to permit an inference that simply because an integration could be achieved by contract, it follows that integration by merger deserves greater scrutiny per se.
A whole host of efficiency and non-efficiency related goals are involved in a choice of integration methods. But adopting a presumption against integration via merger necessarily leads to (1) an erroneous assumption that efficiencies are functionally achievable in both situations and (2) a more concerning creation of discretion in the hands of enforcers to discount the non-efficiency reasons for integration.
Therefore, the agencies should clarify in the draft guidelines that the mere possibility of integration via contract or the inability of merging parties to rigorously describe and quantify efficiencies does not condemn a proposed merger.
Manne & Stout begin by observing that, while the Agencies have the opportunity to enforce in either the case of merger or contract, defendants can frequently only realize efficiencies in the case of merger. Therefore, calling for a contract/merger equivalency amounts to a preference for more enforcement per se, and is less solicitous of concerns about loss of procompetitive arrangements. Moreover, Manne & Stout point out that there is currently no empirical basis for weighting enforcement so heavily against vertical mergers.
Manne & Stout further observe that vertical merger enforcement is more likely to thwart procompetitive than anticompetitive arrangements relative to the status quo ante because we lack fundamental knowledge about the effects of market structure and firm organization on innovation and dynamic competition.
Instead, the draft guidelines should adopt Williamson’s view of economic organizations: eschew the formal orthodox neoclassical economic lens in favor of organizational theory that focuses on complex contracts (including vertical mergers). Without this view, “We are more likely to miss it when mergers solve market inefficiencies, and more likely to see it when they impose static costs — even if the apparent costs actually represent a move from less efficient contractual arrangements to more efficient integration.”
Critically, Manne & Stout argue that the guidelines’ focus on market share thresholds leads to an overly narrow view of competition. Instead of looking at static market analyses, the Agencies should include a richer set of observations, including those that involve “organizational decisions made to facilitate the coordination of production and commercialization when they are dependent upon intangible assets.”
Ultimately Manne & Stout suggest that the draft guidelines should be clarified to guide the Agencies and courts away from applying inflexible, formalistic logic that will lead to suboptimal enforcement.
In our first post, we discussed the weaknesses of an important theoretical underpinning of efforts to expand vertical merger enforcement (including, possibly, the proposed guidelines): the contract/merger equivalency assumption.
In this post we discuss the implications of that assumption and some of the errors it leads to — including some incorporated into the proposed guidelines.
There is no theoretical or empirical justification for more vertical enforcement
Tim Brennan makes a fantastic and regularly overlooked point in his post: If it’s true, as many claim (see, e.g., Steve Salop), that firms can generally realize vertical efficiencies by contracting instead of merging, then it’s also true that they can realize anticompetitive outcomes the same way. While efficiencies have to be merger-specific in order to be relevant to the analysis, so too do harms. But where the assumption is that the outcomes of integration can generally be achieved by the “less-restrictive” means of contracting, that would apply as well to any potential harms, thus negating the transaction-specificity required for enforcement. As Dennis Carlton notes:
There is a symmetry between an evaluation of the harms and benefits of vertical integration. Each must be merger-specific to matter in an evaluation of the merger’s effects…. If transaction costs are low, then vertical integration creates neither benefits nor harms, since everything can be achieved by contract. If transaction costs exist to prevent the achievement of a benefit but not a harm (or vice-versa), then that must be accounted for in a calculation of the overall effect of a vertical merger. (Dennis Carlton, Transaction Costs and Competition Policy)
Of course, this also means that those (like us) who believe that it is not so easy to accomplish by contract what may be accomplished by merger must also consider the possibility that a proposed merger may be anticompetitive because it overcomes an impediment to achieving anticompetitive goals via contract.
There’s one important caveat, though: The potential harms that could arise from a vertical merger are the same as those that would be cognizable under Section 2 of the Sherman Act. Indeed, for a vertical merger to cause harm, it must be expected to result in conduct that would otherwise be illegal under Section 2. This means there is always the possibility of a second bite at the apple when it comes to thwarting anticompetitive conduct.
The same cannot be said of procompetitive conduct that can arise only through merger: if such a merger is erroneously prohibited before it even happens, those benefits are lost for good.
Interestingly, Salop himself — the foremost advocate today for enhanced vertical merger enforcement — recognizes the issue raised by Brennan:
Exclusionary harms and certain efficiency benefits also might be achieved with vertical contracts and agreements without the need for a vertical merger…. It [] might be argued that the absence of premerger exclusionary contracts implies that the merging firms lack the incentive to engage in conduct that would lead to harmful exclusionary effects. But anticompetitive vertical contracts may face the same types of impediments as procompetitive ones, and may also be deterred by potential Section 1 enforcement. Neither of these arguments thus justify a more or less intrusive vertical merger policy generally. Rather, they are factors that should be considered in analyzing individual mergers. (Salop & Culley, Potential Competitive Effects of Vertical Mergers)
In the same article, however, Salop also points to the reasons why it should be considered insufficient to leave enforcement to Sections 1 and 2, rather than addressing potential harms at their incipiency under Section 7 of the Clayton Act:
While relying solely on post-merger enforcement might have appealing simplicity, it obscures several key facts that favor immediate enforcement under Section 7.
The benefit of HSR review is to prevent the delays and remedial issues inherent in after-the-fact enforcement….
There may be severe problems in remedying the concern….
Section 1 and Section 2 legal standards are more permissive than Section 7 standards….
The agencies might well argue that anticompetitive post-merger conduct was caused by the merger agreement, so that it would be covered by Section 7….
All in all, failure to address these kinds of issues in the context of merger review could lead to significant consumer harm and underdeterrence.
The points are (mostly) well-taken. But they also essentially amount to a preference for more and tougher enforcement against vertical restraints than the judicial interpretations of Sections 1 & 2 currently countenance — a preference, in other words, for the use of Section 7 to bolster enforcement against vertical restraints of any sort (whether contractual or structural).
The problem with that, as others have pointed out in this symposium (see, e.g., Nuechterlein; Werden & Froeb; Wright, et al.), is that there’s simply no empirical basis for adopting a tougher stance against vertical restraints in the first place. Over and over again the empirical research shows that vertical restraints and vertical mergers are unlikely to cause anticompetitive harm:
In reviewing this literature, two features immediately stand out: First, there is a paucity of support for the proposition that vertical restraints/vertical integration are likely to harm consumers. . . . Second, a far greater number of studies found that the use of vertical restraints in the particular context studied improved welfare unambiguously. (Cooper et al., Vertical Restrictions and Antitrust Policy: What About the Evidence?)
[W]e did not have a particular conclusion in mind when we began to collect the evidence, and we… are therefore somewhat surprised at what the weight of the evidence is telling us. It says that, under most circumstances, profit-maximizing, vertical-integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view…. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked. (Francine Lafontaine & Margaret Slade, Vertical Integration and Firm Boundaries: The Evidence)
In sum, these papers from 2009-2018 continue to support the conclusions from Lafontaine & Slade (2007) and Cooper et al. (2005) that consumers mostly benefit from vertical integration. While vertical integration can certainly foreclose rivals in theory, there is only limited empirical evidence supporting that finding in real markets. (GAI Comment on Vertical Mergers)
To the extent that the proposed guidelines countenance heightened enforcement relative to the status quo, they fall prey to the same defect. And while it is unclear from the fairly terse guidelines whether this is animating them, the removal of language present in the 1984 Non-Horizontal Merger Guidelines acknowledging the relative lack of harm from vertical mergers (“[a]lthough non-horizontal mergers are less likely than horizontal mergers to create competitive problems…”) is concerning.
The shortcomings of orthodox economics and static formal analysis
There is a further reason to think that vertical merger enforcement may be more likely to thwart procompetitive than anticompetitive arrangements relative to the status quo ante (i.e., where arrangements among vertical firms are by contract): our lack of knowledge about the effects of market structure and firm organization on innovation and dynamic competition, and the relative hostility to nonstandard contracting, including vertical integration:
[T]he literature addressing how market structure affects innovation (and vice versa) in the end reveals an ambiguous relationship in which factors unrelated to competition play an important role. (Katz & Shelanski, Mergers and Innovation)
The fixation on the equivalency of the form of vertical integration (i.e., merger versus contract) is likely to lead enforcers to focus on static price and cost effects, and miss the dynamic organizational and informational effects that lead to unexpected, increased innovation across and within firms.
In the hands of Oliver Williamson, this means that understanding firms in the real world entails taking an organization theory approach, in contrast to the “orthodox” economic perspective:
The lens of contract approach to the study of economic organization is partly complementary but also partly rival to the orthodox [neoclassical economic] lens of choice. Specifically, whereas the latter focuses on simple market exchange, the lens of contract is predominantly concerned with the complex contracts. Among the major differences is that non‐standard and unfamiliar contractual practices and organizational structures that orthodoxy interprets as manifestations of monopoly are often perceived to serve economizing purposes under the lens of contract. A major reason for these and other differences is that orthodoxy is dismissive of organization theory whereas organization theory provides conceptual foundations for the lens of contract. (emphasis added)
Enforcers are thus more likely to miss the benefits when mergers solve market inefficiencies, and more likely to see harm when mergers impose static costs — even if the apparent costs actually represent a move from less efficient contractual arrangements to more efficient integration.
The competition that takes place in the real world and between various groups ultimately depends upon the institution of private contracts, many of which, including the firm itself, are nonstandard. Innovation includes the discovery of new organizational forms and the application of old forms to new contexts. Such contracts prevent or attenuate market failure, moving the market toward what economists would deem a more competitive result. Indeed, as Professor Coase pointed out, many markets deemed “perfectly competitive” are in fact the end result of complex contracts limiting rivalry between competitors. This contractual competition cannot produce perfect results — no human institution ever can. Nonetheless, the result is superior to that which would obtain in a (real) world without nonstandard contracting. These contracts do not depend upon the creation or enhancement of market power and thus do not produce the evils against which antitrust law is directed. (Alan Meese, Price Theory Competition & the Rule of Reason)
The pinched focus of the guidelines on narrow market definition misses the bigger picture of dynamic competition over time
The proposed guidelines (and the theories of harm undergirding them) focus upon indicia of market power that may not be accurate if assessed in more realistic markets or over more relevant timeframes, and, if applied too literally, may bias enforcement against mergers with dynamic-innovation benefits but static-competition costs.
Similarly, the proposed guidelines’ enumeration of potential efficiencies doesn’t really begin to cover the categories implicated by the organization of enterprise around dynamic considerations.
The proposed guidelines’ efficiencies section notes that:
Vertical mergers bring together assets used at different levels in the supply chain to make a final product. A single firm able to coordinate how these assets are used may be able to streamline production, inventory management, or distribution, or create innovative products in ways that would have been hard to achieve through arm’s length contracts. (emphasis added)
But it is not clear that any of these categories encompasses organizational decisions made to facilitate the coordination of production and commercialization when they are dependent upon intangible assets.
As Thomas Jorde and David Teece write:
For innovations to be commercialized, the economic system must somehow assemble all the relevant complementary assets and create a dynamically-efficient interactive system of learning and information exchange. The necessary complementary assets can conceivably be assembled by either administrative or market processes, as when the innovator simply licenses the technology to firms that already own or are willing to create the relevant assets. These organizational choices have received scant attention in the context of innovation. Indeed, the serial model relies on an implicit belief that arm’s-length contracts between unaffiliated firms in the vertical chain from research to customer will suffice to commercialize technology. In particular, there has been little consideration of how complex contractual arrangements among firms can assist commercialization — that is, translating R&D capability into profitable new products and processes….
When IP protection for a given set of valuable pieces of “know-how” is strong — easily defendable, unique patents, for example — firms can rely on property rights to efficiently contract with vertical buyers and sellers. But in cases where the valuable “know how” is less easily defended as IP — e.g. business process innovation, managerial experience, distributed knowledge, corporate culture, and the like — the ability to partially vertically integrate through contract becomes more difficult, if not impossible.
Perhaps employing these assets is part of what is meant in the draft guidelines by “streamline.” But the very mention of innovation only in the technological context of product innovation is at least some indication that organizational innovation is not clearly contemplated.
This is a significant lacuna. The impact of each organizational form on knowledge transfers creates a particularly strong division between integration and contract. As Enghin Atalay, Ali Hortaçsu & Chad Syverson point out:
That vertical integration is often about transfers of intangible inputs rather than physical ones may seem unusual at first glance. However, as observed by Arrow (1975) and Teece (1982), it is precisely in the transfer of nonphysical knowledge inputs that the market, with its associated contractual framework, is most likely to fail to be a viable substitute for the firm. Moreover, many theories of the firm, including the four “elemental” theories as identified by Gibbons (2005), do not explicitly invoke physical input transfers in their explanations for vertical integration. (Enghin Atalay, et al., Vertical Integration and Input Flows) (emphasis added)
There is a large economics and organization theory literature discussing how organizations are structured with respect to these sorts of intangible assets. And the upshot is that, while we start — not end, as some would have it — with the Coasian insight that firm boundaries are necessarily a function of production processes and not a hard limit, we quickly come to realize that it is emphatically not the case that integration-via-contract and integration-via-merger are always, or perhaps even often, viable substitutes.
Conclusion
The contract/merger equivalency assumption, coupled with a “least-restrictive alternative” logic that favors contract over merger, puts a thumb on the scale against vertical mergers. While the proposed guidelines as currently drafted do not necessarily portend the inflexible, formalistic application of this logic, they offer little to guide enforcers or courts away from the assumption in the important (and perhaps numerous) cases where it is unwarranted.
[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.
This post is authored by Geoffrey A. Manne (President & Founder, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics); and Kristian Stout (Associate Director, ICLE).]
Although it is doubtless correct that the 1984 guidelines don’t reflect the latest economic knowledge, it is by no means clear that this has actually been a problem — or that a new set of guidelines wouldn’t create even greater problems. Indeed, as others have noted in this symposium, there is a great deal of ambiguity in the proposed guidelines that could lead either to uncertainty as to how the agencies will exercise their discretion, or, more troublingly, could lead courts to take seriously speculative theories of harm.
We can do little better in expressing our reservations that new guidelines are needed than did the current Chairman of the FTC, Joe Simons, writing on this very blog in a symposium on what became the 2010 Horizontal Merger Guidelines. In a post entitled, Revisions to the Merger Guidelines: Above All, Do No Harm, Simons writes:
My sense is that there is no need to revise the DOJ/FTC Horizontal Merger Guidelines, with one exception…. The current guidelines lay out the general framework quite well and any change in language relative to that framework are likely to create more confusion rather than less. Based on my own experience, the business community has had a good sense of how the agencies conduct merger analysis…. If, however, the current administration intends to materially change the way merger analysis is conducted at the agencies, then perhaps greater revision makes more sense. But even then, perhaps the best approach is to try out some of the contemplated changes (i.e. in actual investigations) and publicize them in speeches and the like before memorializing them in a document that is likely to have some substantial permanence to it.
Wise words. Unless, of course, “the current [FTC] intends to materially change the way [vertical] merger analysis is conducted.” But the draft guidelines don’t really appear to portend a substantial change, and in several ways they pretty accurately reflect agency practice.
What we want to draw attention to, however, is an implicit underpinning of the draft guidelines that we believe the agencies should clearly disavow (or at least explain more clearly the complexity surrounding): the extent and implications of the presumed functional equivalence of vertical integration by contract and by merger — the contract/merger equivalency assumption.
Vertical mergers and their discontents
The contract/merger equivalency assumption has been gaining traction with antitrust scholars, but it is perhaps most clearly represented in some of Steve Salop’s work. Salop generally believes that vertical merger enforcement should be heightened. Among his criticisms of current enforcement is his contention that efficiencies that can be realized by merger can often also be achieved by contract. As he discussed during his keynote presentation at last year’s FTC hearing on vertical mergers:
And, finally, the key policy issue is the issue is not about whether or not there are efficiencies; the issue is whether the efficiencies are merger-specific. As I pointed out before, Coase stressed that you can get vertical integration by contract. Very often, you can achieve the vertical efficiencies if they occur, but with contracts rather than having to merge.
And later, in the discussion following his talk:
If there is vertical integration by contract… it meant you could get all the efficiencies from vertical integration with a contract. You did not actually need the vertical integration.
Salop thus argues that because the existence of a “contract solution” to firm problems can often generate the same sorts of efficiencies as when firms opt to merge, enforcers and courts should generally adopt a presumption against vertical mergers relative to contracting:
Coase’s door swings both ways: Efficiencies often can be achieved by vertical contracts, without the potential anticompetitive harms from merger.
In that vertical restraints are characterized as “just” vertical integration “by contract,” then claimed efficiencies in problematical mergers might be achieved with non-merger contracts that do not raise the same anticompetitive concerns. (emphasis in original)
(Salop isn’t alone in drawing such a conclusion, of course; Carl Shapiro, for example, has made a similar point (as have others)).
In our next post we explore the policy errors implicated by this contract/merger equivalency assumption. But here we want to consider whether it makes logical sense in the first place.
The logic of vertical integration is not commutative
It is true that, where contracts are observed, they are likely as efficient as (or, indeed, more efficient than) merger. But, by the same token, it is also true that where mergers are observed they are likely more efficient than contracts. Indeed, the entire reason for integration is efficiency relative to what could be done by contract — this is the essence of the so-called “make-or-buy” decision.
For example, a firm that decides to buy its own warehouse has determined that doing so is more efficient than renting warehouse space. Some of these efficiencies can be measured and quantified (e.g., carrying costs of ownership vs. the cost of rent), but many efficiencies cannot be easily measured or quantified (e.g., layout of the facility or site security). Under the contract/merger equivalency assumption, the benefits of owning a warehouse can be achieved “very often” by renting warehouse space. But the fact that many firms using warehouses own some space and rent some space indicates that the make-or-buy decision is often unique to each firm’s idiosyncratic situation. Moreover, the distinctions driving those differences will not always be readily apparent, and whether contracting or integrating is preferable in any given situation may not be inferred from the existence of one or the other elsewhere in the market — or even in the same firm!
There is no reason to presume in any given situation that the outcome from contracting would be the same as from merging, even where both are notionally feasible. The two are, quite simply, different bargaining environments, each with a different risk and cost allocation; accounting treatment; effect on employees, customers, and investors; tax consequences; and so on. Even if the parties accomplished nominally “identical” outcomes, they would not, in fact, be identical.
Meanwhile, what if the reason for failure to contract, or the reason to prefer merger, has nothing to do with efficiency? What if there were no anticompetitive aim but there were a tax advantage? What if one of the parties just wanted a larger firm in order to satisfy the CEO’s ego? That these are not cognizable efficiencies under antitrust law is clear. But adopting a presumption of equivalence between contract and merger would, ironically, incorporate them into antitrust law just the same, by effectively prohibiting the mergers they motivate.
In other words, if the assumption is that contract and merger are equally efficient unless proven otherwise, but the law adopts a suspicion (or, even worse, a presumption) that vertical mergers are anticompetitive which can be rebutted only with highly burdensome evidence of net efficiency gain, this effectively deputizes antitrust law to enforce a preconceived notion of “merger appropriateness” that does not necessarily turn on efficiencies. There may (or may not) be sensible policy reasons for adopting such a stance, but they aren’t antitrust reasons.
More fundamentally, however, while there are surely some situations in which contractual restraints might be able to achieve organizational and efficiency gains similar to those of a merger, the practical feasibility of achieving not just greater efficiency, but a whole host of non-efficiency-related, yet nonetheless valid, goals, is rarely equivalent between the two.
It may be that the parties don’t know what they don’t know to such an extent that a contract would be too costly because it would be too incomplete, for example. But incomplete contracts and ambiguous control and ownership rights aren’t (as much of) an issue on an ongoing basis after a merger.
As noted, there is no basis for assuming that the structure of a merger and a contract would be identical. In the same way, there is no basis for assuming that the knowledge transfer that would result from a merger would be the same as that which would result from a contract — and in ways that the parties could even specify or reliably calculate in advance. Knowing that the prospect for knowledge “synergies” would be higher with a merger than a contract might be sufficient to induce the merger outcome. But asked to provide evidence that the parties could not engage in the same conduct via contract, the parties would be unable to do so. The consequence, then, would be the loss of potential gains from closer integration.
At the same time, the cavalier assumption that parties would be able — legally — to enter into an analogous contract in lieu of a merger is problematic, given that it would likely be precisely the form of contract (foreclosing downstream or upstream access) that is alleged to create problems with the merger in the first place.
I want to reemphasize that there are also rules against vertical restraints in antitrust laws, and so to say that the firms could achieve the mergers outcome by using vertical restraints is kind of putting them in a circular motion where we are telling them you cannot merge because you could do it by contract, and then we say, but these contract terms are not acceptable.
Indeed, legal risk is one of the reasons why a merger might be preferable to a contract, and because the relevant markets here are oligopoly markets, the possibility of impermissible vertical restraints between large firms with significant market share is quite real.
More important, the assumptions underlying the contention that contracts and mergers are functionally equivalent legal devices fail to appreciate the importance of varied institutional environments. Consider that one reason some takeovers are hostile is because incumbent managers don’t want to merge, and often believe that they are running a company as well as it can be run — that a change of corporate control would not improve efficiency. The same presumptions may also underlie refusals to contract and, even more likely, may explain why, to the other firm, a contract would be ineffective.
But, while there is no way to contract without bilateral agreement, there is a corporate control mechanism to force a takeover. In this institutional environment a merger may be easier to realize than a contract (and that applies even to a consensual merger, of course, given the hostile outside option). In this case, again, the assumption that contract should be the relevant baseline and the preferred mechanism for coordination is misplaced — even if other firms in the industry are successfully accomplishing the same thing via contract, and even if a contract would be more “efficient” in the abstract.
Conclusion
Properly understood, the choice of whether to contract or merge derives from a host of complicated factors, many of which are difficult to observe and/or quantify. The contract/merger equivalency assumption — and the species of “least-restrictive alternative” reasoning that would demand onerous efficiency arguments to permit a merger when a contract was notionally possible — too readily glosses over these complications and unjustifiably embraces a relative hostility to vertical mergers at odds with both theory and evidence.
Rather, as has long been broadly recognized, there can be no legally relevant presumption drawn against a company when it chooses one method of vertical integration over another in the general case. The agencies should clarify in the draft guidelines that the mere possibility of integration via contract or the inability of merging parties to rigorously describe and quantify efficiencies does not condemn a proposed merger.
[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.
This post is authored by Lawrence J. White (Robert Kavesh Professor of Economics, New York University; former Chief Economist, DOJ Antitrust Division).]
The DOJ/FTC Draft Vertical Merger Guidelines establish a “safe harbor” of a 20% market share for each of the merging parties. But the issue of defining the relevant “market” to which the 20% would apply is not well addressed.
Although reference is made to the market definition paradigm that is offered by the DOJ’s and FTC’s Horizontal Merger Guidelines (“HMGs”), what is neglected is the following: Under the “unilateral effects” theory of competitive harm of the HMGs, the horizontal merger of two firms that sell differentiated products that are imperfect substitutes could lead to significant price increases if the second-choice product for a significant fraction of each of the merging firms’ customers is sold by the partner firm. Such unilateral-effects instances are revealed by examining detailed sales and substitution data with respect to the customers of only the two merging firms.
In such instances, the true “relevant market” is simply the products that are sold by the two firms, and the merger is effectively a “2-to-1” merger. Under these circumstances, any apparently broader market (perhaps based on physical or functional similarities of products) is misleading, and the “market” shares of the merging parties that are based on that broader market are under-representations of the potential for their post-merger exercise of market power.
With a vertical merger, the potential for similar unilateral effects* would have to be captured by examining the detailed sales and substitution patterns of each of the merging firms with all of their significant horizontal competitors. This will require a substantial, data-intensive effort. And, of course, if this effort is not undertaken and an erroneously broader market is designated, the 20% “market” share threshold will understate the potential for competitive harm from a proposed vertical merger.
* With a vertical merger, such “unilateral effects” could arise post-merger in two ways: (a) The downstream partner could maintain a higher price, since some of the lost profits from some of the lost sales could be recaptured by the upstream partner’s profits on the sales of components to the downstream rivals (which gain some of the lost sales); and (b) the upstream partner could maintain a higher price to the downstream rivals, since some of the latter firms’ customers (and the concomitant profits) would be captured by the downstream partner.
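The footnote's mechanism (a) is, at bottom, a diversion-and-recapture argument, and it can be made concrete with rough, GUPPI-style arithmetic. The sketch below is purely illustrative: the function, parameter names, and numbers are our hypothetical constructions, not a method prescribed by the draft guidelines.

```python
# Hypothetical back-of-the-envelope sketch of mechanism (a): after a
# vertical merger, the downstream partner's incentive to raise price is
# softened, because some lost sales divert to downstream rivals who buy
# their input from the upstream partner, recapturing part of the profit.
# All parameter names and numbers are illustrative assumptions.

def vertical_guppi(diversion_to_rivals, rivals_input_share,
                   upstream_margin, downstream_price):
    """Crude upward-pricing-pressure index for the downstream partner.

    diversion_to_rivals: fraction of the downstream partner's lost sales
        that flow to downstream rivals (0..1)
    rivals_input_share: fraction of rivals' units built with the
        upstream partner's component (0..1)
    upstream_margin: upstream partner's profit per component sold
    downstream_price: price of the downstream product
    """
    # Profit recaptured upstream per unit of downstream sales lost,
    # expressed relative to the downstream price.
    recaptured = diversion_to_rivals * rivals_input_share * upstream_margin
    return recaptured / downstream_price

# Illustrative numbers: 60% of lost sales divert to rivals, half of
# rivals' units use the upstream partner's $30-margin component, and
# the downstream product sells for $200.
uppi = vertical_guppi(0.6, 0.5, 30.0, 200.0)
print(f"vertical GUPPI ≈ {uppi:.3f}")  # 0.6 * 0.5 * 30 / 200 = 0.045
```

The point of the exercise is not the particular index but the data burden it implies: each input is a detailed sales, substitution, or margin figure of exactly the kind the post says would have to be collected for the merging firms and all of their significant horizontal competitors.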
[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.
This post is authored by Jan Rybnicek (Counsel at Freshfields Bruckhaus Deringer US LLP in Washington, D.C. and Senior Fellow and Adjunct Professor at the Global Antitrust Institute at the Antonin Scalia Law School at George Mason University).]
In an area where it may seem that agreement is rare, there is near universal agreement on the benefits of withdrawing the DOJ’s 1984 Non-Horizontal Merger Guidelines. The 1984 Guidelines do not reflect current agency thinking on vertical mergers and are not relied upon by businesses or practitioners to anticipate how the agencies may review a vertical transaction. The more difficult question is whether the agencies should now replace the 1984 Guidelines and, if so, what the modern guidelines should say.
There are several important reasons that counsel against issuing new vertical merger guidelines (VMGs). Most significantly, we likely are better off without new VMGs because they invariably will (1) send the wrong message to agency staff about the relative importance of vertical merger enforcement compared to other agency priorities, (2) create new sufficient conditions that tend to trigger wasteful investigations and erroneous enforcement actions, and (3) add very little, if anything, to our understanding of when the agencies will or will not pursue an in-depth investigation or enforcement action of a vertical merger.
Unfortunately, these problems are magnified rather than mitigated by the draft VMGs. But it is unlikely at this point that the agencies will hit the brakes and not issue new VMGs. The agencies therefore should make several key changes that would help prevent the final VMGs from causing more harm than good.
What is the Purpose of Agency Guidelines?
Before we can have a meaningful conversation about whether the draft VMGs are good or bad for the world, or how they can be improved to ensure they contribute positively to antitrust law, it is important to identify, and have a shared understanding about, the purpose of guidelines and their potential benefits.
In general, I am supportive of guidelines. In fact, I helped urge the FTC to issue its 2015 Policy Statement articulating the agency’s enforcement principles under its Section 5 Unfair Methods of Competition authority. As I have written before, guidelines can be useful if they accomplish two important goals: (1) provide insight and transparency to businesses and practitioners about the agencies’ analytical approach to an issue and (2) offer agency staff direction as to agency priorities while cabining the agencies’ broad discretion by tethering investigational or enforcement decisions to those guidelines. An additional benefit may be that the guidelines also could prove useful to courts interpreting or applying the antitrust laws.
Transparency is important for the obvious reason that it allows the business community and practitioners to know how the agencies will apply the antitrust laws and thereby allows them to evaluate if a specific merger or business arrangement is likely to receive scrutiny. But guidelines are not only consumed by the public. They also are used by agency staff. As a result, guidelines invariably influence how staff approaches a matter, including whether to open an investigation, how in-depth that investigation is, and whether to recommend an enforcement action. Lastly, for guidelines to be meaningful, they also must accurately reflect agency practice, which requires the agencies’ analysis to be tethered to an analytical framework.
As discussed below, there are many reasons to doubt that the draft VMGs can deliver on these goals.
Draft VMGs Will Lead to Bad Enforcement Policy While Providing Little Benefit
A chief concern with VMGs is that they will inadvertently usher in a new enforcement regime that treats horizontal and vertical mergers as co-equal enforcement priorities despite the mountain of evidence, not to mention simple logic, that mergers among competitors are a significantly greater threat to competition than are vertical mergers. The draft VMGs exacerbate rather than mitigate this risk by creating a false equivalence between vertical and horizontal merger enforcement and by establishing new minimum conditions that are likely to lead the agencies to pursue wasteful investigations of vertical transactions. And the draft VMGs do all this without meaningfully advancing our understanding of the conditions under which the agencies are likely to pursue investigations and enforcement against vertical mergers.
1. No Recognition of the Differences Between Horizontal and Vertical Mergers
One striking feature of the draft VMGs is that they fail to contextualize vertical mergers in the broader antitrust landscape. As a result, it is easy to walk away from the draft VMGs with the impression that vertical mergers are as likely to lead to anticompetitive harm as are horizontal mergers. That is a position not supported by the economic evidence or logic. It is of course true that vertical mergers can result in competitive harm; that is not a seriously contested point. But it is important to acknowledge and provide background for why that harm is significantly less likely than in horizontal cases. That difference should inform agency enforcement priorities. Perhaps because of this lack of framing, the draft VMGs tend to speak more about when the agencies may identify competitive harm than about when they will not.
The draft VMGs would benefit greatly from a more comprehensive approach to understanding vertical merger transactions. The agencies should add language explaining that, whereas a consensus exists that eliminating a direct competitor always tends to increase the risk of unilateral effects (although often trivially), there is no such consensus that harm will result from the combination of complementary assets. In fact, the current evidence shows such vertical transactions tend to be procompetitive. Absent such language, the VMGs will over time misguidedly focus more agency resources into investigating vertical mergers where there is unlikely to be harm (with inevitably more enforcement errors) and less time on more important priorities, such as pursuing enforcement of anticompetitive horizontal transactions.
2. The 20% Safe Harbor Provides No Harbor and Will Become a Sufficient Condition
The draft VMGs attempt to provide businesses with guidance about the types of transactions the agencies will not investigate by articulating a market share safe harbor. But that safe harbor (1) does not appear to be grounded in any evidence, (2) is surprisingly low in comparison to the EU vertical merger guidelines, and (3) is likely to become a sufficient condition to trigger an in-depth investigation or enforcement action.
The draft VMGs state:
The Agencies are unlikely to challenge a vertical merger where the parties to the merger have a share in the relevant market of less than 20%, and the related product is used in less than 20% of the relevant market.
But in the very next sentence the draft VMGs render the safe harbor virtually meaningless, stating:
In some circumstances, mergers with shares below the threshold can give rise to competitive concerns.
This caveat comes despite the fact that the 20% threshold is low compared to other jurisdictions. Indeed, the EU’s guidelines create a 30% safe harbor. Nor is it clear what the basis is for the 20% threshold, either in economics or law. While it is important for the agencies to remain flexible, too much flexibility will render the draft VMGs meaningless. The draft VMGs should be less equivocal about the types of mergers that will not receive significant scrutiny and are unlikely to be the subject of enforcement action.
What may be most troubling about the market share safe harbor is the likelihood that it will establish general enforcement norms that did not previously exist. It is likely that agency staff will soon interpret (despite language stating otherwise) the 20% market share as the minimum necessary condition to open an in-depth investigation and to pursue an enforcement action. We have seen other guidelines’ tools have similar effects on agency analysis before (see, e.g., GUPPIs). This risk is only exacerbated where the safe harbor is not a true safe harbor that provides businesses with clarity on enforcement priorities.
3. Requirements for Proving EDM and Efficiencies Fail to Recognize Vertical Merger Context
The draft VMGs minimize the significant role of EDM and efficiencies in vertical mergers. The agencies frequently take a skeptical approach to efficiencies in the context of horizontal mergers and it is well-known that the hurdle to substantiate efficiencies is difficult, if not impossible, to meet. The draft VMGs oddly continue this skeptical approach by specifically referencing the standards discussed in the horizontal merger guidelines for efficiencies when discussing EDM and vertical merger efficiencies. The draft VMGs do not recognize that the combination of complementary products is inherently more likely to generate efficiencies than in horizontal mergers between competitors. The draft VMGs also oddly discuss EDM and efficiencies in separate sections and spend a trivial amount of time on what is the core motivating feature of vertical mergers. Even the discussion of EDM is as much about where there may be exceptions to EDM as it is about making clear the uncontroversial view that EDM is frequent in vertical transactions. Without acknowledging the inherent nature of EDM and efficiencies more generally, the final VMGs will send the wrong message that vertical merger enforcement should be on par with horizontal merger enforcement.
4. No New Insights into How Agencies Will Assess Vertical Mergers
Some might argue that the costs associated with the draft VMGs nevertheless are tolerable because the guidelines offer significant benefits that far outweigh their costs. But that is not the case here. The draft VMGs provide no new information about how the agencies will review vertical merger transactions and under what circumstances they are likely to seek enforcement actions. And that is because it is a difficult if not impossible task to identify any such general guiding principles. Indeed, unlike in the context of horizontal transactions where an increase in market power informs our thinking about the likely competitive effects, greater market power in the context of a vertical transaction that combines complements creates downward pricing pressure that often will dominate any potential competitive harm.
The draft VMGs do what they can, though, which is to describe in general terms several theories of harm. But the benefits from that exercise are modest and do not outweigh the significant risks discussed above. The theories described are neither novel nor unknown to the public today. Nor do the draft VMGs explain any significant new thinking on vertical mergers, likely because there has been none that can provide insight into general enforcement principles. The draft VMGs also do not clarify changes to statutory text (because it has not changed) or otherwise clarify judicial rulings or past enforcement actions. As a result, the draft VMGs do not offer sufficient benefits that would outweigh their substantial cost.
Conclusion
Despite these concerns, it is worth acknowledging the work the FTC and DOJ have put into preparing the draft VMGs. It is no small task to articulate a unified position between the two agencies on an issue such as vertical merger enforcement where so many have such strong views. To the agencies’ credit, the VMGs are restrained in not including novel or more adventurous theories of harm. I anticipate the DOJ and FTC will engage with commentators and take the feedback seriously as they work to improve the final VMGs.
[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.
This post is authored by Sharis Pozen (Partner, Clifford Chance; former Vice President of Global Competition Law and Policy, GE; former Acting Assistant Attorney General, DOJ Antitrust Division); with Timothy Cornell (Partner, Clifford Chance); Brian Concklin (Counsel, Clifford Chance); and Michael Van Arsdall (Counsel, Clifford Chance).]
The draft Vertical Merger Guidelines (“Guidelines”) miss a real opportunity to provide businesses with consistent guidance across jurisdictions and to harmonize the international approach to vertical merger review.
As drafted, the Guidelines indicate the agencies will evaluate market shares and concentration — measured using the same methodology described in the long-standing Horizontal Merger Guidelines — but not use these metrics as a “rigid screen.” On that basis, the Guidelines establish a “soft” 20 percent threshold, where the agencies are “unlikely to challenge a vertical merger” if the merging parties have less than a 20 percent share of the relevant market and the related product is used in less than 20 percent of the relevant market.
We suggest, instead, that the Guidelines be aligned with those of other jurisdictions, namely the EU non-horizontal merger guidelines [for an extended discussion of which, see Bill Kolasky’s symposium post here —ed.]. The European Commission’s guidelines state that the Commission is “unlikely to find concern” with a vertical merger where the merged entity’s share in each of the relevant markets is below 30 percent and the post-merger HHI is below 2,000. Among others, Japan and Chile employ a similarly higher bar than the Guidelines. A discrepancy between the U.S. and other international guidelines causes unnecessary uncertainty within the business and legal communities and could lead to inconsistent enforcement outcomes.
In any event, beyond the dangers created by a lack of international harmonization, setting the threshold at 20 percent seems arbitrarily low. It fails to recognize the inherently procompetitive nature of the majority of vertical combinations, and could result in false positives, undue cost, and delay.
[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.
This post is authored by Eric Fruits (Chief Economist, International Center for Law & Economics and Professor of Economics, Portland State University).]
Vertical mergers are messy. They’re messy for the merging firms and they’re especially messy for regulators charged with advancing competition without advantaging competitors. Firms rarely undertake a vertical merger with an eye toward monopolizing a market. Nevertheless, competitors and competition authorities excel at conjuring up complex models that reveal potentially harmful consequences stemming from vertical mergers. In their post, Gregory J. Werden and Luke M. Froeb highlight the challenges in evaluating vertical mergers:
[V]ertical mergers produce anticompetitive effects only through indirect mechanisms with many moving parts, which makes the prediction of competitive effects from vertical mergers more complex and less certain.
There’s a recurring theme throughout this symposium: The current Vertical Merger Guidelines should be updated; the draft Guidelines are a good start, but they raise more questions than they answer. Other symposium posts have hit on the key ups and downs of the draft Guidelines.
In this post, I use the draft Guidelines’ examples to highlight how messy vertical mergers can be. The draft Guidelines’ examples are meant to clarify the government’s thinking on markets and mergers. In the end, however, they demonstrate the complexity in identifying relevant markets, related products, and the dynamic interaction of competition. I will focus on two examples provided in the draft Guidelines. Warning: you’re going to read a lot about oranges.
In the following example from the draft Guidelines, the relevant market is the wholesale supply of orange juice in region X and Company B’s supply of oranges is the related product:
Example 2: Company A is a wholesale supplier of orange juice. It seeks to acquire Company B, an owner of orange orchards. The Agencies may consider whether the merger would lessen competition in the wholesale supply of orange juice in region X (the relevant market). The Agencies may identify Company B’s supply of oranges as the related product. Company B’s oranges are used in fifteen percent of the sales in the relevant market for wholesale supply of orange juice. The Agencies may consider the share of fifteen percent as one indicator of the competitive significance of the related product to participants in the relevant market.
The figure below illustrates one hypothetical structure. Company B supplies an equal amount of oranges to Company A and two other wholesalers, C and D, totaling 15 percent of orange juice sales in region X. Orchards owned by others account for the remaining 85 percent. For the sake of argument, assume there are four wholesalers of equal size (the fourth buying nothing from B), in which case Company B’s orchard would supply 20 percent of the oranges used by wholesalers A, C, and D.
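The share arithmetic in Example 2 can be checked with a quick back-of-the-envelope calculation. A market of four equally sized wholesalers, only three of which buy from B, is one hypothetical structure consistent with the example’s numbers; it is my illustrative assumption, not something stated in the draft Guidelines:

```python
# Back-of-the-envelope check of the shares in Example 2.
# Assumed structure (illustrative only): four equally sized wholesalers;
# B splits its output evenly among three of them (A, C, D).
n_wholesalers = 4
wholesaler_share = 1 / n_wholesalers        # each wholesaler sells 25% of juice

b_supply_per_customer = 0.05                # B ships 5 points of market volume to each of A, C, D
b_total_supply = 3 * b_supply_per_customer  # 0.15 -> B's oranges back 15% of juice sales

# Fraction of each customer's orange needs supplied by B:
b_share_of_customer = b_supply_per_customer / wholesaler_share  # 0.20

print(f"B's oranges back {b_total_supply:.0%} of juice sales")
print(f"B supplies {b_share_of_customer:.0%} of A, C, and D's oranges")
```

With those assumptions, the example’s 15 percent market figure and the 20 percent customer-level figure fall out directly.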
Orange juice sold in a particular region is just one of many uses for oranges. The juice can be sold as fresh liquid, liquid from concentrate, or frozen concentrate. The fruit can be sold as fresh produce or it can be canned, frozen, or processed into marmalade. Many of these products can be sold outside of a particular region and can be sold outside of the United States. This is important in considering the next example from the draft Guidelines.
Example 3: In Example 2, the merged firm may be able to profitably stop supplying oranges (the related product) to rival orange juice suppliers (in the relevant market). The merged firm will lose the margin on the foregone sales of oranges but may benefit from increased sales of orange juice if foreclosed rivals would lose sales, and some of those sales were diverted to the merged firm. If the benefits outweighed the costs, the merged firm would find it profitable to foreclose. If the likely effect of the foreclosure were to substantially lessen competition in the orange juice market, the merger potentially raises significant competitive concerns and may warrant scrutiny.
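The profit test the example describes can be sketched as a simple comparison: the margin lost on foregone orange sales against the margin gained on juice sales diverted from foreclosed rivals. The margins, volumes, and diversion ratio below are hypothetical placeholders chosen only to illustrate the trade-off, not figures from the Guidelines:

```python
# Stylized "raising rivals' costs" profitability test for the merged firm.
# All numbers are hypothetical, chosen only to illustrate the comparison.
orange_margin = 2.0              # margin per unit on oranges sold to rival wholesalers
foreclosed_orange_units = 100.0  # orange sales given up by cutting off rivals

juice_margin = 6.0               # margin per unit of juice sold by the merged firm
rivals_lost_juice_units = 80.0   # juice sales rivals lose when their costs rise
diversion_ratio = 0.4            # share of rivals' lost sales captured by the merged firm

lost_orange_profit = orange_margin * foreclosed_orange_units                    # 200.0
gained_juice_profit = juice_margin * rivals_lost_juice_units * diversion_ratio  # 192.0

# Foreclosure pays only if the diverted juice profit exceeds the foregone orange margin.
print(f"Foreclosure profitable? {gained_juice_profit > lost_orange_profit}")
```

With these particular placeholder numbers foreclosure does not pay, which is the point: the answer turns entirely on empirical magnitudes that the Guidelines’ example leaves open.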
This is the classic example of raising rivals’ costs. Under the standard formulation, the merged firm will produce oranges at the orchard’s marginal cost — in theory, the price it pays for oranges would be the same both pre- and post-merger. If orchard B does not sell its oranges to the non-integrated wholesalers C, D, and E, the other orchards will be able to charge a price greater than their marginal cost of production and greater than the pre-merger market price for oranges. The higher price of oranges used by non-integrated wholesalers will then be reflected in higher prices for orange juice sold by the wholesalers.
The merged firm’s juice prices will be higher post-merger because its unintegrated rivals’ juice prices will be higher, thus increasing the merged firm’s profits. The merged firm and unintegrated orchards would be the “winners”; unintegrated wholesalers and consumers would be the “losers.” Under a consumer welfare standard the result could be deemed anticompetitive. Under a total welfare standard, anything goes.
But, the classic example of raising rivals’ costs is based on some strong assumptions. It assumes that, pre-merger, all upstream firms price at marginal cost, which means there is no double marginalization. It assumes all the upstream firms’ products are perfectly identical. It assumes unintegrated firms don’t respond by integrating themselves. If one or more of these assumptions is not correct, more complex models — with additional (potentially unprovable) assumptions — must be employed. What begins as a seemingly straightforward theoretical example is now a battle over which expert’s models best fit the facts and best predict the likely outcome.
In the draft Guidelines’ raising rivals’ costs example, it’s assumed the merged firm would refuse to sell oranges to rival downstream wholesalers. However, if rival orchards charge a sufficiently high price, the merged firm would profit from undercutting its rivals’ orange prices, while still charging a price greater than marginal cost. Thus, it’s not obvious that the merged firm has an incentive to cut off supply to downstream competitors. The extent of the pricing pressure on the merged firm to cheat on itself is an empirical matter that depends on how upstream and downstream firms react, or might react.
For example, using the figure above, if the merged firm stopped supplying oranges to rival wholesalers, then the merged firm’s orchard would supply 60 percent of the oranges used in the firm’s juice. Although wholesalers C and D would not get oranges from B’s orchards, they could obtain oranges from other orchards that are no longer supplying wholesaler A. In this case, the merged firm’s attempt at foreclosure would have no effect and there would be no harm to competition.
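The offsetting flows described in this paragraph can be checked with the same kind of back-of-the-envelope arithmetic, again assuming four equally sized wholesalers (a hypothetical structure consistent with the example’s numbers, not one specified in the Guidelines):

```python
# Post-foreclosure reallocation of oranges, in shares of total market juice volume.
# Assumes four equally sized wholesalers; B redirects all of its output to A.
wholesaler_share = 0.25   # merged firm A's share of juice sales
b_total_supply = 0.15     # B's oranges as a share of market juice volume
b_to_each_rival = 0.05    # what C and D each bought from B pre-merger

# B now covers this fraction of A's orange needs:
b_share_of_a = b_total_supply / wholesaler_share   # 0.60

# A's purchases from other orchards fall from 0.20 to 0.10 of market volume,
# freeing up that supply for the foreclosed rivals:
freed_from_other_orchards = (wholesaler_share - b_to_each_rival) - (wholesaler_share - b_total_supply)

# C and D together lose exactly as much from B as other orchards free up:
lost_by_rivals = 2 * b_to_each_rival
print(f"Freed supply covers rivals' loss: {abs(freed_from_other_orchards - lost_by_rivals) < 1e-9}")
```

Under these assumptions the oranges simply reshuffle among buyers, which is why the attempted foreclosure in this scenario leaves competition unharmed.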
It’s possible the merged firm would divert some or all of its oranges to a “secondary” market, removing those oranges from the juice market. Rather than juicing oranges, the merged firm may decide to sell them as fresh produce; fresh citrus fruits account for 7 percent of Florida’s crop and 75 percent of California’s. This diversion would lead to a decline in the supply of oranges for juice, and the price of this key input would rise.
But, as noted in the Guidelines’ example, this strategy would raise the merged firm’s costs along with its rivals’. Moreover, rival orchards can respond to this strategy by diverting their own oranges from “secondary” markets to the juice market, in which case there may be no significant effect on the price of juice oranges. What begins as a seemingly straightforward theoretical example is now a complicated empirical matter. Or worse, it may just be a battle over which expert is the most convincing fortune teller.
Moreover, the merged firm may have legitimate business reasons for the merger and legitimate business reasons for reducing the supply of oranges to juice wholesalers. For example, “citrus greening,” an incurable bacterial disease, has caused severe damage to Florida’s citrus industry, significantly reducing crop yields. A vertical merger could be one way to reduce supply risks. On the demand side, an increase in the demand for fresh oranges would guide firms to shift from juice and processed markets to the fresh market. What some would see as anticompetitive conduct, others would see as a natural and expected response to price signals.
Because of the many alternative uses for oranges, it’s overly simplistic to declare that the supply of orange juice in a specific region is “the” relevant market. Orchards face a myriad of options in selling their products. Misshapen fruit can be juiced fresh or as frozen concentrate; smaller fruit can be canned or jellied. “Perfect” fruit can be sold as fresh produce, juice, canned, or jellied. Vertical integration with a juice wholesaler adds just one factor to the myriad factors affecting how and where an upstream supplier sells its products. Just as there is no single relevant market, in many cases there is no single related product — a fact that is especially relevant in vertical relationships. Unfortunately, the draft Guidelines provide little guidance in these important areas.