The recent launch of the international Multilateral Pharmaceutical Merger Task Force (MPMTF) is just the latest example of burgeoning cooperative efforts by leading competition agencies to promote convergence in antitrust enforcement. (See my recent paper on the globalization of antitrust, which assesses multinational cooperation and convergence initiatives in greater detail.) In what is a first, the U.S. Federal Trade Commission (FTC), the U.S. Justice Department’s (DOJ) Antitrust Division, offices of state Attorneys General, the European Commission’s Competition Directorate, Canada’s Competition Bureau, and the U.K.’s Competition and Market Authority (CMA) jointly created the MPMTF in March 2021 “to update their approach to analyzing the effects of pharmaceutical mergers.”
To help inform its analysis, in May 2021 the MPMTF requested public comments concerning the effects of pharmaceutical mergers. The MPMTF sought submissions regarding (among other issues) seven sets of questions:
What theories of harm should enforcement agencies consider when evaluating pharmaceutical mergers, including theories of harm beyond those currently considered?
What is the full range of a pharmaceutical merger’s effects on innovation? What challenges arise when mergers involve proprietary drug discovery and manufacturing platforms?
In pharmaceutical merger review, how should we consider the risks or effects of conduct such as price-setting practices, reverse payments, and other ways in which pharmaceutical companies respond to or rely on regulatory processes?
How should we approach market definition in pharmaceutical mergers, and how is that implicated by new or evolving theories of harm?
What evidence may be relevant or necessary to assess and, if applicable, challenge a pharmaceutical merger based on any new or expanded theories of harm?
What types of remedies would work in the cases to which those theories are applied?
What factors, such as the scope of assets and characteristics of divestiture buyers, influence the likelihood and success of pharmaceutical divestitures to resolve competitive concerns?
My research assistant Andrew Mercado and I recently submitted comments for the record addressing the questions posed by the MPMTF. We concluded:
Federal merger enforcement in general and FTC pharmaceutical merger enforcement in particular have been effective in promoting competition and consumer welfare. Proposed statutory amendments to strengthen merger enforcement not only are unnecessary, but also would, if enacted, tend to undermine welfare and would thus be poor public policy. A brief analysis of seven questions propounded by the Multilateral Pharmaceutical Merger Task Force suggests that: (a) significant changes in enforcement policies are not warranted; and (b) investigators should employ sound law and economics analysis, taking full account of merger-related efficiencies, when evaluating pharmaceutical mergers.
While we leave it to interested readers to review our specific comments, this commentary highlights one key issue which we stressed—the importance of giving due weight to efficiencies (and, in particular, dynamic efficiencies) in evaluating pharma mergers. We also note an important critique by FTC Commissioner Christine Wilson of the treatment accorded merger-related efficiencies by U.S. antitrust enforcers.
Innovation in pharmaceuticals and vaccines has immensely significant economic and social consequences, as demonstrated most recently in the handling of the COVID-19 pandemic. As such, it is particularly important that public policy not stand in the way of realizing efficiencies that promote innovation in these markets. This observation applies directly, of course, to pharmaceutical antitrust enforcement, in general, and to pharma merger enforcement, in particular.
Regrettably, however, while general merger-enforcement policy has been sound, it has somewhat undervalued merger-related efficiencies.
Although U.S. antitrust enforcers give lip service to their serious consideration of efficiencies in merger reviews, the reality appears to be quite different, as documented by Commissioner Wilson in a 2020 speech.
Wilson’s General Merger-Efficiencies Critique: According to Wilson, the combination of finding narrow markets and refusing to weigh out-of-market efficiencies has created major “legal and evidentiary hurdles a defendant must clear when seeking to prove offsetting procompetitive efficiencies.” What’s more, the “courts [have] largely continue[d] to follow the Agencies’ lead in minimizing the importance of efficiencies.” Wilson shows that “the Horizontal Merger Guidelines text and case law appear to set different standards for demonstrating harms and efficiencies,” and argues that this “asymmetric approach has the obvious potential consequence of preventing some procompetitive mergers that increase consumer welfare.” Wilson concludes on a more positive note that this problem can be addressed by having enforcers: (1) treat harms and efficiencies symmetrically; and (2) establish clear and reasonable expectations for what types of efficiency analysis will and will not pass muster.
While our filing with the MPMTF did not discuss Wilson’s general treatment of merger efficiencies, one would hope that the task force will appropriately weigh it in its deliberations. Our filing instead briefly addressed two “informational efficiencies” that may arise in the context of pharmaceutical mergers. These include:
More Efficient Resource Reallocation: The theory of the firm teaches that mergers may be motivated by the underutilization or misallocation of assets, or the opportunity to create welfare-enhancing synergies. In the pharmaceutical industry, these synergies may come from joining complementary research and development programs, combining diverse and specialized expertise that may be leveraged for better, faster drug development and more innovation.
Enhanced R&D: Currently, much of the R&D for large pharmaceutical companies is achieved through partnerships or investment in small biotechnology and research firms specializing in a single type of therapy. Whereas large pharmaceutical companies have expertise in marketing, navigating regulation, and undertaking trials of new drugs, small, research-focused firms can achieve greater advancements in medicine with smaller budgets. Furthermore, changes within firms brought about by a merger may increase innovation.
With increases in intellectual property and proprietary data that come from the merging of two companies, smaller research firms that work with the merged entity may have access to greater pools of information, enhancing the potential for innovation without increasing spending. This change not only raises the efficiency of the research being conducted in these small firms, but also increases the probability of a breakthrough without an increase in risk.
U.S. pharmaceutical merger enforcement has been fairly effective in forestalling anticompetitive combinations while allowing consumer welfare-enhancing transactions to go forward. Policy in this area should remain generally the same. Enforcers should continue to base enforcement decisions on sound economic theory fully supported by case-specific facts. Enforcement agencies could benefit, however, by placing a greater emphasis on efficiencies analysis. In particular, they should treat harms and efficiencies symmetrically (as recommended by Commissioner Wilson), and fully take into account likely resource reallocation and innovation-related efficiencies.
Questions concerning the role that economic theory should play in policy decisions are nothing new. Milton Friedman famously drew a distinction between “positive” and “normative” economics, notably arguing that theoretical models were valuable, despite their unrealistic assumptions. Kenneth Arrow and Gerard Debreu’s highly theoretical work on General Equilibrium Theory is widely acknowledged as one of the most important achievements of modern economics.
But for all their intellectual value and academic merit, the use of models to inform policy decisions is not uncontroversial. There is indeed a long and unfortunate history of influential economic models turning out to be poor depictions (and predictors) of real-world outcomes.
This raises a key question: should policymakers use economic models to inform their decisions and, if so, how? This post uses the economics of externalities to illustrate both the virtues and pitfalls of economic modeling. Throughout economic history, externalities have routinely been cited to support claims of market failure and calls for government intervention. However, as explained below, these fears have frequently failed to withstand empirical scrutiny.
Today, similar models are touted to support government intervention in digital industries. Externalities are notably said to prevent consumers from switching between platforms, allegedly leading to unassailable barriers to entry and deficient venture-capital investment. Unfortunately, as explained below, the models that underpin these fears are highly abstracted and far removed from underlying market realities.
Ultimately, this post argues that, while models provide a powerful way of thinking about the world, naïvely transposing them to real-world settings is misguided. This is not to say that models are useless—quite the contrary. Indeed, “falsified” models can shed powerful light on economic behavior that would otherwise prove hard to understand.
Fears surrounding economic externalities are as old as modern economics. For example, in the 1950s, economists routinely cited bee pollination as a source of externalities and, ultimately, market failure.
The basic argument was straightforward: Bees and orchards provide each other with positive externalities. Bees cross-pollinate flowers and orchards contain vast amounts of nectar upon which bees feed, thus improving honey yields. Accordingly, several famous economists argued that there was a market failure; bees fly where they please and farmers cannot prevent bees from feeding on their blossoming flowers—allegedly causing underinvestment in both. This led James Meade to conclude:
[T]he apple-farmer provides to the beekeeper some of his factors free of charge. The apple-farmer is paid less than the value of his marginal social net product, and the beekeeper receives more than the value of his marginal social net product.
If, then, apple producers are unable to protect their equity in apple-nectar and markets do not impute to apple blossoms their correct shadow value, profit-maximizing decisions will fail correctly to allocate resources at the margin. There will be failure “by enforcement.” This is what I would call an ownership externality. It is essentially Meade’s “unpaid factor” case.
It took more than 20 years and painstaking research by Steven Cheung to conclusively debunk these assertions. So how did economic agents overcome this “insurmountable” market failure?
The answer, it turns out, was extremely simple. While bees do fly where they please, the relative placement of beehives and orchards has a tremendous impact on both fruit and honey yields. This is partly because bees have a very limited mean foraging range (roughly 2-3km). This left economic agents with ample scope to prevent free-riding.
Using these natural sources of excludability, they built a web of complex agreements that internalize the symbiotic virtues of beehives and fruit orchards. To cite Steven Cheung’s research:
Pollination contracts usually include stipulations regarding the number and strength of the colonies, the rental fee per hive, the time of delivery and removal of hives, the protection of bees from pesticide sprays, and the strategic placing of hives. Apiary lease contracts differ from pollination contracts in two essential aspects. One is, predictably, that the amount of apiary rent seldom depends on the number of colonies, since the farmer is interested only in obtaining the rent per apiary offered by the highest bidder. Second, the amount of apiary rent is not necessarily fixed. Paid mostly in honey, it may vary according to either the current honey yield or the honey yield of the preceding year.
But what of neighboring orchards? Wouldn’t these entail a more complex externality (i.e., could one orchard free-ride on agreements concluded between other orchards and neighboring apiaries)? Apparently not:
Acknowledging the complication, beekeepers and farmers are quick to point out that a social rule, or custom of the orchards, takes the place of explicit contracting: during the pollination period the owner of an orchard either keeps bees himself or hires as many hives per area as are employed in neighboring orchards of the same type. One failing to comply would be rated as a “bad neighbor,” it is said, and could expect a number of inconveniences imposed on him by other orchard owners. This customary matching of hive densities involves the exchange of gifts of the same kind, which apparently entails lower transaction costs than would be incurred under explicit contracting, where farmers would have to negotiate and make money payments to one another for the bee spillover.
In short, not only did the bee/orchard externality model fail, but it failed to account for extremely obvious counter-evidence. Even a rapid flip through the Yellow Pages (or, today, a search on Google) would have revealed a vibrant market for bee pollination. The bee externalities, at least as presented in economic textbooks, were merely an economic “fable.” Unfortunately, they would not be the last.
Lighthouses provide another cautionary tale. Indeed, Henry Sidgwick, A.C. Pigou, John Stuart Mill, and Paul Samuelson all cited the externalities involved in the provision of lighthouse services as a source of market failure.
Here, too, the problem was allegedly straightforward. A lighthouse cannot prevent ships from free-riding on its services when they sail by it (i.e., it is mostly impossible to determine whether a ship has paid fees and to turn off the lighthouse if that is not the case). Hence there can be no efficient market for light dues (lighthouses were seen as a “public good”). As Paul Samuelson famously put it:
Take our earlier case of a lighthouse to warn against rocks. Its beam helps everyone in sight. A businessman could not build it for a profit, since he cannot claim a price from each user. This certainly is the kind of activity that governments would naturally undertake.
He added that:
[E]ven if the operators were able—say, by radar reconnaissance—to claim a toll from every nearby user, that fact would not necessarily make it socially optimal for this service to be provided like a private good at a market-determined individual price. Why not? Because it costs society zero extra cost to let one extra ship use the service; hence any ships discouraged from those waters by the requirement to pay a positive price will represent a social economic loss—even if the price charged to all is no more than enough to pay the long-run expenses of the lighthouse.
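Samuelson’s second claim can be restated in stylized terms (the notation here is mine, added purely for illustration, not his): let $v_i$ denote the value ship $i$ attaches to the lighthouse’s signal. With a marginal cost of provision of zero, any positive toll $p$ turns away every ship with $0 < v_i < p$, so the forgone value

\[
\text{DWL} \;=\; \sum_{i \,:\, 0 < v_i < p} v_i \;>\; 0
\]

is a pure social loss, even if $p$ is set just high enough to cover the lighthouse’s long-run costs. As we will see, Coase’s response turns on how large this loss actually was in practice.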
More than a century after it was first mentioned in economics textbooks, Ronald Coase finally laid the lighthouse myth to rest—rebutting Samuelson’s second claim in the process.
What piece of evidence had eluded economists for all those years? As Coase observed, contemporary economists had somehow overlooked the fact that large parts of the British lighthouse system were privately operated, and had been for centuries:
[T]he right to operate a lighthouse and to levy tolls was granted to individuals by Acts of Parliament. The tolls were collected at the ports by agents (who might act for several lighthouses), who might be private individuals but were commonly customs officials. The toll varied with the lighthouse and ships paid a toll, varying with the size of the vessel, for each lighthouse passed. It was normally a rate per ton (say 1/4d or 1/2d) for each voyage. Later, books were published setting out the lighthouses passed on different voyages and the charges that would be made.
In other words, lighthouses used a simple physical feature to create “excludability” and prevent free-riding. The main reason ships require lighthouses is to avoid hitting rocks when they make their way to a port. By tying port fees and light dues, lighthouse owners—aided by mild government-enforced property rights—could easily earn a return on their investments, thus disproving the lighthouse free-riding myth.
Ultimately, this meant that a large share of the British lighthouse system was privately operated throughout the 19th century, and this share would presumably have been more pronounced if government-run “Trinity House” lighthouses had not crowded out private investment:
The position in 1820 was that there were 24 lighthouses operated by Trinity House and 22 by private individuals or organizations. But many of the Trinity House lighthouses had not been built originally by them but had been acquired by purchase or as the result of the expiration of a lease.
Of course, this system was not perfect. Some ships (notably foreign ones that did not dock in the United Kingdom) might free-ride on this arrangement. It also entailed some level of market power: the ability to charge light dues meant that prices were higher than the “socially optimal” baseline of zero (the marginal cost of providing light is close to zero). It is worth noting, though, that tying port fees and light dues might also have decreased double marginalization, to the benefit of sailors.
Samuelson was particularly wary of this market power that went hand in hand with the private provision of public goods, including lighthouses:
Being able to limit a public good’s consumption does not make it a true-blue private good. For what, after all, are the true marginal costs of having one extra family tune in on the program? They are literally zero. Why then prevent any family which would receive positive pleasure from tuning in on the program from doing so?
However, as Coase explained, light fees represented only a tiny fraction of a ship’s costs. In practice, they were thus unlikely to affect market output meaningfully:
[W]hat is the gain which Samuelson sees as coming from this change in the way in which the lighthouse service is financed? It is that some ships which are now discouraged from making a voyage to Britain because of the light dues would in future do so. As it happens, the form of the toll and the exemptions mean that for most ships the number of voyages will not be affected by the fact that light dues are paid. There may be some ships somewhere which are laid up or broken up because of the light dues, but the number cannot be great, if indeed there are any ships in this category.
Samuelson’s critique also falls prey to the Nirvana Fallacy pointed out by Harold Demsetz: markets might not be perfect, but neither is government intervention. Market power and imperfect appropriability are the two (paradoxical) pitfalls of the former; “white elephants,” underinvestment, and lack of competition (and the information it generates) tend to stem from the latter.
Which of these solutions is superior, in each case, is an empirical question that early economists had simply failed to consider—assuming instead that market failure was systematic in markets that present prima facie externalities. In other words, models were taken as gospel without any circumspection about their relevance to real-world settings.
The Tragedy of the Commons
Externalities were also said to undermine the efficient use of “common pool resources,” such as grazing lands, common irrigation systems, and fisheries—resources where one agent’s use diminishes that of others, and where exclusion is either difficult or impossible.
The most famous formulation of this problem is Garrett Hardin’s highly influential (over 47,000 cites) “tragedy of the commons.” Hardin cited the example of multiple herdsmen occupying the same grazing ground:
The rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another … But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.
In more technical terms, each economic agent purportedly exerts an unpriced negative externality on the others, thus leading to the premature depletion of common pool resources. Hardin extended this reasoning to other problems, such as pollution and allegations of global overpopulation.
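Hardin’s logic can be illustrated with a back-of-the-envelope sketch (the numbers and the simple linear damage assumption are mine, purely for illustration; they are not drawn from Hardin’s article):

```python
# Toy illustration of the commons logic: each herdsman pockets the full gain
# from an extra animal but bears only a fraction of the damage to the pasture.
HERDSMEN = 10
GAIN_PER_ANIMAL = 1.0    # private benefit of adding one more animal
DAMAGE_PER_ANIMAL = 1.5  # total degradation that animal imposes on the shared pasture

# The damage is spread across all herdsmen, so each individual's calculus is:
private_payoff = GAIN_PER_ANIMAL - DAMAGE_PER_ANIMAL / HERDSMEN  # 0.85 > 0 -> add the animal
# ...while the group as a whole loses:
social_payoff = GAIN_PER_ANIMAL - DAMAGE_PER_ANIMAL              # -0.50 < 0 -> the commons degrades

print(f"private payoff per extra animal: {private_payoff:+.2f}")
print(f"social payoff per extra animal:  {social_payoff:+.2f}")
```

Because the private payoff stays positive while the social payoff is negative, each individually rational decision pushes the group toward depletion, unless (as Ostrom’s work discussed below shows) the users themselves devise rules that force each member to internalize the damage.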
Although Hardin hardly documented any real-world occurrences of this so-called tragedy, his policy prescriptions were unequivocal:
The most important aspect of necessity that we must now recognize, is the necessity of abandoning the commons in breeding. No technical solution can rescue us from the misery of overpopulation. Freedom to breed will bring ruin to all.
As with many other theoretical externalities, empirical scrutiny revealed that these fears were greatly overblown. In her Nobel-winning work, Elinor Ostrom showed that economic agents often found ways to mitigate these potential externalities markedly. For example, mountain villages often implement rules and norms that limit the use of grazing grounds and wooded areas. Likewise, landowners across the world often set up “irrigation communities” that prevent agents from overusing water.
Along similar lines, Julian Morris and I conjecture that informal arrangements and reputational effects might mitigate opportunistic behavior in the standard essential patent industry.
These bottom-up solutions are certainly not perfect. Many commons institutions fail—for example, Elinor Ostrom documents several problematic fisheries, groundwater basins and forests, although it is worth noting that government intervention was sometimes behind these failures. To cite but one example:
Several scholars have documented what occurred when the Government of Nepal passed the “Private Forest Nationalization Act” […]. Whereas the law was officially proclaimed to “protect, manage and conserve the forest for the benefit of the entire country”, it actually disrupted previously established communal control over the local forests. Messerschmidt (1986, p.458) reports what happened immediately after the law came into effect:
Nepalese villagers began freeriding — systematically overexploiting their forest resources on a large scale.
In any case, the question is not so much whether private institutions fail, but whether they do so more often than government intervention, be it regulation or property rights. In short, the “tragedy of the commons” is ultimately an empirical question: what works better in each case, government intervention, propertization, or emergent rules and norms?
More broadly, the key lesson is that it is wrong to blindly apply models while ignoring real-world outcomes. As Elinor Ostrom herself put it:
The intellectual trap in relying entirely on models to provide the foundation for policy analysis is that scholars then presume that they are omniscient observers able to comprehend the essentials of how complex, dynamic systems work by creating stylized descriptions of some aspects of those systems.
In 1985, Paul David published an influential paper arguing that market failures undermined competition between the QWERTY and Dvorak keyboard layouts. This version of history became a dominant narrative in the field of network economics, informing influential work by Joseph Farrell & Garth Saloner and by Jean Tirole.
The basic claim was that QWERTY users’ reluctance to switch toward the putatively superior Dvorak layout exerted a negative externality on the rest of the ecosystem (and a positive externality on other QWERTY users), thus preventing the adoption of a more efficient standard. As Paul David put it:
Although the initial lead acquired by QWERTY through its association with the Remington was quantitatively very slender, when magnified by expectations it may well have been quite sufficient to guarantee that the industry eventually would lock in to a de facto QWERTY standard. […]
Competition in the absence of perfect futures markets drove the industry prematurely into standardization on the wrong system — where decentralized decision making subsequently has sufficed to hold it.
Unfortunately, many of the above papers paid little to no attention to actual market conditions in the typewriter and keyboard layout industries. Years later, Stan Liebowitz and Stephen Margolis undertook a detailed analysis of the keyboard layout market. They almost entirely rejected the notion that QWERTY prevailed despite being an inferior standard:
Yet there are many aspects of the QWERTY-versus-Dvorak fable that do not survive scrutiny. First, the claim that Dvorak is a better keyboard is supported only by evidence that is both scant and suspect. Second, studies in the ergonomics literature find no significant advantage for Dvorak that can be deemed scientifically reliable. Third, the competition among producers of typewriters, out of which the standard emerged, was far more vigorous than is commonly reported. Fourth, there were far more typing contests than just the single Cincinnati contest. These contests provided ample opportunity to demonstrate the superiority of alternative keyboard arrangements. That QWERTY survived significant challenges early in the history of typewriting demonstrates that it is at least among the reasonably fit, even if not the fittest that can be imagined.
In short, there was little to no evidence supporting the view that QWERTY inefficiently prevailed because of network effects. The falsification of this narrative also weakens broader claims that network effects systematically lead to either excess momentum or excess inertia in standardization. Indeed, it is tempting to characterize all network industries with heavily skewed market shares as resulting from market failure. Yet the QWERTY/Dvorak story suggests that such a conclusion would be premature.
Killzones, Zoom, and TikTok
If you are still reading at this point, you might think that contemporary scholars would know better than to base calls for policy intervention on theoretical externalities. Alas, nothing could be further from the truth.
For instance, a recent paper by Sai Kamepalli, Raghuram Rajan and Luigi Zingales conjectures that the interplay between mergers and network externalities discourages the adoption of superior independent platforms:
If techies expect two platforms to merge, they will be reluctant to pay the switching costs and adopt the new platform early on, unless the new platform significantly outperforms the incumbent one. After all, they know that if the entering platform’s technology is a net improvement over the existing technology, it will be adopted by the incumbent after merger, with new features melded with old features so that the techies’ adjustment costs are minimized. Thus, the prospect of a merger will dissuade many techies from trying the new technology.
Although this key behavioral assumption drives the results of the theoretical model, the paper presents no evidence to support the contention that it occurs in real-world settings. Admittedly, the paper does present evidence of reduced venture capital investments after mergers involving large tech firms. But even on their own terms, this data simply does not support the authors’ behavioral assumption.
And this is no isolated example. Over the past couple of years, several scholars have called for more muscular antitrust intervention in networked industries. A common theme is that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in, thus raising barriers to entry for potential rivals (here, here, here).
But there are also countless counterexamples, where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.
Zoom is one of the most salient instances. As I have written previously:
To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.
Along similar lines, Geoffrey Manne and Alec Stapp have put forward a multitude of other examples, including the demise of Yahoo, the disruption of early instant-messaging applications and websites, and MySpace’s rapid decline. In all these cases, outcomes do not match the predictions of theoretical models.
More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and its powerful algorithm are the most likely explanations for its success.
While these developments certainly do not disprove network effects theory, they eviscerate the common belief in antitrust circles that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. As in the previous examples, the question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet this question is systematically omitted from most policy discussions.
My argument is not that models are without value. To the contrary, framing problems in economic terms—and simplifying them in ways that make them cognizable—enables scholars and policymakers to better understand where market failures might arise, and how these problems can be anticipated and solved by private actors. In other words, models alone cannot tell us that markets will fail, but they can direct inquiries and help us to understand why firms behave the way they do, and why markets (including digital ones) are organized in a given way.
In that respect, both the theoretical and empirical research cited throughout this post offer valuable insights for today’s policymakers.
For a start, as Ronald Coase argued in what is perhaps his most famous work, externalities (and market failure more generally) are a function of transaction costs. When these are low (relative to the value of a good), market failures are unlikely. This is perhaps clearest in the “Fable of the Bees” example. Given bees’ short foraging range, there were ultimately few real-world obstacles to writing contracts that internalized the mutual benefits of bees and orchards.
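Put in stylized Coasean terms (my notation, for illustration only): suppose internalizing an externality would generate joint surplus $S$ for the affected parties, while negotiating, monitoring, and enforcing the necessary agreement costs $T$. Private bargaining can then be expected whenever

\[
S > T,
\]

so market failure is only a live concern when transaction costs loom large relative to the value at stake.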
Perhaps more importantly, economic research sheds light on behavior that might otherwise be seen as anticompetitive. The rules and norms that bind farming/beekeeping communities, as well as users of common pool resources, could easily be analyzed as a cartel by naïve antitrust authorities. Yet externality theory suggests they play a key role in preventing market failure.
Along similar lines, mergers and acquisitions (as well as vertical integration, more generally) can reduce opportunism and other externalities that might otherwise undermine collaboration between firms (here, here and here). And much of the same is true for certain types of unilateral behavior. Tying video games to consoles (and pricing the console below cost) can help entrants overcome network externalities that might otherwise shield incumbents. Likewise, Google tying its proprietary apps to the open source Android operating system arguably enabled it to earn a return on its investments, thus overcoming the externality problem that plagues open source software.
All of this raises an issue that deserves far more attention than it currently receives in policy circles: authorities around the world are seeking to regulate the tech space. Draft legislation has notably been tabled in the United States, European Union and the United Kingdom. These draft bills would all make it harder for large tech firms to implement various economic hierarchies, including mergers and certain contractual arrangements.
This is highly paradoxical. If digital markets are indeed plagued by network externalities and high transaction costs, as critics allege, then preventing firms from adopting complex hierarchies—which have traditionally been seen as a way to solve externalities—is just as likely to exacerbate problems. In other words, like the economists of old cited above, today’s policymakers appear to be focusing too heavily on simple models that predict market failure, and far too little on the mechanisms that firms have put in place to thrive within this complex environment.
The bigger picture is that far more circumspection is required when using theoretical models in real-world policy settings. Indeed, as Harold Demsetz famously put it, the purpose of normative economics is not so much to identify market failures, but to help policymakers determine which of several alternative institutions will deliver the best outcomes for consumers:
This nirvana approach differs considerably from a comparative institution approach in which the relevant choice is between alternative real institutional arrangements. In practice, those who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient. Users of the comparative institution approach attempt to assess which alternative real institutional arrangement seems best able to cope with the economic problem […].
Lina Khan’s appointment as chair of the Federal Trade Commission (FTC) is a remarkable accomplishment. At 32 years old, she is the youngest chair ever. Her longstanding criticism of the Consumer Welfare Standard and her alignment with the neo-Brandeisian school of thought also make her appointment a major victory for proponents of those viewpoints.
Her appointment also comes as House Democrats are preparing to mark up five bills designed to regulate Big Tech and, in the process, vastly expand the FTC’s powers. That expansion may combine with Khan’s appointment in ways that lawmakers weighing the bills have not yet anticipated.
As things stand, the FTC under Khan’s leadership is likely to push for more extensive regulatory powers, akin to those held by the Federal Communications Commission (FCC). But these expansions would be trivial compared to what is proposed by many of the bills currently being prepared for a June 23 mark-up in the House Judiciary Committee.
The flagship bill—Rep. David Cicilline’s (D-R.I.) American Innovation and Choice Online Act—is described as a platform “non-discrimination” bill. I have already discussed what the real-world effects of this bill would likely be. Briefly, it would restrict platforms’ ability to offer richer, more integrated services, since those integrations could be challenged as “discrimination” that comes at the expense of would-be competitors’ offerings. Things like free shipping on Amazon Prime, pre-installed apps on iPhones, or even including links to Gmail and Google Calendar at the top of a Google Search page could be precluded under the bill’s terms; in each case, there is a potential competitor being undermined.
But this shifts the focus to the FTC itself, and implies that, under these proposals, the agency would have potentially enormous discretionary power to enforce the law selectively.
Companies found guilty of breaching the bill’s terms would be liable for civil penalties of up to 15 percent of annual U.S. revenue, a potentially significant sum. And though the Supreme Court recently ruled unanimously against the FTC’s power to levy civil fines unilaterally—a ruling the FTC opposed vociferously, and whose effect may yet be undone by other means—there are two scenarios through which the agency could end up with extraordinarily extensive control over the platforms covered by the bill.
The first path is through selective enforcement. What Singer above describes as a positive—the fact that enforcers would just let “benign” violations of the law be—would mean that the FTC itself would have tremendous scope to choose which cases it brings, and might do so for idiosyncratic, politicized reasons.
The second path would be to use these powers as leverage to get broad consent decrees to govern the conduct of covered platforms. These occur when a lawsuit is settled, with the defendant company agreeing to change its business practices under supervision of the plaintiff agency (in this case, the FTC). The Cambridge Analytica lawsuit ended this way, with Facebook agreeing to change its data-sharing practices under the supervision of the FTC.
This path would mean the FTC creating bespoke, open-ended regulation for each covered platform. Like the first path, this could create significant scope for discretionary decision-making by the FTC and potentially allow FTC officials to impose their own, non-economic goals on these firms. And it would require costly monitoring of each firm subject to bespoke regulation to ensure that no breaches of that regulation occurred.
“economic power as inextricably political. Power in industry is the power to steer outcomes. It grants outsized control to a few, subjecting the public to unaccountable private power—and thereby threatening democratic order. The account also offers a positive vision of how economic power should be organized (decentralized and dispersed), a recognition that forms of economic power are not inevitable and instead can be restructured.” [italics added]
Though I have focused on Cicilline’s flagship bill, others grant significant new powers to the FTC, as well. The data portability and interoperability bill doesn’t actually define what “data” is; it leaves it to the FTC to “define the term ‘data’ for the purpose of implementing and enforcing this Act.” And, as I’ve written elsewhere, data interoperability needs significant ongoing regulatory oversight to work at all, a responsibility that this bill also hands to the FTC. Even a move as apparently narrow as data portability will involve a significant expansion of the FTC’s powers and give it a greater role as an ongoing economic regulator.
In its June 21 opinion in NCAA v. Alston, a unanimous U.S. Supreme Court affirmed the 9th U.S. Circuit Court of Appeals and thereby upheld a district court injunction finding unlawful certain National Collegiate Athletic Association (NCAA) rules limiting the education-related benefits schools may make available to student athletes. The decision will come as no surprise to antitrust lawyers who heard the oral argument; the NCAA was portrayed as a monopsony cartel whose rules undermined competition by restricting compensation paid to athletes.
Alas, however, Alston demonstrates that seemingly “good facts” (including an apparently Scrooge-like defendant) can make very bad law. While superficially appearing to be a relatively straightforward application of Sherman Act rule of reason principles, the decision fails to come to grips with the relationship of the restraints before it to the successful provision of the NCAA’s joint venture product – amateur intercollegiate sports. What’s worse, Associate Justice Brett Kavanaugh’s concurring opinion further muddies the court’s murky jurisprudential waters by signaling his view that the NCAA’s remaining compensation rules are anticompetitive and could be struck down in an appropriate case (“it is not clear how the NCAA can defend its remaining compensation rules”). Prospective plaintiffs may be expected to take the hint.
In sum, the claim that antitrust may properly be applied to combat the alleged “exploitation” of college athletes by NCAA compensation regulations does not stand up to scrutiny. The NCAA’s rules that define the scope of amateurism may be imperfect, but there is no reason to think that empowering federal judges to second-guess and reformulate NCAA athletic compensation rules would yield a more socially beneficial (let alone optimal) outcome. (Believing that the federal judiciary can optimally reengineer core NCAA amateurism rules is a prime example of the Nirvana fallacy at work.) Furthermore, a Supreme Court decision affirming the 9th Circuit could do broad mischief by undermining case law that has accorded joint venturers substantial latitude to design the core features of their collective enterprise without judicial second-guessing.
Unfortunately, my concerns about a Supreme Court affirmance of the 9th Circuit were realized. Associate Justice Neil Gorsuch’s opinion for the court in Alston manifests a blinkered approach to the NCAA “monopsony” joint venture. To be sure, it cites and briefly discusses key Supreme Court joint venture holdings, including 2006’s Texaco v. Dagher. Nonetheless, it gives short shrift to the efficiency-based considerations that counsel presumptive deference to joint venture design rules that are key to the nature of a joint venture’s product.
As a legal matter, the court felt obliged to defer to key district court findings not contested by the NCAA—including that the NCAA enjoys “monopsony power” in the student athlete labor market, and that the NCAA’s restrictions in fact decrease student athlete compensation “below the competitive level.”
However, even conceding these points, the court could have, but did not, take note of and assess the role of the restrictions under review in helping engender the enormous benefits the NCAA confers upon consumers of its collegiate sports product. There is good reason to view those restrictions as an effort by the NCAA to address a negative externality that could diminish the attractiveness of the NCAA’s product for ultimate consumers, a result that would in turn reduce inter-brand competition.
[T]he NCAA’s consistent and growing popularity reflects a product—”amateur sports” played by students and identified with the academic tradition—that continues to generate enormous consumer interest. Moreover, it appears without dispute that the NCAA, while in control of the design of its own athletic products, has preserved their integrity as amateur sports, notwithstanding the commercial success of some of them, particularly Division I basketball and Football Subdivision football. . . . Over many years, the NCAA has continually adjusted its eligibility and participation rules to prevent colleges from pursuing their own interests—which certainly can involve “pay to play”—in ways that would conflict with the procompetitive aims of the collaboration. In this sense, the NCAA’s amateurism rules are a classic example of addressing negative externalities and free riding that often are inherent or arise in the collaboration context.
The use of contractual restrictions (vertical restraints) to counteract free riding and other negative externalities generated in manufacturer-distributor interactions is well-recognized by antitrust courts. Although the restraints at issue in NCAA (and many other joint venture situations) are horizontal in nature, not vertical, they may be just as important as other nonstandard contracts in aligning the incentives of member institutions to best satisfy ultimate consumers. Satisfying consumers, in turn, enhances inter-brand competition between the NCAA’s product and other rival forms of entertainment, including professional sports offerings.
Alan Meese made a similar point in a recent paper (discussing a possible analytical framework for the court’s then-imminent Alston analysis):
[Because] unchecked bidding for the services of student athletes could result in a market failure and suboptimal product quality, proof that the restraint reduces student athlete compensation below what an unbridled market would produce should not itself establish a prima facie case. Such evidence would instead be equally consistent with a conclusion that the restraint eliminates this market failure and restores compensation to optimal levels.
The court’s failure to address the externality justification was compounded by its handling of the rule of reason. First, in rejecting a truncated rule of reason with an initial presumption that the NCAA’s restraints involving student compensation are procompetitive, the court accepted that the NCAA’s monopsony power showed that its restraints “can (and in fact do) harm competition.” This assertion ignored the efficiency justification discussed above. As the Antitrust Economists’ Brief emphasized:
[A]cting more like regulators, the lower courts treated the NCAA’s basic product design as inherently anticompetitive [so did the Supreme Court], pushing forward with a full rule of reason that sent the parties into a morass of inquiries that were not (and were never intended to be) structured to scrutinize basic product design decisions and their hypothetical alternatives. Because that inquiry was unrestrained and untethered to any input or output restraint, the application of the rule of reason in this case necessarily devolved into a quasi-regulatory inquiry, which antitrust law eschews.
Having decided that a “full” rule of reason analysis is appropriate, the Supreme Court, in effect, imposed a “least restrictive means” test on the restrictions under review, while purporting not to do so. (“We agree with the NCAA’s premise that antitrust law does not require businesses to use anything like the least restrictive means of achieving legitimate business purposes.”) The court concluded that “it was only after finding the NCAA’s restraints ‘patently and inexplicably stricter than is necessary’ to achieve the procompetitive benefits the league had demonstrated that the district court proceeded to declare a violation of the Sherman Act.” Effectively, however, this statement deferred to the lower court’s second-guessing of the means employed by the NCAA to preserve consumer demand, which the lower court did without any empirical basis.
The Supreme Court also approved the district court’s rejection of the NCAA’s view of what amateurism requires. It stressed the district court’s findings that “the NCAA’s rules and restrictions on compensation have shifted markedly over time” (seemingly a reasonable reaction to changes in market conditions) and that the NCAA developed the restrictions at issue without any reference to “considerations of consumer demand” (a de facto regulatory mandate directed at the NCAA). The Supreme Court inexplicably dubbed these lower court actions “a straightforward application of the rule of reason.” These actions seem more like blind deference to rather arbitrary judicial second-guessing of the expert party with the greatest interest in satisfying consumer demand.
The Supreme Court ended its misbegotten commentary on “less restrictive alternatives” by first claiming that it agreed that “antitrust courts must give wide berth to business judgments before finding liability.” The court asserted that the district court honored this and other principles of judicial humility because it enjoined restraints on education-related benefits “only after finding that relaxing these restrictions would not blur the distinction between college and professional sports and thus impair demand – and only finding that this course represented a significantly (not marginally) less restrictive means of achieving the same procompetitive benefits as the NCAA’s current rules.” This lower court finding once again was not based on an empirical analysis of procompetitive benefits under different sets of rules. It was little more than the personal opinion of a judge, who lacked the NCAA’s knowledge of relevant markets and expertise. That the Supreme Court accepted it as an exercise in restrained judicial analysis is well nigh inexplicable.
The Antitrust Economists’ Brief, unlike the Supreme Court, enunciated the correct approach to judicial rewriting of core NCAA joint venture rules:
The institutions that are members of the NCAA want to offer a particular type of athletic product—an amateur athletic product that they believe is consonant with their primary academic missions. By doing so, as th[e] [Supreme] Court has [previously] recognized [in its 1984 NCAA v. Board of Regents decision], they create a differentiated offering that widens consumer choice and enhances opportunities for student-athletes. NCAA, 468 U.S. at 102. These same institutions have drawn lines that they believe balance their desire to foster intercollegiate athletic competition with their overarching academic missions. Both the district court and the Ninth Circuit have now said that they may not do so, unless they draw those lines differently. Yet neither the district court nor the Ninth Circuit determined that the lines drawn reduce the output of intercollegiate athletics or ascertained whether their judicially-created lines would expand that output. That is not the function of antitrust courts, but of legislatures.
Other Harms the Court Failed to Consider
Finally, the court failed to consider other harms that stem from a presumptive suspicion of NCAA restrictions on athletic compensation in general. The elimination of compensation rules should favor large well-funded athletic programs over others, potentially undermining “competitive balance” among schools. (Think of an NCAA March Madness tournament where “Cinderella stories” are eliminated, as virtually all the talented players have been snapped up by big name schools.) It could also, through the reallocation of income to “big name big sports” athletes who command a bidding premium, potentially reduce funding support for “minor college sports” that provide opportunities to a wide variety of student-athletes. This would disadvantage those athletes, undermine the future of “minor” sports, and quite possibly contribute to consumer disillusionment and unhappiness (think of the millions of parents of “minor sports” athletes).
What’s more, the existing rules allow many promising but non-superstar athletes to develop their skills over time, enhancing their ability to eventually compete at the professional level. (This may even be the case for some superstars, who may obtain greater long-term financial rewards by refining their talents and showcasing their skills for a year or two in college.) In addition, the current rules climate allows many student athletes who do not turn professional to develop personal connections that serve them well in their professional and personal lives, including connections derived from the “brand” of their university. (Think of wealthy and well-connected alumni who are ardent fans of their colleges’ athletic programs.) In a world without NCAA amateurism rules, the value of these experiences and connections could wither, to the detriment of athletes and consumers alike. (Consistent with my conclusion, economists Richard McKenzie and Dwight Lee have argued against the proposition that “college athletes are materially ‘underpaid’ and are ‘exploited’”.)
This “parade of horribles” might appear unlikely in the short term. Nevertheless, in the course of time, the inability of the NCAA to control the attributes of its product, due to a changed legal climate, could make it all too real. This is especially the case in light of Justice Kavanaugh’s strong warning that other NCAA compensation restrictions are likely indefensible. (As he bluntly put it, venerable college sports “traditions alone cannot justify the NCAA’s decision to build a massive money-raising enterprise on the backs of student athletes who are not fairly compensated. . . . The NCAA is not above the law.”)
The Supreme Court’s misguided Alston decision fails to weigh the powerful efficiency justifications for the NCAA’s amateurism rules. This holding virtually invites other lower courts to ignore efficiencies and to second-guess decisions that go to the heart of the NCAA’s joint venture product offering. The end result is likely to reduce consumer welfare and, quite possibly, the welfare of many student athletes as well. One would hope that Congress, if it chooses to address NCAA rules, will keep these dangers well in mind. A statutory change not directed solely at the NCAA, creating a rebuttable presumption of legality for restraints that go to the heart of a lawful joint venture, may merit serious consideration.
U.S. antitrust law is designed to protect competition, not individual competitors. That simple observation lies at the heart of the Consumer Welfare Standard that for years has been the cornerstone of American antitrust policy. An alternative enforcement policy focused on protecting individual firms would discourage highly efficient and innovative conduct by a successful entity, because such conduct, after all, would threaten to weaken or displace less efficient rivals. The result would be markets characterized by lower overall levels of business efficiency and slower innovation, yielding less consumer surplus and, thus, reduced consumer welfare, as compared to the current U.S. antitrust system.
The U.S. Supreme Court gets it. In Reiter v. Sonotone (1979), the court stated plainly that “Congress designed the Sherman Act as a ‘consumer welfare prescription.’” Consistent with that understanding, the court subsequently stressed in Spectrum Sports v. McQuillan (1993) that “[t]he purpose of the [Sherman] Act is not to protect businesses from the working of the market, it is to protect the public from the failure of the market.” This means that a market leader does not have an antitrust duty to assist its struggling rivals, even if it is flouting a regulatory duty to deal. As a unanimous Supreme Court held in Verizon v. Trinko (2004): “Verizon’s alleged insufficient assistance in the provision of service to rivals [in defiance of an FCC-imposed regulatory obligation] is not a recognized antitrust claim under this Court’s existing refusal-to-deal precedents.”
Unfortunately, the New York State Senate seems to have lost sight of the importance of promoting vigorous competition and consumer welfare, not competitor welfare, as the hallmark of American antitrust jurisprudence. The chamber on June 7 passed the ill-named 21st Century Antitrust Act (TCAA), legislation that, if enacted and signed into law, would seriously undermine consumer welfare and innovation. Let’s take a quick look at the TCAA’s parade of horribles.
The TCAA makes it unlawful for any person “with a dominant position in the conduct of any business, trade or commerce, in any labor market, or in the furnishing of any service in this state to abuse that dominant position.”
A “dominant position” may be established through “direct evidence” that “may include, but is not limited to, the unilateral power to set prices, terms, power to dictate non-price contractual terms without compensation; or other evidence that a person is not constrained by meaningful competitive pressures, such as the ability to degrade quality without suffering reduction in profitability. In labor markets, direct evidence of a dominant position may include, but is not limited to, the use of non-compete clauses or no-poach agreements, or the unilateral power to set wages.”
The “direct evidence” language is unbounded and hopelessly vague. What does it mean to not be “constrained by meaningful competitive pressures”? Such an inherently subjective characterization would give prosecutors carte blanche to find dominance. What’s more, since “no court shall require definition of a relevant market” to find liability in the face of “direct evidence,” multiple competitors in a vigorously competitive market might be found “dominant.” Thus, for example, the ability of a firm to use non-compete clauses or no-poach agreements for efficient reasons (such as protecting against competitor free-riding on investments in human capital or competitor theft of trade secrets) would be undermined, even if it were commonly employed in a market featuring several successful and aggressive rivals.
“Indirect evidence” based on market share also may establish a dominant position under the TCAA. Dominance would be presumed if a competitor possessed a market “share of forty percent or greater of a relevant market as a seller” or “thirty percent or greater of a relevant market as a buyer”.
Those numbers are far below the market-share ranges needed to find a “monopoly” under Section 2 of the Sherman Act. Moreover, given the inevitable error associated with both market definitions and share allocations—which, in any event, may fluctuate substantially—potential arbitrariness would attend share-based dominance calculations. Most significantly, of course, market shares may say very little about actual market power. Where entry barriers are low and substitutes wait in the wings, a temporarily large market share may not bestow any ability on a “dominant” firm to exercise power over price or to exclude competitors.
In short, it would be trivially easy for non-monopolists possessing very little, if any, market power to be characterized as “dominant” under the TCAA, based on “direct evidence” or “indirect evidence.”
Once dominance is established, what constitutes an abuse of dominance? The TCAA states that an “abuse of a dominant position may include, but is not limited to, conduct that tends to foreclose or limit the ability or incentive of one or more actual or potential competitors to compete, such as leveraging a dominant position in one market to limit competition in a separate market, or refusing to deal with another person with the effect of unnecessarily excluding or handicapping actual or potential competitors.” In addition, “[e]vidence of pro-competitive effects shall not be a defense to abuse of dominance and shall not offset or cure competitive harm.”
This language is highly problematic. Effective rivalrous competition by its very nature involves behavior by a firm or firms that may “limit the ability or incentive” of rival firms to compete. For example, a company’s introduction of a new cost-reducing manufacturing process, or of a patented product improvement that far surpasses its rivals’ offerings, is the essence of competition on the merits. Nevertheless, it may limit the ability of its rivals to compete, in violation of the TCAA. Moreover, so-called “monopoly leveraging” typically generates substantial efficiencies, and very seldom undermines competition (see here, for example), suggesting that (at best) leveraging theories would generate enormous false positives in prosecution. The TCAA’s explicit direction that procompetitive effects not be considered in abuse of dominance cases further detracts from principled enforcement; it denigrates competition, the very condition that American antitrust law has long sought to promote.
Put simply, under the TCAA, “dominant” firms engaging in normal procompetitive conduct could be held liable (and no doubt frequently would be held liable, given their inability to plead procompetitive justifications) for “abuses of dominance.” To top it off, firms convicted of abusing a dominant position would be liable for treble damages. As such, the TCAA would strongly disincentivize aggressive competitive behavior that raises consumer welfare.
The TCAA’s negative ramifications would be far-reaching. By embracing a civil law “abuse of dominance” paradigm, the TCAA would run counter to a longstanding U.S. common law antitrust tradition that largely gives free rein to efficiency-seeking competition on the merits. It would thereby place a new and unprecedented strain on antitrust federalism. In a digital world where the effects of commercial conduct frequently are felt throughout the United States, the TCAA’s attack on efficient welfare-inducing business practices would have national (if not international) repercussions.
The TCAA would alter business planning calculations for the worse and could interfere directly in the setting of national antitrust policy through congressional legislation and federal antitrust enforcement initiatives. It would also signal to foreign jurisdictions that the United States’ long-expressed staunch support for reliance on the Consumer Welfare Standard as the touchstone of sound antitrust enforcement is no longer fully operative.
Judge Richard Posner is reported to have once characterized state antitrust enforcers as “barnacles on the ship of federal antitrust” (see here). The TCAA is more like a deadly torpedo aimed squarely at consumer welfare and the American common law antitrust tradition. Let us hope that the New York State Assembly takes heed and promptly rejects the TCAA.
The Democratic leadership of the House Judiciary Committee has leaked the approach it plans to take to revise U.S. antitrust law and enforcement, with a particular focus on digital platforms.
Broadly speaking, the bills would: raise fees for larger mergers and increase appropriations to the FTC and DOJ; require data portability and interoperability; declare that large platforms can’t own businesses that compete with other businesses that use the platform; effectively ban large platforms from making any acquisitions; and generally declare that large platforms cannot preference their own products or services.
All of these are ideas that have been discussed before. They are very much in line with the EU’s approach to competition, which places more regulation-like burdens on big businesses, and which is introducing a Digital Markets Act that mirrors the Democrats’ proposals. Some Republicans are reportedly supportive of the proposals, which is surprising, since they would mean giving broad, discretionary powers to antitrust authorities that are controlled by Democrats who take an expansive view of antitrust enforcement as a way to achieve their other social and political goals. The proposals may also be unpopular with consumers if, for example, they end up prohibiting popular features like the integration of Maps into relevant Google Search results.
The multi-bill approach here suggests that the committee is trying to throw as much at the wall as possible to see what sticks. It may reflect a lack of confidence among the proposers in their ability to get their proposals through wholesale, especially given that Amy Klobuchar’s CALERA bill in the Senate creates an alternative that, while still highly interventionist, does not create ex ante regulation of the Internet the same way these proposals do.
In general, the bills are misguided for three main reasons.
One, they seek to make digital platforms into narrow conduits for other firms to operate on, ignoring the value created by platforms curating their own services by, for example, creating quality controls on entry (as Apple does on its App Store) or by integrating their services with related products (like, say, Google adding events from Gmail to users’ Google Calendars).
Two, they ignore the procompetitive effects of digital platforms extending into each other’s markets and competing with each other there, in ways that often lead to far more intense competition—and better outcomes for consumers—than if the only firms that could compete with the incumbent platform were small startups.
Three, they ignore the importance of incentives for innovation. Platforms invest in new and better products when they can make money from doing so, and limiting their ability to do that means weakened incentives to innovate. Startups and their founders and investors are driven, in part, by the prospect of being acquired, often by the platforms themselves. Making those acquisitions more difficult, or even impossible, means removing one of the key ways startup founders can exit their firms, and hence one of the key rewards and incentives for starting an innovative new business.
The flagship bill, introduced by Antitrust Subcommittee Chairman David Cicilline (D-R.I.), establishes a definition of “covered platform” used by several of the other bills. The measures would apply to platforms that have at least 500,000 U.S.-based users and a market capitalization of more than $600 billion, and that are deemed a “critical trading partner” with the ability to restrict or impede the access that a “dependent business” has to its users or customers.
Cicilline’s bill would bar these covered platforms from being able to promote their own products and services over the products and services of competitors who use the platform. It also defines a number of other practices that would be regarded as discriminatory, including:
Restricting or impeding “dependent businesses” from being able to access the platform or its software on the same terms as the platform’s own lines of business;
Conditioning access or status on purchasing other products or services from the platform;
Using user data to support the platform’s own products in ways not extended to competitors;
Restricting the platform’s commercial users from using or accessing data generated on the platform from their own customers;
Restricting platform users from uninstalling software pre-installed on the platform;
Restricting platform users from providing links to facilitate business off of the platform;
Preferencing the platform’s own products or services in search results or rankings;
Interfering with how a dependent business prices its products;
Impeding a dependent business’ users from connecting to services or products that compete with those offered by the platform; and
Retaliating against users who raise concerns with law enforcement about potential violations of the act.
On a basic level, these would prohibit lots of behavior that is benign and that can improve the quality of digital services for users. Apple pre-installing a Weather app on the iPhone would, for example, run afoul of these rules, and the rules as proposed could prohibit iPhones from coming with pre-installed apps at all. Instead, users would have to manually download each app themselves, if indeed Apple were allowed to include the App Store itself pre-installed on the iPhone, given that it competes with other would-be app stores.
Apart from the obvious reduction in the quality of services and convenience for users that this would involve, this kind of conduct (known as “self-preferencing”) is usually procompetitive. For example, self-preferencing allows platforms to compete with one another by using their strength in one market to enter a different one; Google’s Shopping results in the Search page increase the competition that Amazon faces, because it presents consumers with a convenient alternative when they’re shopping online for products. Similarly, Amazon’s purchase of the video-game streaming service Twitch, and the self-preferencing it does to encourage Amazon customers to use Twitch and support content creators on that platform, strengthens the competition that rivals like YouTube face.
It also helps innovation, because it gives firms a reason to invest in services that would otherwise be unprofitable for them. Google invests in Android, and gives much of it away for free, because it can bundle Google Search into the OS, and make money from that. If Google could not self-preference Google Search on Android, the open source business model simply wouldn’t work—it wouldn’t be able to make money from Android, and would have to charge for it in other ways that may be less profitable and hence give it less reason to invest in the operating system.
This behavior can also increase innovation by the competitors of these companies, both by prompting them to improve their products (as, for example, Google Android did with Microsoft’s mobile operating system offerings) and by growing the size of the customer base for products of this kind. For example, video games published by console manufacturers (like Nintendo’s Zelda and Mario games) are often blockbusters that grow the overall size of the user base for the consoles, increasing demand for third-party titles as well.
Sponsored by Rep. Pramila Jayapal (D-Wash.), this bill would make it illegal for covered platforms to control lines of business that pose “irreconcilable conflicts of interest,” enforced through civil litigation powers granted to the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ).
Specifically, the bill targets lines of business that create “a substantial incentive” for the platform to advantage its own products or services over those of competitors that use the platform, or to exclude or disadvantage competing businesses from using the platform. The FTC and DOJ could potentially order that platforms divest lines of business that violate the act.
This targets similar conduct as the previous bill, but involves the forced separation of different lines of business. It also appears to go even further, seemingly implying that companies like Google could not even develop services like Google Maps or Chrome because their existence would create such “substantial incentives” to self-preference them over the products of their competitors.
Apart from the straightforward loss of innovation and product developments this would involve, requiring every tech company to be narrowly focused on a single line of business would substantially entrench Big Tech incumbents, because it would make it impossible for them to extend into adjacent markets to compete with one another. For example, Apple could not develop a search engine to compete with Google under these rules, and Amazon would be forced to sell its video-streaming services that compete with Netflix and YouTube.
Introduced by Rep. Hakeem Jeffries (D-N.Y.), this bill would bar covered platforms from making essentially any acquisitions at all. To be excluded from the ban on acquisitions, the platform would have to present “clear and convincing evidence” that the acquired business does not compete with the platform for any product or service, does not pose a potential competitive threat to the platform, and would not in any way enhance or help maintain the acquiring platform’s market position.
So this proposal would probably reduce investment in U.S. startups, since it makes it more difficult for them to be acquired, and would therefore reduce innovation. It would also reduce inter-platform competition by banning deals that allow firms to move into new markets, like the acquisition of Beats that helped Apple to build a Spotify competitor, or the deals that helped Google, Microsoft, and Amazon build cloud-computing services that all compete with each other. It could also reduce competition faced by old industries, by preventing tech companies from buying firms that enable them to move into new markets—like Amazon’s acquisitions of health-care companies that it has used to build a health-care offering. Even Walmart’s acquisition of Jet.com, which it has used to build an Amazon competitor, could have been banned under this law if Walmart had had a higher market cap at the time.
Another of the bills addresses data portability and interoperability. Under its terms, covered platforms would be required to allow third parties to transfer data to their users or, with the user’s consent, to a competing business. It also would require platforms to facilitate compatible and interoperable communications with competing businesses. The bill directs the FTC to establish technical committees to promulgate the standards for portability and interoperability.
Mandated interoperability can make digital services more buggy and unreliable, by requiring that they be built in a more “open” way that may be more prone to unanticipated software mismatches. A good example is Windows versus iOS: Windows is far more interoperable with third-party software than iOS is, but it tends to be less stable as a result, and users often prefer the closed, stable system.
Interoperability requirements also entail ongoing regulatory oversight, to make sure data is being provided to third parties reliably. It’s difficult to build an app around another company’s data without assurance that the data will be available when users want it. For a requirement as broad as this bill’s, that could mean setting up quite a large new de facto regulator.
In the UK, Open Banking (an interoperability requirement imposed on British retail banks) has suffered from significant service outages, and targets a level of uptime that many developers complain is too low for them to build products around. Nor has Open Banking yet led to any obvious competition benefits.
A bill that mirrors language in the Endless Frontier Act, recently passed by the U.S. Senate, would significantly raise filing fees for the largest mergers. Rather than the current cap of $280,000 for mergers valued at more than $500 million, the bill, sponsored by Rep. Joe Neguse (D-Colo.), would assess fees of $2.25 million for mergers valued at more than $5 billion; $800,000 for those valued between $2 billion and $5 billion; and $400,000 for those between $1 billion and $2 billion.
Smaller mergers would actually see their filing fees cut: from $280,000 to $250,000 for those between $500 million and $1 billion; from $125,000 to $100,000 for those between $161.5 million and $500 million; and from $45,000 to $30,000 for those less than $161.5 million.
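To make the arithmetic of the proposed schedule concrete, here is a minimal illustrative sketch in Python. The lookup structure and the function name are ours, not anything drawn from the bill text; the thresholds and dollar figures are simply those quoted in the two preceding paragraphs, and the treatment of values exactly at the cutoffs is an assumption.

```python
# Illustrative sketch of the proposed merger filing-fee schedule described above.
# The tiers and amounts are the figures quoted in the surrounding paragraphs;
# the function name and exact boundary handling are our own assumptions.

def proposed_filing_fee(deal_value_usd: float) -> int:
    """Return the proposed filing fee for a merger of the given transaction value."""
    if deal_value_usd > 5_000_000_000:      # more than $5 billion
        return 2_250_000
    if deal_value_usd > 2_000_000_000:      # $2 billion to $5 billion
        return 800_000
    if deal_value_usd > 1_000_000_000:      # $1 billion to $2 billion
        return 400_000
    if deal_value_usd > 500_000_000:        # $500 million to $1 billion
        return 250_000
    if deal_value_usd > 161_500_000:        # $161.5 million to $500 million
        return 100_000
    return 30_000                           # below $161.5 million

# A $6 billion merger would pay $2.25 million rather than the current $280,000 cap,
# while a $300 million merger's fee would fall from $125,000 to $100,000.
print(proposed_filing_fee(6_000_000_000))   # 2250000
print(proposed_filing_fee(300_000_000))     # 100000
```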
In addition, the bill would appropriate $418 million to the FTC and $252 million to the DOJ’s Antitrust Division for Fiscal Year 2022. Most people in the antitrust world are generally supportive of more funding for the FTC and DOJ, although whether the extra money is actually a good thing depends on how the agencies spend it.
It’s hard to object if the funding goes toward deepening the agencies’ capacities and knowledge: hiring and retaining higher-quality staff with salaries that are more competitive with those offered by the private sector, and devoting greater effort to studying the effects of the antitrust laws and past cases on the economy. If, instead, it goes toward broadening the agencies’ activities, enabling them to pursue a more aggressive enforcement agenda and to administer whatever of the above proposals make it into law, then it could be very harmful.
It’s a telecom tale as old as time: industry gets a prime slice of radio spectrum and falls in love with it, only to take it for granted. Then, faced with the reapportionment of that spectrum, it proceeds to fight tooth and nail (and law firm) to maintain the status quo.
In that way, the decision by the Intelligent Transportation Society of America (ITSA) and the American Association of State Highway and Transportation Officials (AASHTO) to seek judicial review of the Federal Communications Commission’s (FCC) order reassigning the 5.9GHz band was right out of central casting. But rather than simply asserting that the FCC’s order was arbitrary, ITSA foreshadowed many of the arguments that it intends to make against the order.
There are three arguments of note, and should ITSA win on the merits of any of those arguments, it would mark a significant departure from the way spectrum is managed in the United States.
First, ITSA asserts that it is the U.S. Department of Transportation (DOT), by virtue of its role as the nation’s transportation regulator, and not the FCC, that retains authority to regulate radio spectrum as it pertains to DOT programs. Of course, this notion is absurd on its face. Congress mandated that the FCC act as the exclusive regulator of non-federal uses of wireless spectrum. This leaves the FCC free to—in the words of the Communications Act—“encourage the provision of new technologies and services to the public” and to “provide to all Americans” the best communications networks possible.
In contrast, other federal agencies with some amount of allocated spectrum each focus exclusively on a particular mission, without regard to the broader concerns of the country (including uses by sister agencies or the states). That’s why, rather than allocate the spectrum directly to DOT, the statute directs the FCC to consider allocating spectrum for Intelligent Transportation Systems and to establish the rules for their spectrum use. The statute directs the FCC to consult with the DOT, but leaves final decisions to the FCC.
Today’s crowded airwaves make it impossible to allocate spectrum for 5G, Wi-Fi 6, and other innovative uses without somehow impacting spectrum used by a federal agency. Accepting the ITSA position would fundamentally alter the FCC’s role relative to other agencies with an interest in the disposition of spectrum, rendering the FCC a vestigial regulatory backwater subject to non-expert veto. As a matter of policy, this would effectively prevent the United States from meeting the growing challenges of our exponentially increasing demand for wireless access.
It would also put us at a tremendous disadvantage relative to other countries. International coordination of wireless policy has become critical in the global economy, with our global supply chains and wireless equipment manufacturers dependent on global standards to drive economies of scale and interoperability around the globe. At the last World Radio Conference in 2019, interagency spectrum squabbling significantly undermined the U.S. negotiation efforts. If agencies actually had veto power over the FCC’s spectrum decisions, the United States would have no way to create a coherent negotiating position, let alone to advocate effectively for our national interests.
Second, though relatedly, ITSA asserts that the FCC’s engineers failed to appropriately evaluate safety impacts and interference concerns. It’s hard to see how this could be the case, given both the massive engineering record and the FCC’s globally recognized expertise in spectrum. As a general rule, the FCC leads the world in spectrum engineering (there is a reason things like mobile service and Wi-Fi started in the United States). No other federal agency (including DOT) has such extensive, varied, and lengthy experience with interference analysis. This allows the FCC to develop broadly applicable standards to protect all emergency communications. Every emergency first responder relies on this expertise every day that they use wireless communications to save lives. Here again, we see the wisdom in Congress delegating to a single expert agency the task of finding the right balance to meet all our wireless public-safety needs.
Third, the petition ambitiously asks the court to set aside all parts of the order, with the exception of the one portion that ITSA likes: freeing the top 30MHz of the band for use by C-V2X on a permanent basis. Given their other arguments, this assertion strains credulity. Either the FCC makes the decisions, or the DOT does. Giving federal agencies veto power over FCC decisions would be bad enough. Allowing litigants to play federal agencies against each other so they can mix and match results would produce chaos and/or paralysis in spectrum policy.
In short, ITSA is asking the court to fundamentally redefine the scope of FCC authority to administer spectrum when other federal agencies are involved; to undermine deference owed to FCC experts; and to do all of this while also holding that the FCC was correct on the one part of the order with which the complainants agree. This would make future progress in wireless technology effectively impossible.
We don’t let individual states decide which side of the road to drive on, or whether red or some other color traffic light means stop, because traffic rules only work when everybody follows the same rules. Wireless policy can only work if one agency makes the rules. Congress says that agency is the FCC. The courts (and other agencies) need to remember that.
John Carreyrou’s marvelous book Bad Blood chronicles the rise and fall of Theranos, the one-time Silicon Valley darling that was revealed to be a house of cards. Theranos’s Svengali-like founder, Elizabeth Holmes, convinced scores of savvy business people (mainly older men) that her company was developing a machine that could detect all manner of maladies from a small quantity of a patient’s blood. Turns out it was a fraud.
I had a couple of recurring thoughts as I read Bad Blood. First, I kept thinking about how Holmes’s fraud might impair future medical innovation. Something like Theranos’s machine would eventually be developed, I figured, but Holmes’s fraud would likely set things back by making investors leery of blood-based, multi-disease diagnostics.
I also had a thought about the causes of Theranos’s spectacular failure. A key problem, it seemed, was that the company tried to do too many things at once: develop diagnostic technologies, design an elegant machine (Holmes was obsessed with Steve Jobs and insisted that Theranos’s machine resemble a sleek Apple device), market the product, obtain regulatory approval, scale the operation by getting Theranos machines in retail chains like Safeway and Walgreens, and secure third-party payment from insurers.
A thought that didn’t occur to me while reading Bad Blood was that a multi-disease blood diagnostic system would soon be developed but would be delayed, or possibly even precluded from getting to market, by an antitrust enforcement action based on things the developers did to avoid the very problems that doomed Theranos.
Sadly, that’s where we are with the Federal Trade Commission’s misguided challenge to the merger of Illumina and Grail.
Founded in 1998, San Diego-based Illumina is a leading provider of products used in genetic sequencing and genomic analysis. Illumina produces “next generation sequencing” (NGS) platforms that are used for a wide array of applications (genetic tests, etc.) developed by itself and other companies.
In 2015, Illumina founded Grail for the purpose of developing a blood test that could detect cancer in asymptomatic individuals—the “holy grail” of cancer diagnosis. Given the superior efficacy and lower cost of treatments for early- versus late-stage cancers, success by Grail could save millions of lives and billions of dollars.
Illumina created Grail as a separate entity in which it initially held a controlling interest (having provided the bulk of Grail’s $100 million Series A funding). Legally separating Grail in this fashion, rather than running it as an Illumina division, offered a number of benefits. It limited Illumina’s liability for Grail’s activities, enabling Grail to take greater risks. It mitigated the Theranos problem of managers’ being distracted by too many tasks: Grail managers could concentrate exclusively on developing a viable cancer-screening test, while Illumina’s management continued focusing on that company’s core business. It made it easier for Grail to attract talented managers, who would rather come in as corporate officers than as division heads. (Indeed, Grail landed Jeff Huber, a high-profile Google executive, as its initial CEO.) Structuring Grail as a majority-owned subsidiary also allowed Illumina to attract outside capital, with the prospect of raising more money in the future by selling new Grail stock to investors.
In 2017, Grail did exactly that, issuing new shares to investors in exchange for $1 billion. While this capital infusion enabled the company to move forward with its promising technologies, the creation of new shares meant that Illumina no longer held a controlling interest in the firm. Its ownership interest dipped below 20 percent and now stands at about 14.5 percent of Grail’s voting shares.
Setting up Grail so as to facilitate outside capital formation and attract top managers who could focus single-mindedly on product development has paid off. Grail has now developed a blood test that, when processed on Illumina’s NGS platform, can accurately detect a number of cancers in asymptomatic individuals. Grail predicts that this “liquid biopsy,” called Galleri, will eventually be able to detect up to 50 cancers before physical symptoms manifest. Grail is also developing other blood-based cancer tests, including one that confirms cancer diagnoses in patients suspected to have cancer and another designed to detect cancer recurrence in patients who have undergone treatment.
Grail now faces a host of new challenges. In addition to continuing to develop its tests, Grail needs to:
Engage in widespread testing of its cancer-detection products on up to 50 different cancers;
Process and present the information from its extensive testing in formats that will be acceptable to regulators;
Navigate the pre-market regulatory approval process in different countries across the globe;
Secure commitments from third-party payors (governments and private insurers) to provide coverage for its tests;
Develop means of manufacturing its products at scale;
Create and implement measures to ensure compliance with FDA’s Quality System Regulation (QSR), which governs virtually all aspects of medical device production (design, testing, production, process controls, quality assurance, labeling, packaging, handling, storage, distribution, installation, servicing, and shipping); and
Market its tests to hospitals and health-care professionals.
These steps are all required to secure widespread use of Grail’s tests. And, importantly, such widespread use will actually improve the quality of the tests. Grail’s tests analyze the DNA in a patient’s blood to look for methylation patterns that are known to be associated with cancer. In essence, the tests work by comparing the methylation patterns in a test subject’s DNA against a database of genomic data collected from large clinical studies. With enough comparison data, the tests can indicate not only the presence of cancer but also where in the body the cancer signal is coming from. And because Grail’s tests use machine learning to hone their algorithms in response to new data collected from test usage, the greater the use of Grail’s tests, the more accurate, sensitive, and comprehensive they become.
To assist with the various tasks needed to achieve speedy and widespread use of its tests, Grail decided to reunite with Illumina. In September 2020, the companies entered a merger agreement under which Illumina would acquire the 85.5 percent of Grail voting shares it does not already own for cash and stock worth $7.1 billion and additional contingent payments of $1.2 billion to Grail’s non-Illumina shareholders.
Recombining with Illumina will allow Grail—which has appropriately focused heretofore solely on product development—to accomplish the tasks now required to get its tests to market. Illumina has substantial laboratory capacity that Grail can access to complete the testing needed to refine its products and establish their effectiveness. As the leading global producer of NGS platforms, Illumina has unparalleled experience in navigating the regulatory process for NGS-related products, producing and marketing those products at scale, and maintaining compliance with complex regulations like FDA’s QSR. With nearly 3,000 international employees located in 26 countries, it has obtained regulatory authorizations for NGS-based tests in more than 50 jurisdictions around the world. It also has long-standing relationships with third-party payors, health systems, and laboratory customers. Grail, by contrast, has never obtained FDA approval for any products, has never manufactured NGS-based tests at scale, has only a fledgling regulatory affairs team, and has far less extensive contacts with potential payors and customers. By remaining focused on its key objective (unlike Theranos), Grail has achieved product-development success. Recombining with Illumina will now enable it, expeditiously and efficiently, to deploy its products across the globe, generating user data that will help improve the products going forward.
In addition to these benefits, the combination of Illumina and Grail will eliminate a problem that occurs when producers of complementary products each operate in markets that are not fully competitive: double marginalization. When sellers of products that are used together each possess some market power due to a lack of competition, their uncoordinated pricing decisions may result in less surplus for each of them and for consumers of their products. Combining so that they can coordinate pricing will leave them and their customers better off.
Unlike a producer participating in a competitive market, a producer that faces little competition can enhance its profits by raising its price above its incremental cost. But there are limits on its ability to do so. As the well-known monopoly pricing model shows, even a monopolist has a “profit-maximizing price” beyond which any incremental price increase would lose money. Raising price above that level would hurt both consumers and the monopolist.
When consumers are deciding whether to purchase products that must be used together, they assess the final price of the overall bundle. This means that when two sellers of complementary products both have market power, there is an above-cost, profit-maximizing combined price for their products. If the complement sellers individually raise their prices so that the combined price exceeds that level, they will reduce their own aggregate welfare and that of their customers.
This unfortunate situation is likely to occur when the complement producers possessing market power are separate companies that cannot coordinate their pricing. In setting its individual price, each separate firm will attempt to capture as much surplus for itself as possible. This will cause the combined price to rise above the profit-maximizing level. If they could unite, the complement sellers would coordinate their prices so that the combined price was lower and the sellers’ aggregate profits higher.
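To see the logic in stylized form, consider a minimal textbook sketch, set out in LaTeX below. It is our own illustration, assuming a simple linear demand curve for the bundle; it is not drawn from the parties’ filings or the merger record.

```latex
% Stylized double-marginalization example for complements (illustrative only).
\documentclass{article}
\usepackage{amsmath}
\begin{document}

Suppose consumers buy the two complements only as a bundle, with demand
$Q = a - (p_1 + p_2)$ and constant marginal costs $c_1$ and $c_2$.

\textbf{Separate pricing.} Firm $i$ chooses $p_i$ to maximize
$(p_i - c_i)\,(a - p_1 - p_2)$, taking the other firm's price as given.
The first-order conditions give $p_i = \tfrac{1}{2}(a - p_j + c_i)$, so the
equilibrium combined price is
\[
  p_1 + p_2 = \frac{2a + c_1 + c_2}{3}.
\]

\textbf{Integrated pricing.} A merged firm chooses the bundle price $P$ to
maximize $(P - c_1 - c_2)(a - P)$, which yields
\[
  P^{*} = \frac{a + c_1 + c_2}{2}.
\]

\textbf{Comparison.} The gap between the two is
\[
  (p_1 + p_2) - P^{*} = \frac{a - c_1 - c_2}{6} > 0
\]
whenever output is positive ($a > c_1 + c_2$). Uncoordinated complement
sellers thus set a higher combined price and sell less than an integrated
firm would, reducing both consumer surplus and the sellers' joint profits.

\end{document}
```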
Here, Grail and Illumina provide complementary products (cancer-detection tests and the NGS platforms on which they are processed), and each faces little competition. If they price separately, their aggregate prices are likely to exceed the profit-maximizing combined price for the cancer test and NGS platform access. If they combine into a single firm, that firm would maximize its profits by lowering prices so that the aggregate test/platform price is the profit-maximizing combined price. This would obviously benefit consumers.
In light of the social benefits the Grail/Illumina merger offers—speeding up and lowering the cost of getting Grail’s test approved and deployed at scale, enabling improvement of the test with more extensive user data, eliminating double marginalization—one might expect policymakers to cheer the companies’ recombination. The FTC, however, is trying to block it. In late March, the commission brought an action claiming that the merger would violate Section 7 of the Clayton Act by substantially reducing competition in a line of commerce.
The FTC’s theory is that recombining Illumina and Grail will impair competition in the market for “multi-cancer early detection” (MCED) tests. The commission asserts that the combined company would have both the opportunity and the motivation to injure rival producers of MCED tests.
The opportunity to do so would stem from the fact that MCED tests must be processed on NGS platforms, which are produced exclusively by Illumina. Illumina could charge Grail’s rivals or their customers higher prices for access to its NGS platforms (or perhaps deny access altogether) and could withhold the technical assistance rivals would need to secure both regulatory approval of their tests and coverage by third-party payors.
But why would Illumina take this tack, given that it would be giving up profits on transactions with producers and users of other MCED tests? The commission asserts that the losses a combined Illumina/Grail would suffer in the NGS platform market would be more than offset by gains stemming from reduced competition in the MCED test market. Thus, the combined company would have a motive, as well as an opportunity, to cause anticompetitive harm.
There are multiple problems with the FTC’s theory. As an initial matter, the market the commission claims will be impaired doesn’t exist. There is no MCED test market for the simple reason that there are no commercializable MCED tests. If allowed to proceed, the Illumina/Grail merger may create such a market by facilitating the approval and deployment of the first MCED test. At present, however, there is no such market, and the chances of one ever emerging will be diminished if the FTC succeeds in blocking the recombination of Illumina and Grail.
Because there is no existing market for MCED tests, the FTC’s claim that a combined Illumina/Grail would have a motivation to injure MCED rivals—potential consumers of Illumina’s NGS platforms—is rank speculation. The commission has no idea what profits Illumina would earn from NGS platform sales related to MCED tests, what profits Grail would earn on its own MCED tests, and how the total profits of the combined company would be affected by impairing opportunities for rival MCED test producers.
In the only relevant market that does exist—the cancer-detection market—there can be no question about the competitive effect of an Illumina/Grail merger: It would enhance competition by speeding the creation of a far superior offering that promises to save lives and substantially reduce health-care costs.
There is yet another problem with the FTC’s theory of anticompetitive harm. The commission’s concern that a recombined Illumina/Grail would foreclose Grail’s rivals from essential NGS platforms and needed technical assistance is obviated by Illumina’s commitments. Specifically, Illumina has irrevocably offered current and prospective oncology customers 12-year contract terms that would guarantee them the same access to Illumina’s sequencing products that they now enjoy, with no price increase. Indeed, the offered terms obligate Illumina not only to refrain from raising prices but also to lower them by at least 43% by 2025 and to provide regulatory and technical assistance requested by Grail’s potential rivals. Illumina’s continued compliance with its firm offer will be subject to regular audits by an independent auditor.
In the end, then, the FTC’s challenge to the Illumina/Grail merger is unjustified. The initial separation of Grail from Illumina encouraged the managerial focus and capital accumulation needed for successful test development. Recombining the two firms will now expedite and lower the costs of the regulatory approval and commercialization processes, permitting Grail’s tests to be widely used, which will enhance their quality. Bringing Grail’s tests and Illumina’s NGS platforms within a single company will also benefit consumers by eliminating double marginalization. Any foreclosure concerns are entirely speculative and are obviated by Illumina’s contractual commitments.
In light of all these considerations, one wonders why the FTC challenged this merger (and on a 4-0 vote) in the first place. Perhaps it was the populist forces from left and right that are pressuring the commission to generally be more aggressive in policing mergers. Some members of the commission may also worry, legitimately, that if they don’t act aggressively on a vertical merger, Congress will amend the antitrust laws in a deleterious fashion. But the commission has picked a poor target. This particular merger promises tremendous benefit and threatens little harm. The FTC should drop its challenge and encourage its European counterparts to do the same.
 If you don’t have time for Carreyrou’s book (and you should make time if you can), HBO’s Theranos documentary is pretty solid.
 This ability is market power. In a perfectly competitive market, any firm that charges an above-cost price will lose sales to rivals, who will vie for business by lowering their prices down to the level of their cost.
 Under the model, this is the price that emerges at the output level where the producer’s marginal revenue equals its marginal cost.
Economist Josh Hendrickson asserts that the Jones Act is properly understood as a Coasean bargain. In this view, the law serves as a subsidy to the U.S. maritime industry through its restriction of waterborne domestic commerce to vessels that are constructed in U.S. shipyards, U.S.-flagged, and U.S.-crewed. Such protectionism, it is argued, provides the government with ready access to these assets, rather than taking precious time to build them up during times of conflict.
We are skeptical of this characterization.
Although there is an implicit bargain behind the Jones Act, its relationship to the work of Ronald Coase is unclear. Coase is best known for his theorem on the use of bargains and exchanges to reduce negative externalities. But the negative externality that the Jones Act attempts to address is not apparent. While the law may be more efficient or effective than having the government build up its own shipbuilding capacity, vessels, and crews in times of war, that is rather different from addressing an externality. The Jones Act may reflect an implied exchange between the domestic maritime industry and the government, but there does not appear to be anything particularly Coasean about it.
Rather, close scrutiny reveals this arrangement between government and industry to be a textbook example of policy failure and rent-seeking run amok. The Jones Act is not a bargain, but a rip-off, with costs and benefits completely out of balance.
The Jones Act and National Defense
For all of the talk of the Jones Act’s critical role in national security, its contributions underwhelm. Ships offer a case in point. In times of conflict, the U.S. military’s primary sources of transport are not Jones Act vessels but government-owned ships in the Military Sealift Command and Ready Reserve Force fleets. These are further supplemented by the 60 non-Jones Act U.S.-flag commercial ships enrolled in the Maritime Security Program, a subsidy arrangement by which ships are provided $5 million per year in exchange for the government’s right to use them in time of need.
In contrast, Jones Act ships are used only sparingly. That’s understandable, as removing these vessels from domestic trade would leave a void in the country’s transportation needs not easily filled.
The law’s contributions to domestic shipbuilding are similarly meager, if not outright counterproductive. A mere two to three large, oceangoing commercial ships are delivered by U.S. shipyards per year. That’s not per shipyard, but all U.S. shipyards combined.
Given the vastly uncompetitive state of domestic shipbuilding—a predictable consequence of handing the industry a captive domestic market via the Jones Act’s U.S.-built requirement—there is little appetite for what these shipyards produce. As Hendrickson himself points out, the domestic-build provision serves to “discourage shipbuilders from innovating and otherwise pursuing cost-saving production methods since American shipbuilders do not face international competition.” We could not agree more.
What keeps U.S. shipyards active and available to meet the military’s needs is not work for the Jones Act commercial fleet but rather government orders. A 2015 Maritime Administration report found that such business accounts for 70 percent of revenue for the shipbuilding and repair industry. A 2019 American Enterprise Institute study concluded that, among U.S. shipbuilders that construct both commercial and military ships, Jones Act vessels accounted for less than 5 percent of all shipbuilding orders.
If the Jones Act makes any contribution of note at all, it is in the supply of mariners. Of those needed to crew surge sealift ships during times of war, the Jones Act fleet is estimated to account for 29 percent. But here, too, the Jones Act is a double-edged sword. By increasing the cost of ships to four to five times the world price, the law’s U.S.-built requirement results in a smaller fleet with fewer mariners employed than would otherwise be the case. That’s particularly noteworthy given government calculations that there is a deficit of roughly 1,800 mariners to crew its fleet in the event of a sustained sealift operation.
Beyond its ruinous impact on the competitiveness of domestic shipbuilding, the Jones Act has had other deleterious consequences for national security. The increased cost of waterborne transport, or its outright impossibility in the case of liquefied natural gas and propane, results in reduced self-reliance for critical energy supplies. This is a sufficiently significant issue that members of the National Security Council unsuccessfully sought a long-term Jones Act waiver in 2019. The law also means fewer redundancies and less flexibility in the country’s transportation system when responding to crises, both natural and manmade. Waivers of the Jones Act can be issued, but this highly politicized process eats up precious days when time is of the essence. All of these factors merit consideration in the overall national security calculus.
To review, the Jones Act’s opaque and implicit subsidy—doled out via protectionism—results in anemic and uncompetitive shipbuilding, few ships available in time of war, and fewer mariners than would otherwise be the case without its U.S.-built requirement. And it has other consequences for national security that are not only underwhelming but plainly negative. Little wonder that Hendrickson concedes it is unclear whether U.S. maritime policy—of which the Jones Act plays a foundational role—achieves its national security goals.
The toll exacted in exchange for the Jones Act’s limited benefits, meanwhile, is considerable. According to a 2019 OECD study, the law’s repeal would increase domestic value added by $19-$64 billion. Incredibly, that estimate may actually understate matters. Not included in this estimate are related costs such as environmental degradation, increased congestion and highway maintenance, and retaliation from U.S. trade partners during free-trade agreement negotiations due to U.S. unwillingness to liberalize the Jones Act.
Against such critiques, Hendrickson posits that substantial cost savings are illusory due to immigration and other U.S. laws. But how big a barrier such laws would pose is unclear. It’s worth considering, for example, that cruise ships with foreign crews are able to visit multiple U.S. ports so long as a foreign port is also included on the voyage. The granting of Jones Act waivers, meanwhile, has enabled foreign ships to transport cargo between U.S. ports in the past despite U.S. immigration laws.
Would Chinese-flagged and crewed barges be able to engage in purely domestic trade on the Mississippi River absent the Jones Act? Almost certainly not. But it seems perfectly plausible that foreign ships already sailing between U.S. ports as part of international voyages—a frequent occurrence—could engage in cabotage movements without hiring U.S. crews. Take, for example, APL’s Eagle Express X route that stops in Los Angeles, Honolulu, and Dutch Harbor as well as Asian ports. Without the Jones Act, it’s reasonable to believe that ships operating on this route could transport goods from Los Angeles to Honolulu before continuing on to foreign destinations.
But if the Jones Act fails to deliver U.S. national security benefits while imposing substantial costs, how to explain its continued survival? Hendrickson avers that the law’s longevity reflects its utility. We believe, however, that the answer lies in the application of public choice theory. Simply put, the law’s costs are both opaque and dispersed across the vast expanse of the U.S. economy, while its benefits are highly concentrated. The law’s de facto subsidy is also vastly oversupplied, given that the vast majority of vessels under its protection are smaller craft such as tugboats and barges with trivial value to the country’s sealift capability. This has spawned a lobby aggressively dedicated to the Jones Act’s preservation. Washington, D.C. is home to numerous industry groups and labor organizations that regard the law’s maintenance as critical, but not a single one that views its repeal as a top priority.
It’s instructive in this regard that all four senators from Alaska and Hawaii are strong Jones Act supporters despite their states being disproportionately burdened by the law. This seeming oddity is explained by these states also being disproportionately home to maritime interest groups that support the law. In contrast, Jones Act critics Sen. Mike Lee and the late Sen. John McCain both hailed from landlocked states home to few maritime interest groups.
Disagreements, but also Common Ground
For all of our differences with Hendrickson, however, there is substantial common ground. We agree that the Jones Act is suboptimal policy, that its ability to achieve its goals is unclear, and that its U.S.-built requirement is particularly ripe for removal. Our differences lie mostly in the scale of the gains to be realized from the law’s reform or repeal. Either way, there is no reason to maintain the failed status quo. The Jones Act should be repealed and replaced with targeted, transparent, and explicit subsidies to meet the country’s sealift needs. Both the country’s economy and its national security would be rewarded—richly so, in our opinion—by such a policy change.
Virtually all countries in the world have adopted competition laws over the last three decades. In a recent Mercatus Center research paper, I argue that the spread of these laws has both benefits and risks. The abstract of my paper states:
The United States stood virtually alone when it enacted its first antitrust statute in 1890. Today, almost all nations have adopted competition laws (the term used in most other nations), and US antitrust agencies interact with foreign enforcers on a daily basis. This globalization of antitrust is becoming increasingly important to the economic welfare of many nations, because major businesses (in particular, massive digital platforms like Google and Facebook) face growing antitrust scrutiny by multiple enforcement regimes worldwide. As such, the United States should take the lead in encouraging adoption of antitrust policies, here and abroad, that are conducive to economic growth and innovation. Antitrust policies centered on promoting consumer welfare would be best suited to advancing these desirable aims. Thus, the United States should oppose recent efforts (here and abroad) to turn antitrust into a regulatory system that seeks to advance many objectives beyond consumer welfare. American antitrust enforcers should also work with like-minded agencies—and within multilateral organizations such as the International Competition Network and the Organisation for Economic Cooperation and Development—to promote procedural fairness and the rule of law in antitrust enforcement.
A brief summary of my paper follows.
Widespread calls for “reform” of the American antitrust laws are based on the false premises that (1) U.S. economic concentration has increased excessively and competition has diminished in recent decades; and (2) U.S. antitrust enforcers have failed to effectively enforce the antitrust laws (the consumer welfare standard is sometimes cited as the culprit to blame for “ineffective” antitrust enforcement). In fact, sound economic scholarship, some of it cited in chapter 6 of the 2020 Economic Report of the President, debunks these claims. In reality, modern U.S. antitrust enforcement under the economics-based consumer welfare standard (despite being imperfect and subject to error costs) has done a good job overall of promoting competitive and efficient markets.
The adoption of competition laws by foreign nations was promoted by the U.S. Government. The development of European competition law in the 1950s, and its incorporation into the treaties that laid the foundation for the European Union (EU), was particularly significant. The EU administrative approach to antitrust, based on civil law (as compared to the U.S. common law approach), has greatly influenced the contours of most new competition laws. The EU, like the U.S., focuses on anticompetitive joint conduct, single-firm conduct, and mergers. EU enforcement (carried out through the European Commission’s Directorate-General for Competition) initially relied more heavily on formal agency guidance than American antitrust law does, but over the last 20 years it has increasingly incorporated an economic effects-based, consumer welfare-centric approach. Nevertheless, EU enforcers still pay greater attention to the welfare of competitors than their American counterparts do.
In recent years, the EU prosecutions of digital platforms have begun to adopt a “precautionary antitrust” perspective, which seeks to prevent potential monopoly abuses in their incipiency by sanctioning business conduct without showing that it is causing any actual or likely consumer harm. What’s more, the EU’s recently adopted “Digital Markets Act” for the first time imposes ex ante competition regulation of platforms. These developments reflect a move away from a consumer welfare approach. On the plus side, the EU (unlike the U.S.) subjects state-owned or controlled monopolies to liability for anticompetitive conduct and forbids anticompetitive government subsidies that seriously distort competition (“state aids”).
Developing and former communist-bloc countries rapidly enacted and implemented competition laws over the last three decades. Many newly minted competition agencies suffer from poor institutional capacity. The U.S. Government and the EU have worked to enhance the quality and consistency of competition enforcement in these jurisdictions by providing technical assistance and training.
Various institutions support efforts to improve competition law enforcement and develop support for a “competition culture.” The International Competition Network (ICN), established in 2001, is a “virtual network” comprising almost all competition agencies. The ICN focuses on discrete projects aimed at procedural and substantive competition law convergence through the development of consensual, nonbinding “best practices” recommendations and reports. It also provides a significant role for nongovernmental advisers from the business, legal, economic, consumer, and academic communities, as well as for experts from other international organizations. ICN member agency staff are encouraged to communicate with each other about the fundamentals of investigations and evaluations and to use ICN-generated documents and podcasts to support training. The application of economic analysis to case-specific facts has been highlighted in ICN work product. The Organisation for Economic Cooperation and Development (OECD) and the World Bank (both of which carry out economics-based competition policy research) have joined with the ICN in providing national competition agencies (both new and well established) with the means to advocate effectively for procompetitive, economically beneficial government policies. ICN and OECD “toolkits” provide strategies for identifying and working to dislodge (or not enact) anticompetitive laws and regulations that harm the economy.
While a fair degree of convergence has been realized, substantive uniformity among competition law regimes has not been achieved. This is not surprising, given differences among jurisdictions in economic development, political organization, economic philosophy, history, and cultural heritage—all of which may help generate a multiplicity of policy goals. In addition to consumer welfare, different jurisdictions’ competition laws seek to advance support for small and medium-sized businesses, fairness and equality, public interest factors, and empowerment of historically disadvantaged persons, among other outcomes. These many goals may not take center stage in the evaluation of most proposed mergers or restrictive business arrangements, but they may affect the handling of particular matters that raise national sensitivities tied to those goals.
The spread of competition law worldwide has generated various tangible benefits. These include consensus support for combating hard core welfare-reducing cartels, fruitful international cooperation among officials dedicated to a pro-competition mission, and support for competition advocacy aimed at dismantling harmful government barriers to competition.
There are, however, six other factors that raise questions regarding whether competition law globalization has been cost-beneficial overall: (1) effective welfare-enhancing antitrust enforcement is stymied in jurisdictions where the rule of law is weak and private property is poorly protected; (2) high enforcement error costs (particularly in jurisdictions that consider factors other than consumer welfare) may undermine the procompetitive features of antitrust enforcement efforts; (3) enforcement demands by multiple competition authorities substantially increase the costs imposed on firms that are engaging in multinational transactions; (4) differences among national competition law rules create complications for national agencies as they seek to have their laws vindicated while maintaining good cooperative relationships with peer enforcers; (5) anticompetitive rent-seeking by less efficient rivals may generate counterproductive prosecutions of successful companies, thereby disincentivizing welfare-inducing business behavior; and (6) recent developments around the world suggest that antitrust policy directed at large digital platforms (and perhaps other dominant companies as well) may be morphing into welfare-inimical regulation. These factors are discussed at greater length in my paper.
One cannot readily quantify the positive and negative welfare effects of the consequences of competition law globalization. Accordingly, one cannot state with any degree of confidence whether globalization has been “good” or “bad” overall in terms of economic welfare.
The extent to which globalized competition law will be a boon to consumers and the global economy will depend entirely on the soundness of public policy decision-making. The U.S. Government should take the lead in advancing a consumer welfare-centric competition policy at home and abroad. It should work with multilateral institutions and engage in bilateral and regional cooperation to support the rule of law, due process, and antitrust enforcement centered on the consumer welfare standard.
The European Commission recently issued a formal Statement of Objections (SO) in which it charges Apple with breaching EU antitrust rules. In a nutshell, the commission argues that Apple prevents app developers—in this case, Spotify—from using in-app purchase systems (IAPs) other than Apple’s own, and from steering users toward cheaper payment options available elsewhere. This, the commission says, results in higher prices for consumers in the audio streaming and ebook/audiobook markets.
More broadly, the commission claims that Apple’s App Store rules may distort competition in markets where Apple competes with rival developers (as Apple Music competes with Spotify). This explains why the anticompetitive concerns first raised by Spotify regarding the App Store rules have now expanded to Apple’s e-books, audiobooks, and mobile payments platforms.
However, underlying market realities cast doubt on the commission’s assessment. Competition from Google Play and other distribution channels makes it difficult to state unequivocally that the relevant market should be limited to Apple products. Likewise, the conduct under investigation arguably solves several problems relating to platform dynamics and to consumers’ privacy and security.
Should the relevant market be narrowed to iOS?
An important first question is whether there is a distinct, antitrust-relevant market for “music streaming apps distributed through the Apple App Store,” as the EC posits.
This market definition is surprising, given that it is considerably narrower than the one suggested by even the most enforcement-minded scholars. For instance, Damien Geradin and Dimitrios Katsifis—lawyers for app developers opposed to Apple—define the market as “that of app distribution on iOS devices, a two-sided transaction market on which Apple has a de facto monopoly.” Similarly, a report by the Dutch competition authority concluded that the relevant market was limited to the iOS App Store, due to the lack of interoperability with other systems.
The commission’s decisional practice has been anything but consistent in this space. In the Apple/Shazam and Apple/Beats cases, it did not place competing mobile operating systems and app stores in separate relevant markets. Conversely, in the Google Android decision, the commission found that the Android OS and Apple’s iOS—including Google Play and Apple’s App Store—did not compete in the same relevant market. The Spotify SO seems to adopt this latter definition, narrowing it even further to music streaming services.
However, this narrow definition raises several questions. Market definition is ultimately about identifying the competitive constraints that the firm under investigation faces. As Gregory Werden puts it: “the relevant market in an antitrust case […] identifies the competitive process alleged to be harmed.”
In that regard, there is clearly some competition between Apple’s App Store, Google Play, and other app stores (whether this is sufficient to place them in the same relevant market is an empirical question).
This view is supported by the vast number of online posts comparing Android and Apple devices and advising consumers on their purchasing options. Moreover, the growth of high-end Android devices that compete more directly with the iPhone has reinforced competition between the two firms. Likewise, Apple has moved down the value chain; the iPhone SE, priced at $399, competes with mid-range Android devices.
App developers have also suggested they view Apple and Android as alternatives. They take into account technical differences to decide between the two, meaning that these two platforms compete with each other for developers.
All of this suggests that the App Store may be part of a wider market for the distribution of apps and services that includes Google Play and other app stores—though this is ultimately an empirical question (i.e., it depends on the degree of competition between the two platforms).
If the market were defined this way, Apple would not even be close to holding a dominant position—a prerequisite for European competition intervention. Indeed, Apple sold only 27.43% of smartphones in March 2021. Similarly, only 30.41% of smartphones in use ran iOS as of March 2021. This is well below the lowest market share at which dominance has been found in a European abuse-of-dominance case—39.7%, in the British Airways decision.
The sense that Apple and Android compete for users and developers is reinforced by recent price movements. Apple dropped its App Store commission fees from 30% to 15% in November 2020 and Google followed suit in March 2021. This conduct is consistent with at least some degree of competition between the platforms. It is worth noting that other firms, notably Microsoft, have so far declined to follow suit (except for gaming apps).
Barring further evidence, neither Apple’s market share nor its behavior appears consistent with the commission’s narrow market definition.
Are Apple’s IAP system rules and anti-steering provisions abusive?
The commission’s case rests on the idea that Apple leverages its IAP system to raise the costs of rival app developers:
“Apple’s rules distort competition in the market for music streaming services by raising the costs of competing music streaming app developers. This in turn leads to higher prices for consumers for their in-app music subscriptions on iOS devices. In addition, Apple becomes the intermediary for all IAP transactions and takes over the billing relationship, as well as related communications for competitors.”
However, expropriating rents from these developers is not nearly as attractive as it might seem. The report of the Dutch competition authority notes that “attracting and maintaining third-party developers that increase the value of the ecosystem” is essential for Apple. Indeed, users join a specific platform because it provides them with a wide range of applications they can use on their devices, and developers join because of the platform’s users. Hence, the loss of users on either or both sides reduces the value provided by the Apple App Store. Following this logic, it would make no sense for Apple to systematically expropriate developers. This might partly explain why Apple’s fees are only 15% to 30%, when in principle they could be much higher.
It is also worth noting that Apple’s curated App Store and IAP have several redeeming virtues. Apple offers “a highly curated App Store where every app is reviewed by experts and an editorial team helps users discover new apps every day.” While this has arguably turned the App Store into a relatively closed platform, it provides users with the assurance that the apps they find there will meet a standard of security and trustworthiness.
As noted by the Dutch competition authority, “one of the reasons why the App Store is highly valued is because of the strict review process. Complaints about malware spread via an app downloaded in the App Store are rare.” Apple provides users with a special degree of privacy and security. Indeed, Apple stopped more than $1.5 billion in potentially fraudulent transactions in 2020, proving that the security protocols are not only necessary, but also effective. In this sense, the App Store Review Guidelines are considered the first line of defense against fraud and privacy breaches.
It is also worth noting that Apple charges only a nominal fee for iOS developer kits and no fees for in-app advertising. The IAP is thus essential for Apple to monetize the App Store and to cover the costs of running it (Apple does make money on device sales, but that revenue is likely constrained by competition with Android). When someone downloads Spotify from the App Store, Apple does not get paid, but Spotify does get a new client. Thus, while independent developers bear the cost of the app fees, Apple bears the costs and risks of running the platform itself.
For instance, Apple’s App Store Team is divided into smaller teams: the Editorial Design team, the Business Operations team, and the Engineering R&D team. These teams each have employees, budgets, and resources for which Apple needs to pay. If the revenues stopped, one can assume that Apple would have less incentive to sustain all these teams that preserve the App Store’s quality, security, and privacy parameters.
Indeed, the IAP system itself provides value to the Apple App Store. Instead of charging every app it distributes, Apple takes a share of the income from only some of them. As a result, large developers that generate in-app sales contribute to the maintenance of the platform, while smaller ones are still offered to consumers without having to contribute economically. This boosts the App Store’s diversity and its supply of digital goods and services.
If Apple were forced to adopt another system, it could start charging higher prices for access to its interface and tools, potentially discriminating against smaller developers. Alternatively, Apple could increase the prices of its handsets, imposing higher costs on consumers who do not purchase digital goods. There are thus no apparent alternatives to the current IAP that satisfy the App Store’s goals in the same way.
As the Apple Review Guidelines emphasize, “for everything else there is always the open Internet.” Netflix and Spotify have dropped the subscription options from their apps, and they remain among the most downloaded apps on iOS. Using the IAP system is therefore not a prerequisite for success in Apple’s ecosystem, and developers remain free to bypass it by directing purchases to the open web.
The commission’s case against Apple rests on shaky foundations. Not only is the market definition extremely narrow—ignoring competition from Android, among others—but the challenged behavior has a clear efficiency-enhancing rationale. Of course, both of these critiques ultimately boil down to empirical questions that the commission will have to address before it reaches a final decision. In the meantime, the jury is still out.
AT&T’s $102 billion acquisition of Time Warner in 2019 will go down in M&A history as an exceptionally ill-advised transaction, resulting in the loss of tens of billions of dollars of shareholder value. It should also go down in history as an exceptionally ill-chosen target of antitrust intervention. The U.S. Department of Justice, with support from many academic and policy commentators, asserted with confidence that the vertical combination of these content and distribution powerhouses would result in an entity that could exercise market power to the detriment of competitors and consumers.
The chorus of condemnation continued with vigor even after the DOJ’s loss in court and AT&T’s consummation of the transaction. With AT&T’s May 17 announcement that it will unwind the two-year-old acquisition and therefore abandon its strategy to integrate content and distribution, it is clear these predictions of impending market dominance were unfounded.
This widely shared overstatement of antitrust risk derives from a simple but fundamental error: regulators and commentators were looking at the wrong market.
The DOJ’s Antitrust Case against the Transaction
The business case for the AT&T/Time Warner transaction was straightforward: it promised to generate synergies by combining a leading provider of wireless, broadband, and satellite television services with a leading supplier of video content. The DOJ’s antitrust case against the transaction was similarly straightforward: the combined entity would have the ability to foreclose “must have” content from other “pay TV” (cable and satellite television) distributors, resulting in adverse competitive effects.
This foreclosure strategy was expected to take two principal forms. First, AT&T could temporarily withhold (or threaten to withhold) content from rival distributors absent payment of a higher carriage fee, which would then translate into higher fees for subscribers. Second, AT&T could permanently withhold content from rival distributors, who would then lose subscribers to AT&T’s DirecTV satellite television service, further enhancing AT&T’s market power.
Many commentators, both in the trade press and in significant portions of the scholarly community, characterized the transaction as posing a high-risk threat to competitive conditions in the pay TV market. These assertions reflected the view that the new entity would occupy a bottleneck position over video-content distribution in the pay TV market and would exercise that power to impose one-sided terms to the detriment of content distributors and consumers.
Notwithstanding this bevy of endorsements, the DOJ’s case was rejected by the district court, and the decision was upheld by the D.C. appellate court. The district judge concluded that the DOJ had failed to show that the combined entity would pose any credible threat of withholding “must have” content from distributors. A key reason: the carriage fees AT&T would forgo if it withheld content were so high, and the migration of subscribers from rival pay TV services so speculative, that withholding would represent an obviously irrational business strategy. In short: no sophisticated business party would ever take AT&T’s foreclosure threat seriously, in which case the DOJ’s predictions of market power were insufficiently compelling to justify the use of government power to block the transaction.
The Fundamental Flaws in the DOJ’s Antitrust Case
The logical and factual infirmities of the DOJ’s foreclosure hypothesis have been extensively and ably covered elsewhere and I will not repeat that analysis. Following up on my previous TOTM commentary on the transaction, I would like to emphasize the point that the DOJ’s case against the transaction was flawed from the outset for two more fundamental reasons.
False Assumption #1
The assumption that the combined entity could withhold so-called “must have” content to cause significant and lasting competitive injury to rival distributors flies in the face of market realities. Content is an abundant, renewable, and mobile resource. There are few entry barriers to the content industry: a commercially promising idea will likely attract capital, which will in turn secure the necessary equipment and personnel for production purposes. Any rival distributor can access a rich menu of valuable content from a plethora of sources, both domestically and worldwide, each of which can provide new content, as required. Even if the combined entity held a license to distribute purportedly “must have” content, that content would be up for sale (more precisely, re-licensing) to the highest bidder as soon as the applicable contract term expired. This is not mere theorizing: it is a widely recognized feature of the entertainment industry.
False Assumption #2
Even assuming the combined entity could wield a portfolio of “must have” content to secure a dominant position in the pay TV market and raise content acquisition costs for rival pay TV services, it still would lack any meaningful pricing power in the relevant consumer market. The reason: significant portions of the viewing population do not want any pay TV or only want dramatically “slimmed-down” packages. Instead, viewers increasingly consume content primarily through video-streaming services—a market in which platforms such as Amazon and Netflix already enjoyed leading positions at the time of the transaction. Hence, even accepting the DOJ’s theory that the combined entity could somehow monopolize the pay TV market consisting of cable and satellite television services, the theory still fails to show any reasonable expectation of anticompetitive effects in the broader and economically relevant market comprising pay TV and streaming services. Any attempt to exercise pricing power in the pay TV market would be economically self-defeating, since it would likely prompt a significant portion of consumers to switch to (or rely exclusively on) streaming services.
The Antitrust Case for the Transaction
When properly situated within the market that was actually being targeted in the AT&T/Time Warner acquisition, the combined entity posed little credible threat of exercising pricing power. To the contrary, the combined entity was best understood as an entrant that sought to challenge the two pioneer entities—Amazon and Netflix—in the “over the top” content market.
Each of these incumbent platforms had (and has) a multi-billion-dollar content-production budget that rivals or exceeds the budgets of major Hollywood studios, and each enjoys a worldwide subscriber base numbering in the hundreds of millions. Moreover, AT&T was not the only entity to observe the displacement of pay TV by streaming services, as illustrated by the roughly concurrent entry of Disney’s Disney+ service, Apple’s Apple TV+ service, Comcast NBCUniversal’s Peacock service, and others. Both the existing and new competitors are formidable entities operating in a market with formidable capital requirements. In 2019, Netflix, Amazon, and Apple TV expended approximately $15 billion, $6 billion, and $6 billion, respectively, on content; by contrast, HBO Max, AT&T’s streaming service, expended approximately $3.5 billion.
In short, the combined entity faced stiff competition from existing and reasonably anticipated competitors, and needed to spend several billion dollars on content just to stay in the running. Far from being able to exercise pricing power in an imaginary market defined by DOJ litigators for strategic purposes, the AT&T/Time Warner entity faced the challenge of merely surviving in a real-world market populated by several exceptionally well-financed competitors. At best, the combined entity “threatened” to deliver incremental competitive benefits by adding a robust new platform to the video-streaming market; at worst, it would fail in this objective and cause no incremental competitive harm. As it turns out, the latter appears to be the case.
The Enduring Virtues of Antitrust Prudence
AT&T’s M&A fiasco has important lessons for broader antitrust debates about the evidentiary standards that should be applied by courts and agencies when assessing alleged antitrust violations, in general, and vertical restraints, in particular.
Among some scholars, regulators, and legislators, it has become increasingly received wisdom that prevailing evidentiary standards, as reflected in federal case law and agency guidelines, are excessively demanding, and have purportedly induced chronic underenforcement. It has been widely asserted that the courts’ and regulators’ focus on avoiding “false positives” and the associated costs of disrupting innocuous or beneficial business practices has resulted in an overly cautious enforcement posture, especially with respect to mergers and vertical restraints.
In fact, some commentators expressed these views in endorsing the antitrust case against the AT&T/Time Warner transaction. Some legislators have gone further, arguing for substantial amendments to the antitrust laws that would give enforcers and courts greater latitude to block or re-engineer combinations whose competitive risks could not be sufficiently demonstrated under current statutory or case law.
The swift downfall of the AT&T/Time Warner transaction casts great doubt on this critique and accompanying policy proposals. It was precisely the district court’s rigorous application of those “overly” demanding evidentiary standards that avoided what would have been a clear false-positive error. The failure of the “blockbuster” combination to achieve not only market dominance, but even reasonably successful entry, validates the wisdom of retaining those standards.
The fundamental mismatch between the widely supported antitrust case against the transaction and the widely overlooked business realities of the economically relevant consumer market illustrates the ease with which largely theoretical and decontextualized economic models of competitive harm can lead to enforcement actions that lack any reasonable basis in fact.