
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Tim Brennan, (Professor, Economics & Public Policy, University of Maryland; former FCC; former FTC).]

Thinking about how to think about the coronavirus situation, I keep coming back to three economic ideas that seem distinct but end up being related. First, a back-of-the-envelope calculation suggests that shutting down the economy for a while to reduce the spread of Covid-19 is worth the cost. This leads to my second point: political viability, if not simple fairness, dictates that the winners compensate the losers. The scale of both of these leads to my main point, which is to understand why we can’t just “get the prices right” and let the market take care of it. Insisting that the market works in this situation could undercut the very strong arguments for deferring to markets in the vast majority of circumstances.

Is taking action worth it?

The first question is whether shutting down the economy to reduce the spread of Covid-19 is a good bet. Being an economist, I turn to benefit-cost analysis (BCA). All I can offer here is a back-of-the-envelope calculation, which may be an insult to envelopes. (This paper has a more serious calculation with qualitatively similar findings.) With all caveats recognized, the willingness to pay of an average person in the US for social distancing and closure policies, WTP, is

        WTP = X% times Y% times VSL,

where X% is the fraction of the population that might be seriously affected, Y% is the reduction in the likelihood of death for this population from these policies, and VSL is the “value of statistical life” used in BCA calculations, in the ballpark of $9.5M.

For X%, take the percentage of the population over 65 (a demographic including me). This is around 16%. I’m not an epidemiologist, so for Y%, the reduced likelihood of death (either from reduced transmission or reduced hospital overload), I can only speculate. Say it’s 1%, which naively seems pretty small. Even with that, the average willingness to pay would be

        WTP = 16% times 1% times $9.5M = $15,200.

Multiplying that by a US population of roughly 330M gives a total national WTP of just over $5 trillion, or about 23% of GDP. Using conventional measures, this looks like a good trade in an aggregate benefit-cost sense, even leaving out willingness to pay to reduce the likelihood of feeling sick and the benefits to those younger than 65. Of course, among the caveats is not just whether to impose distancing and closures, but how long to have them (number of weeks), how severe they should be (gathering size limits, coverage of commercial establishments), and where they should be imposed (closing schools, colleges).
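For readers who want to check the arithmetic, here is a minimal sketch of the back-of-the-envelope calculation. The inputs are the illustrative figures used above, plus a rough $21.4 trillion figure for US GDP, which is an assumption introduced only to reproduce the “about 23%” comparison; none of these numbers are estimates.

```python
# Back-of-the-envelope WTP calculation from the text.
# All inputs are illustrative assumptions, not estimates.
VSL = 9_500_000            # value of statistical life, roughly $9.5M
x = 0.16                   # share of the population seriously at risk (over 65)
y = 0.01                   # assumed reduction in likelihood of death from the policies
population = 330_000_000   # rough US population
gdp = 21_400_000_000_000   # rough US GDP (assumption, ~$21.4 trillion)

wtp_per_person = x * y * VSL                 # $15,200
national_wtp = wtp_per_person * population   # about $5.0 trillion

print(f"WTP per person: ${wtp_per_person:,.0f}")
print(f"National WTP: ${national_wtp / 1e12:.2f} trillion "
      f"(~{national_wtp / gdp:.0%} of GDP)")
```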

Actual, not just hypothetical, compensation

The justification for using BCA is that the winners could compensate the losers. In the coronavirus setting, the equity considerations are profound. Especially when I remember that GDP is not a measure of consumer surplus, I ask myself how many months of disruption from unemployment (and not just lost wages) low-income waiters, cab drivers, hotel cleaners, and the like should bear to reduce my over-65 likelihood of dying.

Consequently, an important component of this policy, both to respect equity and quite possibly to obtain public acceptance, is that the losers be compensated. In that respect, the justification for packages such as the proposal working (as I write) through Congress is not stimulus—after all, it’s harder to spend money these days—as much as compensating those who’ve lost jobs as a result of this policy. Stimulus can come when the economy is ready to be jump-started.

Markets don’t always work, perhaps like now 

This brings me to a final point—why is this a public policy matter? My answer to almost any policy question is the glib “just get the prices right and the market will take care of it.” That doesn’t seem all that popular now. Part of that is the politics of fairness: Should the wealthy get the ventilators? Should hoarding of hand sanitizer be rewarded? But much of it may be a useful reminder that markets do not work seamlessly and instantaneously, and may not be the best allocation mechanism in critical times.

That markets are not always best should be a familiar theme to TOTM readers. The cost of using markets is the centerpiece of Ronald Coase’s 1937 Nature of the Firm, and of his 1960 Problem of Social Cost justification for allocation through the courts. Many of us, including me on TOTM, have invoked these arguments to argue against public interventions in the structure of firms, particularly antitrust actions regarding vertical integration. Another common theme is that the common law tends toward efficiency because of the market-like evolutionary processes in property, tort, and contract case law.

This perspective is a useful reminder that the benefits of markets should always be assessed by asking “compared to what?” In one familiar case, the benefits of markets are clear when compared to the snail’s pace, limited information, and political manipulability of administrative price setting. But when one is talking about national emergencies, with inelastic demands, distributional consequences, and too little time for the price mechanism to work its wonders, one can understand and justify the plethora of mandates currently imposed or contemplated.

The common law also appears not to be a good alternative. One can imagine the litigation nightmare if everyone who got the virus attempted to identify and sue some defendant for damages. A similar nightmare awaits if courts were tasked with determining how the risk of a pandemic would have been allocated were contracts ideal.

Much of this may be belaboring the obvious. My concern is that if those of us who appreciate the virtues of markets exaggerate their applicability, those skeptical of markets may use this episode to say that markets inherently fail and more of the economy should be publicly administered. Better to rely on facts rather than ideology, and to regard the current situation as the awful but justifiable exception that proves the general rule.

Big is bad, part 1: Kafka, Coase, and Brandeis walk into a bar … There’s a quip in a well-known textbook that Nobel laureate Ronald Coase said he’d grown weary of antitrust because when prices went up, the judges said it was monopoly; when the prices went down, they said it was predatory pricing; and when they stayed the same, they said it was tacit collusion. ICLE’s Geoffrey Manne and Gus Hurwitz worry that with the rise of the neo-Brandeisians, not much has changed since Coase’s time:

[C]ompetition, on its face, is virtually indistinguishable from anticompetitive behavior. Every firm strives to undercut its rivals, to put its rivals out of business, to increase its rivals’ costs, or to steal its rivals’ customers. The consumer welfare standard provides courts with a concrete mechanism for distinguishing between good and bad conduct, based not on the effect on rival firms but on the effect on consumers. Absent such a standard, any firm could potentially be deemed to violate the antitrust laws for any act it undertakes that could impede its competitors.

Big is bad, part 2. A working paper published by researchers from Denmark and the University of California at Berkeley suggests that companies such as Google, Apple, Facebook, and Nike are taking advantage of so-called “tax havens” to cause billions of dollars of income to go “missing.” There’s a lot of mumbo jumbo in this one, but it’s getting lots of attention.

We show theoretically and empirically that in the current international tax system, tax authorities of high-tax countries do not have incentives to combat profit shifting to tax havens. They instead focus their enforcement effort on relocating profits booked in other high-tax places—in effect stealing revenue from each other.

Big is bad, part 3: Can any country survive with debt-to-GDP of more than 100 percent? Apparently, the answer is “yes.” The U.K. went 80 years, from 1779 to 1858. Then, it went 47 years from 1916 to 1962. Tim Harford has a fascinating story about an effort to clear the country’s debt in that second run.

In 1928, an anonymous donor resolved to clear the UK’s national debt and gave £500,000 with that end in mind. It was a tidy sum — almost £30m at today’s prices — but not nearly enough to pay off the debt. So it sat in trust, accumulating interest, for nearly a century.

How do you make a small fortune? Begin with a big one. A lesson from Johnny Depp.

Will we ever stop debating the Trolley Problem? Apparently the answer is “no.” Also, TIL there’s a field of research that relies on “notions.”

For so long, moral psychology has relied on the notion that you can extrapolate from people’s decisions in hypothetical thought experiments to infer something meaningful about how they would behave morally in the real world. These new findings challenge that core assumption of the field.

 

The week that was on Truth on the Market

LabMD.

[T]argets of complaints settle for myriad reasons, and no outside authority need review the sufficiency of a complaint as part of a settlement. And the consent orders themselves are largely devoid of legal and even factual specificity. As a result, the FTC’s authority to initiate an enforcement action  is effectively based on an ill-defined series of hunches — hardly a sufficient basis for defining a clear legal standard.

Google Android.

Thus, had Google opted instead to create a separate walled garden of its own on the Apple model, everything it had done would have otherwise been fine. This means that Google is now subject to an antitrust investigation for attempting to develop a more open platform.

AT&T-Time Warner. First this:

The government’s contention that, after the merger, AT&T and rival Comcast could coordinate to restrict access to popular Time Warner and NBC content to harm emerging competitors was always a weak argument.

Then this:

Doing no favors to its case, the government turned to a seemingly contradictory argument that AT&T and Comcast would coordinate to demand virtual providers take too much content.

 

 

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by William J. Kolasky (Partner, Hughes Hubbard & Reed; former Deputy Assistant Attorney General, DOJ Antitrust Division), and Philip A. Giordano (Partner, Hughes Hubbard & Reed LLP).

[Kolasky & Giordano: The authors thank Katherine Taylor, an associate at Hughes Hubbard & Reed, for her help in researching this article.]

On January 10, the Department of Justice (DOJ) withdrew the 1984 DOJ Non-Horizontal Merger Guidelines, and, together with the Federal Trade Commission (FTC), released new draft 2020 Vertical Merger Guidelines (“DOJ/FTC draft guidelines”) on which it seeks public comment by February 26.[1] In announcing these new draft guidelines, Makan Delrahim, the Assistant Attorney General for the Antitrust Division, acknowledged that while many vertical mergers are competitively beneficial or neutral, “some vertical transactions can raise serious concern.” He went on to explain that, “The revised draft guidelines are based on new economic understandings and the agencies’ experience over the past several decades and better reflect the agencies’ actual practice in evaluating proposed vertical mergers.” He added that he hoped these new guidelines, once finalized, “will provide more clarity and transparency on how we review vertical transactions.”[2]

While we agree with the DOJ and FTC that the 1984 Non-Horizontal Merger Guidelines are now badly outdated and that a new set of vertical merger guidelines is needed, we question whether the draft guidelines released on January 10 will provide the desired “clarity and transparency.” In our view, the proposed guidelines give insufficient recognition to the wide range of efficiencies that flow from most, if not all, vertical mergers. In addition, the guidelines fail to provide sufficiently clear standards for challenging vertical mergers, thereby leaving too much discretion in the hands of the agencies as to when they will challenge a vertical merger and too much uncertainty for businesses contemplating a vertical merger.

What is most troubling is that this did not need to be so. In 2008, the European Commission, as part of its merger process reform initiative, issued an excellent set of non-horizontal merger guidelines that adopt basically the same analytical framework as the new draft guidelines for evaluating vertical mergers.[3] The EU guidelines, however, lay out in much more detail the factors the Commission will consider and the standards it will apply in evaluating vertical transactions. That being so, it is difficult to understand why the DOJ and FTC did not propose a set of vertical merger guidelines that more closely mirror those of the European Commission, rather than try to reinvent the wheel with a much less complete set of guidelines.

Rather than making the same mistake ourselves, we will try to summarize the EU vertical merger guidelines and to explain why we believe they are markedly better than the draft guidelines the DOJ and FTC have proposed. We would urge the DOJ and FTC to consider revising their draft guidelines to make them more consistent with the EU vertical merger guidelines. Doing so would, among other things, promote greater convergence between the two jurisdictions, which is very much in the interest of both businesses and consumers in an increasingly global economy.

The principal differences between the draft joint guidelines and the EU vertical merger guidelines

1. Acknowledgement of the key differences between horizontal and vertical mergers

The EU guidelines begin with an acknowledgement that, “Non-horizontal mergers are generally less likely to significantly impede effective competition than horizontal mergers.” As they explain, this is because of two key differences between vertical and horizontal mergers.

  • First, unlike horizontal mergers, vertical mergers “do not entail the loss of direct competition between the merging firms in the same relevant market.”[4] As a result, “the main source of anti-competitive effect in horizontal mergers is absent from vertical and conglomerate mergers.”[5]
  • Second, vertical mergers are more likely than horizontal mergers to provide substantial, merger-specific efficiencies, without any direct reduction in competition. The EU guidelines explain that these efficiencies stem from two main sources, both of which are intrinsic to vertical mergers. The first is that, “Vertical integration may thus provide an increased incentive to seek to decrease prices and increase output because the integrated firm can capture a larger fraction of the benefits.”[6] The second is that, “Integration may also decrease transaction costs and allow for a better co-ordination in terms of product design, the organization of the production process, and the way in which the products are sold.”[7]

The DOJ/FTC draft guidelines do not acknowledge these fundamental differences between horizontal and vertical mergers. The 1984 DOJ non-horizontal guidelines, by contrast, contained an acknowledgement of these differences very similar to that found in the EU guidelines. First, the 1984 guidelines acknowledge that, “By definition, non-horizontal mergers involve firms that do not operate in the same market. It necessarily follows that such mergers produce no immediate change in the level of concentration in any relevant market as defined in Section 2 of these Guidelines.”[8] Second, the 1984 guidelines acknowledge that, “An extensive pattern of vertical integration may constitute evidence that substantial economies are afforded by vertical integration. Therefore, the Department will give relatively more weight to expected efficiencies in determining whether to challenge a vertical merger than in determining whether to challenge a horizontal merger.”[9] Neither of these acknowledgements can be found in the new draft guidelines.

These key differences have also been acknowledged by the courts of appeals for both the Second and D.C. circuits in the agencies’ two most recent litigated vertical merger challenges: Fruehauf Corp. v. FTC in 1979[10] and United States v. AT&T in 2019.[11] In both cases, the courts held, as the D.C. Circuit explained in AT&T, that because of these differences, the government “cannot use a short cut to establish a presumption of anticompetitive effect through statistics about the change in market concentration” – as it can in a horizontal merger case – “because vertical mergers produce no immediate change in the relevant market share.”[12] Instead, in challenging a vertical merger, “the government must make a ‘fact-specific’ showing that the proposed merger is ‘likely to be anticompetitive’” before the burden shifts to the defendants “to present evidence that the prima facie case ‘inaccurately predicts the relevant transaction’s probable effect on future competition,’ or to ‘sufficiently discredit’ the evidence underlying the prima facie case.”[13]

While the DOJ/FTC draft guidelines acknowledge that a vertical merger may generate efficiencies, they propose that the parties to the merger bear the burden of identifying and substantiating those efficiencies under the same standards applied by the 2010 Horizontal Merger Guidelines. Meeting those standards in the case of a horizontal merger can be very difficult. For that reason, it is important that the DOJ/FTC draft guidelines be revised to make it clear that before the parties to a vertical merger are required to establish efficiencies meeting the horizontal merger guidelines’ evidentiary standard, the agencies must first show that the merger is likely to substantially lessen competition, based on the type of fact-specific evidence the courts required in both Fruehauf and AT&T.

2. Safe harbors

Although they do not refer to it as a “safe harbor,” the DOJ/FTC draft guidelines state that, 

The Agencies are unlikely to challenge a vertical merger where the parties to the merger have a share in the relevant market of less than 20 percent, and the related product is used in less than 20 percent of the relevant market.[14] 

If we understand this statement correctly, it means that the agencies may challenge a vertical merger in any case where one party has a 20% share in a relevant market and the other party has a 20% or higher share of any “related product,” i.e., any “product or service” that is supplied by the other party to firms in that relevant market. 

By contrast, the EU guidelines state that,

The Commission is unlikely to find concern in non-horizontal mergers . . . where the market share post-merger of the new entity in each of the markets concerned is below 30% . . . and the post-merger HHI is below 2,000.[15] 

Both the EU guidelines and the DOJ/FTC draft guidelines are careful to explain that these statements do not create any “legal presumption” that vertical mergers below these thresholds will not be challenged or that vertical mergers above those thresholds are likely to be challenged.
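Purely to make the difference between the two screens concrete, here is a minimal sketch of the quoted thresholds written as decision rules. This is illustrative only: as just noted, neither set of guidelines creates a legal presumption, and the 25% shares and 1,800 HHI in the example are invented numbers, not data from any actual transaction.

```python
# Illustrative sketch of the two screens quoted above (not legal presumptions).
def within_doj_ftc_screen(relevant_market_share, related_product_share):
    """Draft DOJ/FTC screen: a challenge is unlikely only if the parties' share of the
    relevant market and the related product's use in that market are both below 20%."""
    return relevant_market_share < 0.20 and related_product_share < 0.20

def within_eu_screen(post_merger_shares, post_merger_hhi):
    """EU screen: concern is unlikely if the merged entity's share in each market
    concerned is below 30% and the post-merger HHI is below 2,000."""
    return all(share < 0.30 for share in post_merger_shares) and post_merger_hhi < 2000

# Hypothetical transaction: 25% shares in both affected markets, post-merger HHI of 1,800.
print(within_doj_ftc_screen(0.25, 0.25))      # False -> outside the draft US screen
print(within_eu_screen([0.25, 0.25], 1800))   # True  -> inside the EU screen
```

On those invented numbers the transaction falls outside the draft US screen but inside the EU screen, which is precisely the gap discussed below.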

The EU guidelines are more consistent than the DOJ/FTC draft guidelines both with U.S. case law and with the actual practice of both the DOJ and FTC. It is important to remember that the raising rivals’ costs theory of vertical foreclosure was first developed nearly four decades ago by two young economists, David Scheffman and Steve Salop, as a theory of exclusionary conduct that could be used against dominant firms in place of the more simplistic theories of vertical foreclosure that the courts had previously relied on and which by 1979 had been totally discredited by the Chicago School for the reasons stated by the Second Circuit in Fruehauf.[16] 

As the Second Circuit explained in Fruehauf, it was “unwilling to assume that any vertical foreclosure lessens competition” because 

[a]bsent very high market concentration or some other factor threatening a tangible anticompetitive effect, a vertical merger may simply realign sales patterns, for insofar as the merger forecloses some of the market from the merging firms’ competitors, it may simply free up that much of the market, in which the merging firm’s competitors and the merged firm formerly transacted, for new transactions between the merged firm’s competitors and the merging firm’s competitors.[17] 

Or, as Robert Bork put it more colorfully in The Antitrust Paradox, in criticizing the FTC’s decision in A.G. Spalding & Bros., Inc.[18]:

We are left to imagine eager suppliers and hungry customers, unable to find each other, forever foreclosed and left languishing. It would appear the commission could have cured this aspect of the situation by throwing an industry social mixer.[19]

Since David Scheffman and Steve Salop first began developing their raising rivals’ cost theory of exclusionary conduct in the early 1980s, gallons of ink have been spilled in legal and economic journals discussing and evaluating that theory.[20] The general consensus of those articles is that while raising rivals’ cost is a plausible theory of exclusionary conduct, proving that a defendant has engaged in such conduct is very difficult in practice. It is even more difficult to predict whether, in evaluating a proposed merger, the merged firm is likely to engage in such conduct at some time in the future. 

Consistent with the Second Circuit’s decision in Fruehauf and with this academic literature, the courts, in deciding cases challenging exclusive dealing arrangements under either a vertical foreclosure theory or a raising rivals’ cost theory, have generally been willing to find that the alleged exclusive dealing arrangements violated section 1 of the Sherman Act only in cases where the defendant had a dominant or near-dominant share of a highly concentrated market — usually meaning a share of 40 percent or more.[21] Likewise, all but one of the vertical mergers challenged by either the FTC or DOJ since 1996 have involved parties that had dominant or near-dominant shares of a highly concentrated market.[22] A majority of these involved mergers that were not purely vertical, but in which there was also a direct horizontal overlap between the two parties.

One of the few exceptions is AT&T/Time Warner, a challenge the DOJ lost in both the district court and the D.C. Circuit.[23] The outcome of that case illustrates the difficulty the agencies face in trying to prove a raising rivals’ cost theory of vertical foreclosure where the merging firms do not have a dominant or near-dominant share in either of the affected markets.

Given these court decisions and the agencies’ historical practice of challenging vertical mergers only between companies with dominant or near-dominant shares in highly concentrated markets, we would urge the DOJ and FTC to consider raising the market share threshold below which they are unlikely to challenge a vertical merger to at least 30 percent, in keeping with the EU guidelines, or to 40 percent in order to make the vertical merger guidelines more consistent with the U.S. case law on exclusive dealing.[24] We would also urge the agencies to consider adding a market concentration HHI threshold of 2,000 or higher, again in keeping with the EU guidelines.

3. Standards for applying a raising rivals’ cost theory of vertical foreclosure

Another way in which the EU guidelines are markedly better than the DOJ/FTC draft guidelines is in explaining the factors taken into consideration in evaluating whether a vertical merger will give the parties both the ability and incentive to raise their rivals’ costs in a way that will enable the merged entity to increase prices to consumers. Most importantly, the EU guidelines distinguish clearly between input foreclosure and customer foreclosure, and devote an entire section to each. For brevity, we will focus only on input foreclosure to show why we believe the more detailed approach the EU guidelines take is preferable to the more cursory discussion in the DOJ/FTC draft guidelines.

In discussing input foreclosure, the EU guidelines correctly distinguish between whether a vertical merger will give the merged firm the ability to raise rivals’ costs in a way that may substantially lessen competition and, if so, whether it will give the merged firm an incentive to do so. These are two quite distinct questions, which the DOJ/FTC draft guidelines unfortunately seem to lump together.

The ability to raise rivals’ costs

The EU guidelines identify four important conditions that must exist for a vertical merger to give the merged firm the ability to raise its rivals’ costs. First, the alleged foreclosure must concern an important input for the downstream product, such as one that represents a significant cost factor relative to the price of the downstream product. Second, the merged entity must have a significant degree of market power in the upstream market. Third, the merged entity must be able, by reducing access to its own upstream products or services, to negatively affect the overall availability of inputs for rivals in the downstream market in terms of price or quality. Fourth, the agency must examine the degree to which the merger may free up capacity of other potential input suppliers. If that capacity becomes available to downstream competitors, the merger may simply realign purchase patterns among competing firms, as the Second Circuit recognized in Fruehauf.

The incentive to foreclose access to inputs

The EU guidelines recognize that the incentive to foreclose depends on the degree to which foreclosure would be profitable. In making this determination, the vertically integrated firm will take into account how its supplies of inputs to competitors downstream will affect not only the profits of its upstream division, but also those of its downstream division. Essentially, the merged entity faces a trade-off between the profit lost in the upstream market due to a reduction of input sales to (actual or potential) rivals and the profit gained from expanding sales downstream or, as the case may be, raising prices to consumers. This trade-off is likely to depend on the margins the merged entity obtains on upstream and downstream sales. Other things constant, the lower the margins upstream, the lower the loss from restricting input sales. Similarly, the higher the downstream margins, the higher the profit gain from increasing market share downstream at the expense of foreclosed rivals.

The EU guidelines recognize that the incentive for the integrated firm to raise rivals’ costs further depends on the extent to which downstream demand is likely to be diverted away from foreclosed rivals and the share of that diverted demand the downstream division of the integrated firm can capture. This share will normally be higher the less capacity constrained the merged entity will be relative to non-foreclosed downstream rivals and the more the products of the merged entity and foreclosed competitors are close substitutes. The effect on downstream demand will also be higher if the affected input represents a significant proportion of downstream rivals’ costs or if it otherwise represents a critical component of the downstream product.
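To see how these margin and diversion effects interact, here is a minimal, purely hypothetical sketch of the trade-off just described; the function and every number in it are invented for illustration and are not drawn from the guidelines.

```python
# Hypothetical sketch of the input-foreclosure trade-off described above.
# All figures are invented for illustration only.
def foreclosure_profit_change(foregone_input_units, upstream_margin,
                              diverted_downstream_units, capture_share,
                              downstream_margin):
    """Profit change from refusing to supply downstream rivals:
    downstream margin gained on diverted demand the merged firm captures,
    minus upstream margin lost on the foregone input sales."""
    upstream_loss = foregone_input_units * upstream_margin
    downstream_gain = diverted_downstream_units * capture_share * downstream_margin
    return downstream_gain - upstream_loss

# Low upstream margins, strong capture of diverted demand: foreclosure looks profitable.
print(foreclosure_profit_change(100, 2.0, 80, 0.5, 10.0))   # +200.0
# High upstream margins, weak capture of diverted demand: foreclosure looks unprofitable.
print(foreclosure_profit_change(100, 8.0, 80, 0.2, 10.0))   # -640.0
```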

The EU guidelines recognize that the incentive to foreclose actual or potential rivals may also depend on the extent to which the downstream division of the integrated firm can be expected to benefit from higher price levels downstream as a result of a strategy to raise rivals’ costs. The greater the market shares of the merged entity downstream, the greater the base of sales on which to enjoy increased margins. However, an upstream monopolist that is already able to fully extract all available profits in vertically related markets may not have any incentive to foreclose rivals following a vertical merger. Therefore, the ability to extract available profits from consumers does not follow immediately from a very high market share; to come to that conclusion requires a more thorough analysis of the actual and future constraints under which the monopolist operates.

Finally, the EU guidelines require the Commission to examine not only the incentives to adopt such conduct, but also the factors liable to reduce, or even eliminate, those incentives, including the possibility that the conduct is unlawful. In this regard, the Commission will consider, on the basis of a summary analysis: (i) the likelihood that this conduct would clearly be unlawful under Community law, (ii) the likelihood that this illegal conduct could be detected, and (iii) the penalties that could be imposed.

Overall likely impact on effective competition

Finally, the EU guidelines recognize that a vertical merger will raise foreclosure concerns only when it would lead to increased prices in the downstream market. This normally requires that the foreclosed suppliers play a sufficiently important role in the competitive process in the downstream market. In general, the higher the proportion of rivals that would be foreclosed in the downstream market, the more likely the merger can be expected to result in a significant price increase in the downstream market and, therefore, to significantly impede effective competition. 

In making these determinations, the Commission must under the EU guidelines also assess the extent to which a vertical merger may raise barriers to entry, a criterion that is also found in the 1984 DOJ non-horizontal merger guidelines but is strangely missing from the DOJ/FTC draft guidelines. As the 1984 guidelines recognize, a vertical merger can raise entry barriers if the anticipated input foreclosure would create a need to enter at both the downstream and the upstream level in order to compete effectively in either market.

* * * * *

Rather than issue a set of incomplete vertical merger guidelines, we would urge the DOJ and FTC to follow the lead of the European Commission and develop a set of guidelines setting out in more detail the factors the agencies will consider and the standards they will use in evaluating vertical mergers. The EU non-horizontal merger guidelines provide an excellent model for doing so.


[1] U.S. Department of Justice & Federal Trade Commission, Draft Vertical Merger Guidelines, available at https://www.justice.gov/opa/press-release/file/1233741/download (hereinafter cited as “DOJ/FTC draft guidelines”).

[2] U.S. Department of Justice, Office of Public Affairs, “DOJ and FTC Announce Draft Vertical Merger Guidelines for Public Comment,” Jan. 10, 2020, available at https://www.justice.gov/opa/pr/doj-and-ftc-announce-draft-vertical-merger-guidelines-public-comment.

[3] See European Commission, Guidelines on the assessment of non-horizontal mergers under the Council Regulation on the control of concentrations between undertakings (2008) (hereinafter cited as “EU guidelines”), available at https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52008XC1018(03)&from=EN.

[4] Id. at § 12.

[5] Id.

[6] Id. at § 13.

[7] Id. at § 14. The insight that transactions costs are an explanation for both horizontal and vertical integration in firms first occurred to Ronald Coase in 1932, while he was a student at the London School of Economics. See Ronald H. Coase, Essays on Economics and Economists 7 (1994). Coase took five years to flesh out his initial insight, which he then published in 1937 in a now-famous article, The Nature of the Firm. See Ronald H. Coase, The Nature of the Firm, Economica 4 (1937). The implications of transactions costs for antitrust analysis were explained in more detail four decades later by Oliver Williamson in a book he published in 1975. See Oliver E. Williamson, Markets and Hierarchies: Analysis and Antitrust Implications (1975) (explaining how vertical integration, either by ownership or contract, can, for example, protect a firm from free riding and other opportunistic behavior by its suppliers and customers). Both Coase and Williamson later received Nobel Prizes in Economics for their work recognizing the importance of transactions costs, not only in explaining the structure of firms, but in other areas of the economy as well. See, e.g., Ronald H. Coase, The Problem of Social Cost, J. Law & Econ. 3 (1960) (using transactions costs to explain the need for governmental action to force entities to internalize the costs their conduct imposes on others).

[8] U.S. Department of Justice, Antitrust Division, 1984 Merger Guidelines, § 4, available at https://www.justice.gov/archives/atr/1984-merger-guidelines.

[9] Id. at § 4.24.

[10] Fruehauf Corp. v. FTC, 603 F.2d 345 (2d Cir. 1979).

[11] United States v. AT&T, Inc., 916 F.3d 1029 (D.C. Cir. 2019).

[12] Id. at 1032; accord, Fruehauf, 603 F.2d, at 351 (“A vertical merger, unlike a horizontal one, does not eliminate a competing buyer or seller from the market . . . . It does not, therefore, automatically have an anticompetitive effect.”) (emphasis in original) (internal citations omitted).

[13] AT&T, 916 F.3d at 1032 (internal citations omitted).

[14] DOJ/FTC draft guidelines, at 3.

[15] EU guidelines, at § 25.

[16] See Steven C. Salop & David T. Scheffman, Raising Rivals’ Costs, 73 AM. ECON. REV. 267 (1983).

[17] Fruehauf, supra note 10, 603 F.2d at 353 n.9 (emphasis added).

[18] 56 F.T.C. 1125 (1960).

[19] Robert H. Bork, The Antitrust Paradox: A Policy at War with Itself 232 (1978).

[20] See, e.g., Alan J. Meese, Exclusive Dealing, the Theory of the Firm, and Raising Rivals’ Costs: Toward a New Synthesis, 50 Antitrust Bull. 371 (2005); David T. Scheffman and Richard S. Higgins, Twenty Years of Raising Rivals’ Costs: History, Assessment, and Future, 12 George Mason L. Rev. 371 (2003); David Reiffen & Michael Vita, Comment: Is There New Thinking on Vertical Mergers?, 63 Antitrust L.J. 917 (1995); Thomas G. Krattenmaker & Steven Salop, Anticompetitive Exclusion: Raising Rivals’ Costs to Achieve Power Over Price, 96 Yale L. J. 209, 219-25 (1986).

[21] See, e.g., United States v. Microsoft, 87 F. Supp. 2d 30, 50-53 (D.D.C. 1999) (summarizing law on exclusive dealing under section 1 of the Sherman Act); id. at 52 (concluding that modern case law requires finding that exclusive dealing contracts foreclose rivals from 40% of the marketplace); Omega Envtl, Inc. v. Gilbarco, Inc., 127 F.3d 1157, 1162-63 (9th Cir. 1997) (finding 38% foreclosure insufficient to make out prima facie case that exclusive dealing agreement violated the Sherman and Clayton Acts, at least where there appeared to be alternate channels of distribution).

[22] See, e.g., United States, et al. v. Comcast, 1:11-cv-00106 (D.D.C. Jan. 18, 2011) (Comcast had over 50% of MVPD market), available at https://www.justice.gov/atr/case-document/competitive-impact-statement-72; United States v. Premdor, Civil No.: 1-01696 (GK) (D.D.C. Aug. 3, 2002) (Masonite manufactured more than 50% of all doorskins sold in the U.S.; Premdor sold 40% of all molded doors made in the U.S.), available at https://www.justice.gov/atr/case-document/final-judgment-151.

[23] See United States v. AT&T, Inc., 916 F.3d 1029 (D.C. Cir. 2019).

[24] See Brown Shoe Co. v. United States, 370 U.S. 294 (1962) (relying on earlier Supreme Court decisions involving exclusive dealing and tying claims under section 3 of the Clayton Act for guidance as to what share of a market must be foreclosed before a vertical merger can be found unlawful under section 7).

Zoom, one of Silicon Valley’s lesser-known unicorns, has just gone public. At the time of writing, its shares are trading at about $65.70, placing the company’s value at $16.84 billion. There are good reasons for this success. According to its Form S-1, Zoom’s revenue rose from about $60 million in 2017 to a projected $330 million in 2019, and the company has already surpassed break-even. This growth was notably fueled by a thriving community of users who collectively spend approximately 5 billion minutes per month in Zoom meetings.

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects. For instance, the value of Skype to one user depends – at least to some extent – on the number of other people that might be willing to use the network. In these settings, it is often said that positive feedback loops may cause the market to tip in favor of a single firm that is then left with an unassailable market position. Although Zoom still faces significant competitive challenges, it has nonetheless established a strong position in a market previously dominated by powerful incumbents who could theoretically count on network effects to stymie its growth.

To make matters more difficult, Zoom chose to compete head-on with these incumbents. It did not create a new market or a highly differentiated product. Zoom’s Form S-1 is quite revealing. The company cites the quality of its product as its most important competitive strength. Similarly, when listing the main benefits of its platform, Zoom emphasizes that its software is “easy to use”, “easy to deploy and manage”, “reliable”, etc. In its own words, Zoom has thus gained a foothold by offering an existing service that works better than that of its competitors.

And yet, this is precisely the type of story that a literal reading of the network effects literature would suggest is impossible, or at least highly unlikely. For instance, the foundational papers on network effects often cite the example of the DVORAK keyboard (David, 1985; and Farrell & Saloner, 1985). These early scholars argued that, despite it being the superior standard, the DVORAK layout failed to gain traction because of the network effects protecting the QWERTY standard. In other words, consumers failed to adopt the superior DVORAK layout because they were unable to coordinate on their preferred option. It must be noted, however, that the conventional telling of this story was forcefully criticized by Liebowitz & Margolis in their classic 1995 article, The Fable of the Keys.

Despite Liebowitz & Margolis’ critique, the dominance of the underlying network effects story persists in many respects. And in that respect, the emergence of Zoom is something of a cautionary tale. As influential as it may be, the network effects literature has tended to overlook a number of factors that may mitigate, or even eliminate, the likelihood of problematic outcomes. Zoom is yet another illustration that policymakers should be careful when they make normative inferences from positive economics.

A Coasian perspective

It is now widely accepted that multi-homing and the absence of switching costs can significantly curtail the potentially undesirable outcomes that are sometimes associated with network effects. But other possibilities are often overlooked. For instance, almost none of the foundational network effects papers pay any notice to the application of the Coase theorem (though it has been well-recognized in the two-sided markets literature).

Take a purported market failure that is commonly associated with network effects: an installed base of users prevents the market from switching towards a new standard, even if it is superior (this is broadly referred to as “excess inertia,” while the opposite scenario is referred to as “excess momentum”). DVORAK’s failure is often cited as an example.

Astute readers will quickly recognize that this externality problem is not fundamentally different from those discussed in Ronald Coase’s masterpiece, “The Problem of Social Cost,” or Steven Cheung’s “The Fable of the Bees” (to which Liebowitz & Margolis paid homage in their article’s title). In the case at hand, there are at least two sets of externalities at play. First, early adopters of the new technology impose a negative externality on the old network’s installed base (by reducing its network effects), and a positive externality on other early adopters (by growing the new network). Conversely, installed base users impose a negative externality on early adopters and a positive externality on other remaining users.

Describing these situations (with a haughty confidence reminiscent of Paul Samuelson and Arthur Cecil Pigou), Joseph Farrell and Garth Saloner conclude that:

In general, he or she [i.e. the user exerting these externalities] does not appropriately take this into account.

Similarly, Michael Katz and Carl Shapiro assert that:

In terms of the Coase theorem, it is very difficult to design a contract where, say, the (potential) future users of HDTV agree to subsidize today’s buyers of television sets to stop buying NTSC sets and start buying HDTV sets, thereby stimulating the supply of HDTV programming.

And yet it is far from clear that consumers and firms can never come up with solutions that mitigate these problems. As Daniel Spulber has suggested, referral programs offer a case in point. These programs usually allow early adopters to receive rewards in exchange for bringing new users to a network. One salient feature of these programs is that they do not simply charge a lower price to early adopters; instead, in order to obtain a referral fee, there must be some agreement between the early adopter and the user who is referred to the platform. This leaves ample room for the reallocation of rewards. Users might, for instance, choose to split the referral fee. Alternatively, the early adopter might invest time to familiarize the switching user with the new platform, hoping to earn money when the user jumps ship. Both of these arrangements may reduce switching costs and mitigate externalities.

Daniel Spulber also argues that users may coordinate spontaneously. For instance, social groups often decide upon the medium they will use to communicate. Families might choose to stay on the same mobile phone network. And larger groups (such as an incoming class of students) may agree upon a social network to share necessary information, etc. In these contexts, there is at least some room to pressure peers into adopting a new platform.

Finally, firms and other forms of governance may also play a significant role. For instance, employees are routinely required to use a series of networked goods. Common examples include office suites, email clients, social media platforms (such as Slack), or video communications applications (Zoom, Skype, Google Hangouts, etc.). In doing so, firms presumably act as islands of top-down decision-making and impose those products that maximize the collective preferences of employers and employees. Similarly, a single firm choosing to join a network (notably by adopting a standard) may generate enough momentum for a network to gain critical mass. Apple’s decisions to adopt USB-C connectors on its laptops and to ditch headphone jacks on its iPhones both spring to mind. Likewise, it has been suggested that distributed ledger technology and initial coin offerings may facilitate the creation of new networks. The intuition is that so-called “utility tokens” may incentivize early adopters to join a platform, despite initially weak network effects, because they expect these tokens to increase in value as the network expands.

A combination of these arrangements might explain how Zoom managed to grow so rapidly, despite the presence of powerful incumbents. In its own words:

Our rapid adoption is driven by a virtuous cycle of positive user experiences. Individuals typically begin using our platform when a colleague or associate invites them to a Zoom meeting. When attendees experience our platform and realize the benefits, they often become paying customers to unlock additional functionality.

All of this is not to say that network effects will always be internalized through private arrangements, but rather that it is equally wrong to assume that transaction costs systematically prevent efficient coordination among users.

Misguided regulatory responses

Over the past couple of months, several antitrust authorities around the globe have released reports concerning competition in digital markets (UK, EU, Australia), or held hearings on this topic (US). A recurring theme throughout their published reports is that network effects almost inevitably weaken competition in digital markets.

For instance, the report commissioned by the European Commission mentions that:

Because of very strong network externalities (especially in multi-sided platforms), incumbency advantage is important and strict scrutiny is appropriate. We believe that any practice aimed at protecting the investment of a dominant platform should be minimal and well targeted.

The Australian Competition & Consumer Commission concludes that:

There are considerable barriers to entry and expansion for search platforms and social media platforms that reinforce and entrench Google and Facebook’s market power. These include barriers arising from same-side and cross-side network effects, branding, consumer inertia and switching costs, economies of scale and sunk costs.

Finally, a panel of experts in the United Kingdom found that:

Today, network effects and returns to scale of data appear to be even more entrenched and the market seems to have stabilised quickly compared to the much larger degree of churn in the early days of the World Wide Web.

To address these issues, these reports suggest far-reaching policy changes. These include shifting the burden of proof in competition cases from authorities to defendants, establishing specialized units to oversee digital markets, and imposing special obligations upon digital platforms.

The story of Zoom’s emergence and the important insights that can be derived from the Coase theorem both suggest that these fears may be somewhat overblown.

Rivals do indeed find ways to overthrow entrenched incumbents with some regularity, even when these incumbents are shielded by network effects. Of course, critics may retort that this is not enough, that competition may sometimes arrive too late (excess inertia, i.e., “a socially excessive reluctance to switch to a superior new standard”) or too fast (excess momentum, i.e., “the inefficient adoption of a new technology”), and that the problem is not just one of network effects, but also one of economies of scale, information asymmetry, etc. But this comes dangerously close to the Nirvana fallacy. To begin, it assumes that regulators are able to reliably navigate markets toward these optimal outcomes — which is questionable, at best. Moreover, the regulatory cost of imposing perfect competition in every digital market (even if it were possible) may well outweigh the benefits that this achieves. Mandating far-reaching policy changes in order to address sporadic and heterogeneous problems is thus unlikely to be the best solution.

Instead, the optimal policy notably depends on whether, in a given case, users and firms can coordinate their decisions without intervention in order to avoid problematic outcomes. A case-by-case approach thus seems by far the best solution.

And competition authorities need look no further than their own decisional practice. The European Commission’s decision in the Facebook/Whatsapp merger offers a good example (this was before Margrethe Vestager’s appointment at DG Competition). In its decision, the Commission concluded that the fast-moving nature of the social network industry, widespread multi-homing, and the fact that neither Facebook nor Whatsapp controlled any essential infrastructure, prevented network effects from acting as a barrier to entry. Regardless of its ultimate position, this seems like a vastly superior approach to competition issues in digital markets. The Commission adopted a similar reasoning in the Microsoft/Skype merger. Unfortunately, the Commission seems to have departed from this measured attitude in more recent decisions. In the Google Search case, for example, the Commission assumes that the mere existence of network effects necessarily increases barriers to entry:

The existence of positive feedback effects on both sides of the two-sided platform formed by general search services and online search advertising creates an additional barrier to entry.

A better way forward

Although the positive economics of network effects are generally correct and most definitely useful, some of the normative implications that have been derived from them are deeply flawed. Too often, policymakers and commentators conclude that these potential externalities inevitably lead to stagnant markets where competition is unable to flourish. But this does not have to be the case. The emergence of Zoom shows that superior products may prosper despite the presence of strong incumbents and network effects.

Basing antitrust policies on sweeping presumptions about digital competition – such as the idea that network effects are rampant or the suggestion that online platforms necessarily imply “extreme returns to scale” – is thus likely to do more harm than good. Instead, antitrust authorities should take a leaf out of Ronald Coase’s book, and avoid blackboard economics in favor of a more granular approach.

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Geoffrey A. Manne (President & Founder, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics); and Kristian Stout (Associate Director, ICLE).]

As many in the symposium have noted — and as was repeatedly noted during the FTC’s Hearings on Competition and Consumer Protection in the 21st Century — there is widespread dissatisfaction with the 1984 Non-Horizontal Merger Guidelines.

Although it is doubtless correct that the 1984 guidelines don’t reflect the latest economic knowledge, it is by no means clear that this has actually been a problem — or that a new set of guidelines wouldn’t create even greater problems. Indeed, as others have noted in this symposium, there is a great deal of ambiguity in the proposed guidelines that could lead either to uncertainty as to how the agencies will exercise their discretion or, more troublingly, could lead courts to take seriously speculative theories of harm.

We can do little better in expressing our reservations about whether new guidelines are needed than did the current Chairman of the FTC, Joe Simons, writing on this very blog in a symposium on what became the 2010 Horizontal Merger Guidelines. In a post entitled, Revisions to the Merger Guidelines: Above All, Do No Harm, Simons writes:

My sense is that there is no need to revise the DOJ/FTC Horizontal Merger Guidelines, with one exception…. The current guidelines lay out the general framework quite well and any change in language relative to that framework are likely to create more confusion rather than less. Based on my own experience, the business community has had a good sense of how the agencies conduct merger analysis…. If, however, the current administration intends to materially change the way merger analysis is conducted at the agencies, then perhaps greater revision makes more sense. But even then, perhaps the best approach is to try out some of the contemplated changes (i.e. in actual investigations) and publicize them in speeches and the like before memorializing them in a document that is likely to have some substantial permanence to it.

Wise words. Unless, of course, “the current [FTC] intends to materially change the way [vertical] merger analysis is conducted.” But the draft guidelines don’t really appear to portend a substantial change, and in several ways they pretty accurately reflect agency practice.

What we want to draw attention to, however, is an implicit underpinning of the draft guidelines that we believe the agencies should clearly disavow (or at least more clearly explain the complexity surrounding): the extent and implications of the presumed functional equivalence of vertical integration by contract and by merger — the contract/merger equivalency assumption.

Vertical mergers and their discontents

The contract/merger equivalency assumption has been gaining traction with antitrust scholars, but it is perhaps most clearly represented in some of Steve Salop’s work. Salop generally believes that vertical merger enforcement should be heightened. Among his criticisms of current enforcement is his contention that efficiencies that can be realized by merger can often also be achieved by contract. As he discussed during his keynote presentation at last year’s FTC hearing on vertical mergers:

And, finally, the key policy issue is the issue is not about whether or not there are efficiencies; the issue is whether the efficiencies are merger-specific. As I pointed out before, Coase stressed that you can get vertical integration by contract. Very often, you can achieve the vertical efficiencies if they occur, but with contracts rather than having to merge.

And later, in the discussion following his talk:

If there is vertical integration by contract… it meant you could get all the efficiencies from vertical integration with a contract. You did not actually need the vertical integration. 

Salop thus argues that because the existence of a “contract solution” to firm problems can often generate the same sorts of efficiencies as when firms opt to merge, enforcers and courts should generally adopt a presumption against vertical mergers relative to contracting:

Coase’s door swings both ways: Efficiencies often can be achieved by vertical contracts, without the potential anticompetitive harms from merger

In that vertical restraints are characterized as “just” vertical integration “by contract,” then claimed efficiencies in problematical mergers might be achieved with non-merger contracts that do not raise the same anticompetitive concerns. (emphasis in original)

(Salop isn’t alone in drawing such a conclusion, of course; Carl Shapiro, for example, has made a similar point (as have others)).

In our next post we explore the policy errors implicated by this contract/merger equivalency assumption. But here we want to consider whether it makes logical sense in the first place.

The logic of vertical integration is not commutative 

It is true that, where contracts are observed, they are likely to be as efficient as (or, actually, more efficient than) merger. But, by the same token, it is also true that where mergers are observed they are likely more efficient than contracts. Indeed, the entire reason for integration is efficiency relative to what could be done by contract — this is the essence of the so-called “make-or-buy” decision.

For example, a firm that decides to buy its own warehouse has determined that doing so is more efficient than renting warehouse space. Some of these efficiencies can be measured and quantified (e.g., carrying costs of ownership vs. the cost of rent), but many efficiencies cannot be easily measured or quantified (e.g., layout of the facility or site security). Under the contract/merger equivalency assumption, the benefits of owning a warehouse can be achieved “very often” by renting warehouse space. But the fact that many firms using warehouses own some space and rent some space indicates that the make-or-buy decision is often unique to each firm’s idiosyncratic situation. Moreover, the distinctions driving those differences will not always be readily apparent, and whether contracting or integrating is preferable in any given situation may not be inferred from the existence of one or the other elsewhere in the market — or even in the same firm!

There is no reason to presume in any given situation that the outcome from contracting would be the same as from merging, even where both are notionally feasible. The two are, quite simply, different bargaining environments, each with a different risk and cost allocation; accounting treatment; effect on employees, customers, and investors; tax consequence, etc. Even if the parties accomplished nominally “identical” outcomes, they would not, in fact, be identical.

Meanwhile, what if the reason for failure to contract, or the reason to prefer merger, has nothing to do with efficiency? What if there were no anticompetitive aim but there were a tax advantage? What if one of the parties just wanted a larger firm in order to satisfy the CEO’s ego? That these are not cognizable efficiencies under antitrust law is clear. But the adoption of a presumption of equivalence between contract and merger would — ironically — entail their incorporation into antitrust law just the same, by virtue of their effective prohibition under antitrust law.

In other words, if the assumption is that contract and merger are equally efficient unless proven otherwise, but the law adopts a suspicion (or, even worse, a presumption) that vertical mergers are anticompetitive which can be rebutted only with highly burdensome evidence of net efficiency gain, this effectively deputizes antitrust law to enforce a preconceived notion of “merger appropriateness” that does not necessarily turn on efficiencies. There may (or may not) be sensible policy reasons for adopting such a stance, but they aren’t antitrust reasons.

More fundamentally, however, while there are surely some situations in which contractual restraints might be able to achieve similar organizational and efficiency gains as a merger, the practical realities of achieving not just greater efficiency, but a whole host of non-efficiency-related, yet nonetheless valid, goals, are rarely equivalent between the two.

It may be that the parties don’t know what they don’t know to such an extent that any contract would be too incomplete, and therefore too costly, to be workable. But incomplete contracts and ambiguous control and ownership rights are much less of an issue on an ongoing basis after a merger. 

As noted, there is no basis for assuming that the structure of a merger and a contract would be identical. In the same way, there is no basis for assuming that the knowledge transfer that would result from a merger would be the same as that which would result from a contract — and in ways that the parties could even specify or reliably calculate in advance. Knowing that the prospect for knowledge “synergies” would be higher with a merger than a contract might be sufficient to induce the merger outcome. But asked to provide evidence that the parties could not engage in the same conduct via contract, the parties would be unable to do so. The consequence, then, would be the loss of potential gains from closer integration.

At the same time, the cavalier assumption that parties would be able — legally — to enter into an analogous contract in lieu of a merger is problematic, given that it would likely be precisely the form of contract (foreclosing downstream or upstream access) that is alleged to create problems with the merger in the first place.

At the FTC hearings last year, Francine LaFontaine highlighted this exact concern:

I want to reemphasize that there are also rules against vertical restraints in antitrust laws, and so to say that the firms could achieve the mergers outcome by using vertical restraints is kind of putting them in a circular motion where we are telling them you cannot merge because you could do it by contract, and then we say, but these contract terms are not acceptable.

Indeed, legal risk is one of the reasons why a merger might be preferable to a contract, and because the relevant markets here are oligopoly markets, the possibility of impermissible vertical restraints between large firms with significant market share is quite real.

More important, the assumptions underlying the contention that contracts and mergers are functionally equivalent legal devices fail to appreciate the importance of varied institutional environments. Consider that one reason some takeovers are hostile is that incumbent managers don’t want to merge, and often believe that they are running a company as well as it can be run — that a change of corporate control would not improve efficiency. The same presumptions may also underlie refusals to contract and, even more likely, may explain why, to the other firm, a contract would be ineffective.

But, while there is no way to contract without bilateral agreement, there is a corporate control mechanism to force a takeover. In this institutional environment a merger may be easier to realize than a contract (and that applies even to a consensual merger, of course, given the hostile outside option). In this case, again, the assumption that contract should be the relevant baseline and the preferred mechanism for coordination is misplaced — even if other firms in the industry are successfully accomplishing the same thing via contract, and even if a contract would be more “efficient” in the abstract.

Conclusion

Properly understood, the choice of whether to contract or merge derives from a host of complicated factors, many of which are difficult to observe and/or quantify. The contract/merger equivalency assumption — and the species of “least-restrictive alternative” reasoning that would demand onerous efficiency arguments to permit a merger when a contract was notionally possible — too readily glosses over these complications and unjustifiably embraces a relative hostility to vertical mergers at odds with both theory and evidence.

Rather, as has long been broadly recognized, there can be no legally relevant presumption drawn against a company when it chooses one method of vertical integration over another in the general case. The agencies should clarify in the draft guidelines that the mere possibility of integration via contract or the inability of merging parties to rigorously describe and quantify efficiencies does not condemn a proposed merger.

Writing in the New York Times, journalist E. Tammy Kim recently called for Seattle and other pricey, high-tech hubs to impose a special tax on Microsoft and other large employers of high-paid workers. Efficiency demands such a tax, she says, because those companies are imposing a negative externality: By driving up demand for housing, they are causing rents and home prices to rise, which adversely affects city residents.

Arguing that her proposal is “akin to a pollution tax,” Ms. Kim writes:

A half-century ago, it seemed inconceivable that factories, smelters or power plants should have to account for the toxins they released into the air.  But we have since accepted the idea that businesses should have to pay the public for the negative externalities they cause.

It is true that negative externalities—costs imposed on people who are “external” to the process creating those costs (as when a factory belches rancid smoke on its neighbors)—are often taxed. One justification for such a tax is fairness: It seems inequitable that one party would impose costs on another; justice may demand that the victimizer pay. The justification cited by the economist who first proposed such taxes, though, was something different. In his 1920 opus, The Economics of Welfare, British economist A.C. Pigou proposed taxing behavior involving negative externalities in order to achieve efficiency—an increase in overall social welfare.   

With respect to the proposed tax on Microsoft and other high-tech employers, the fairness argument seems a stretch, and the efficiency argument outright fails. Let’s consider each.

To achieve fairness by forcing a victimizer to pay for imposing costs on a victim, one must determine who is the victimizer. Ms. Kim’s view is that Microsoft and its high-paid employees are victimizing (imposing costs on) incumbent renters and lower-paid homebuyers. But is that so clear?

Microsoft’s desire to employ high-skilled workers, and those employees’ desire to live near their work, conflict with incumbent renters’ desire for low rent and lower-paid homebuyers’ desire for cheaper home prices. If Microsoft got its way, incumbent renters and lower-paid homebuyers would be worse off.

But incumbent renters’ and lower-paid homebuyers’ insistence on low rents and home prices conflicts with the desires of Microsoft, the high-skilled workers it would like to hire, and local homeowners. If incumbent renters and lower-paid homebuyers got their way and prevented Microsoft from employing high-wage workers, Microsoft, its potential employees, and local homeowners would be worse off. Who is the victim here?

As Nobel laureate Ronald Coase famously observed, in most cases involving negative externalities, there is a reciprocal harm: Each party is a victim of the other party’s demands and a victimizer with respect to its own. When both parties are victimizing each other, it’s hard to “do justice” by taxing “the” victimizer.

A desire to achieve efficiency provides a sounder basis for many so-called Pigouvian taxes. With respect to Ms. Kim’s proposed tax, however, the efficiency justification fails. To see why that is so, first consider how it is that Pigouvian taxes may enhance social welfare.

When a business engages in some productive activity, it uses resources (labor, materials, etc.) to produce some sort of valuable output (e.g., a good or service). In determining what level of productive activity to engage in (e.g., how many hours to run the factory, etc.), it compares its cost of engaging in one more unit of activity to the added benefit (revenue) it will receive from doing so. If its so-called “marginal cost” from the additional activity is less than or equal to the “marginal benefit” it will receive, it will engage in the activity; otherwise, it won’t.  

When the business is bearing all the costs and benefits of its actions, this outcome is efficient. The cost of the inputs used in production is determined by the value they could generate in alternative uses. (For example, if a flidget producer could create $4 of value from an ounce of tin, a widget-maker would have to bid at least $4 to win that tin from the flidget-maker.) If a business finds that continued production generates additional revenue (reflective of consumers’ subjective valuation of the business’s additional product) in excess of its added cost (reflective of the value its inputs could create if deployed toward their next-best use), then producing more moves productive resources to their highest and best uses, enhancing social welfare. This outcome is “allocatively efficient,” meaning that productive resources have been allocated in a manner that wrings the greatest possible value from them.

Allocative efficiency may not result, though, if the producer is able to foist some of its costs onto others.  Suppose that it costs a producer $4.50 to make an additional widget that he could sell for $5.00. He’d make the widget. But what if producing the widget created pollution that imposed $1 of cost on the producer’s neighbors? In that case, it could be inefficient to produce the widget; the total marginal cost of doing so, $5.50, might well exceed the marginal benefit produced, which could be as low as $5.00. Negative externalities, then, may result in an allocative inefficiency—i.e., a use of resources that produces less total value than some alternative use.

Pigou’s idea was to use taxes to prevent such inefficiencies. If the government were to charge the producer a tax equal to the cost his activity imposed on others ($1 in the above example), then he would capture all the marginal benefit and bear all the marginal cost of his activity. He would thus be motivated to continue his activity only to the point at which its total marginal benefit equaled its total marginal cost. The point of a Pigouvian tax, then, is to achieve allocative efficiency—i.e., to channel productive resources toward their highest and best ends.
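Laying out the arithmetic of the widget example makes the mechanism plain:

        Private marginal cost = $4.50, price = $5.00, so the producer makes the widget.
        Social marginal cost = $4.50 + $1.00 of pollution cost = $5.50 > $5.00, so producing the widget may destroy value.
        With a $1.00 Pigouvian tax, private marginal cost = $4.50 + $1.00 = $5.50 > $5.00, so the producer stops.

Once the tax internalizes the $1 of external cost, the producer’s private calculation coincides with the social one, which is precisely the allocative-efficiency rationale described above.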

When it comes to the negative externality Ms. Kim has identified—an increase in housing prices occasioned by high-tech companies’ hiring of skilled workers—the efficiency case for a Pigouvian tax crumbles. That is because the external cost at issue here is a “pecuniary” externality, a special sort of externality that does not generate inefficiency.

A pecuniary externality is one where the adverse third-party effect consists of an increase in market prices. If that’s the case, the allocative inefficiency that may justify Pigouvian taxes does not exist. There’s no inefficiency from the mere fact that buyers pay more.  Their loss is perfectly offset by a gain to sellers, and—here’s the crucial part—the higher prices channel productive resources toward, not away from, their highest and best ends. High rent levels, for example, signal to real estate developers that more resources should be devoted to creating living spaces within the city. That’s allocatively efficient.

Now, it may well be the case that government policies thwart developers from responding to those salutary price signals. The cities that Ms. Kim says should impose a tax on high-tech employers—Seattle, San Francisco, Austin, New York, and Boulder—have some of the nation’s most restrictive real estate development rules. But that’s a government failure, not a market failure.

In the end, Ms. Kim’s pollution tax analogy fails. The efficiency case for a Pigouvian tax to remedy negative externalities does not apply when, as here, the externality at issue is pecuniary.

For more on pecuniary versus “technological” (non-pecuniary) externalities and appropriate responses thereto, check out Chapter 4 of my recent book, How to Regulate: A Guide for Policymakers.

In the face of an unprecedented surge of demand for bandwidth as Americans responded to COVID-19, the nation’s Internet infrastructure delivered for urban and rural users alike. In fact, since the crisis began in March, there has been no appreciable degradation in either the quality or availability of service. That success story is as much about the network’s robust technical capabilities as it is about the competitive environment that made the enormous private infrastructure investments to build the network possible.

Yet, in spite of that success, calls to blind ISP pricing models to the bandwidth demands of users by preventing firms from employing “usage-based billing” (UBB) have again resurfaced. Today those demands are arriving in two waves: first, in the context of a petition by Charter Communications to employ the practice as the conditions of its merger with Time Warner Cable become ripe for review; and second in the form of complaints about ISPs re-imposing UBB following an end to the voluntary temporary halting of the practice during the first months of the COVID-19 pandemic — a move that was an expansion by ISPs of the Keep Americans Connected Pledge championed by FCC Chairman Ajit Pai.

In particular, critics believe they have found clear evidence to support their repeated claims that UBB isn’t necessary for network management purposes as (they assert) ISPs have long claimed.  Devin Coldewey of TechCrunch, for example, recently asserted that:

caps are completely unnecessary, existing only as a way to squeeze more money from subscribers. Data caps just don’t matter any more…. Think about it: If the internet provider can even temporarily lift the data caps, then there is definitively enough capacity for the network to be used without those caps. If there’s enough capacity, then why did the caps exist in the first place? Answer: Because they make money.

The thing is, though, ISPs did not claim that UBB was about the day-to-day “manage[ment of] network loads.” Indeed, the network management strawman has taken on a life of its own. It turns out that if you follow the thread of articles in an attempt to substantiate the claim (for instance: here, to here, to here, to here), it is just a long line of critics citing to each other’s criticisms of this purported claim by ISPs. But never do they cite to the ISPs themselves making this assertion — only to instances where ISPs offer completely different explanations, coupled with the critics’ claims that such examples show only that ISPs are now changing their tune. In reality, the imposition of usage-based billing is, and has always been, a basic business decision — as it is for every other company that uses it (which is to say: virtually all companies).

What’s UBB really about?

For critics, however, UBB is never just a “basic business decision.” Rather, the only conceivable explanations for UBB are network management and extraction of money. There is no room in this conception of the practice for straightforward pricing decisions that simply differentiate prices according to customers’ usage of the services. Nor does this viewpoint recognize the importance of these pricing practices for long-term network cultivation in the form of investment in increasing capacity to meet the increased demands generated by users.

But to disregard these actual reasons for the use of UBB is to ignore what is economically self-evident.

In simple terms, UBB allows networks to charge heavy users more, thereby enabling them to recover more costs from these users and to keep prices lower for everyone else. In effect, UBB ensures that the few heaviest users subsidize the vast majority of other users, rather than the other way around.

A flat-rate pricing mandate would rule out pricing structures that align cost recovery with usage. In such a world an ISP couldn’t simply offer a lower price to lighter users for a basic tier and rely on higher revenues from the heaviest users to cover the costs of network investment. Instead, it would have to finance its ability to improve its network to meet the needs of the most demanding users out of higher prices charged to all users, including the least demanding users who make up the vast majority of subscribers today (for example, according to Comcast, 95 percent of its subscribers use less than 1.2 TB of data monthly).
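To make the arithmetic concrete, here is a minimal sketch, in Python, of how the same network cost can be recovered under a uniform flat rate versus a usage-based structure. Every figure below is an invented assumption, loosely patterned on the proportions cited in this post rather than on any actual ISP’s data.

    # Hypothetical illustration: recovering the same monthly network cost from 100
    # subscribers under flat-rate vs. usage-based billing. All figures are invented.
    n_light, n_heavy = 95, 5        # 5% of subscribers are heavy users
    tb_light, tb_heavy = 0.5, 3.0   # assumed monthly data use per subscriber (TB)
    network_cost = 3_000.0          # assumed monthly cost to be recovered from this group

    total_tb = n_light * tb_light + n_heavy * tb_heavy   # 62.5 TB; heavy users ~24% of traffic

    # Flat-rate recovery: every subscriber pays the same amount.
    flat_bill = network_cost / (n_light + n_heavy)       # $30.00 each

    # Usage-based recovery: a fixed base charge plus a per-TB charge.
    base = 20.0
    per_tb = (network_cost - base * (n_light + n_heavy)) / total_tb   # $16.00 per TB
    light_bill = base + per_tb * tb_light                 # $28.00
    heavy_bill = base + per_tb * tb_heavy                 # $68.00

    print(f"Flat rate: every subscriber pays ${flat_bill:.2f}")
    print(f"Usage-based: light users pay ${light_bill:.2f}, heavy users pay ${heavy_bill:.2f}")

Both structures recover the same $3,000, but under the mandated flat rate the 95 light users each pay more so that the five heaviest users can pay far less; usage-based billing reverses that transfer.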

On this basis, UBB is a sensible (and equitable, as some ISPs note) way to share the cost of building, maintaining, and upgrading the nation’s networks that simultaneously allows ISPs to react to demand changes in the market while enabling consumers to purchase a tier of service commensurate with their level of use. Indeed, charging customers based on the quality and/or amount of a product they use is a benign, even progressive, practice that insulates the majority of consumers from the obligation to cross-subsidize the most demanding customers.

Objections to the use of UBB fall generally into two categories. One stems from the baseline policy misapprehension that UBB is needed to manage the network, a fallacy dispelled above. The other is borne of a simple lack of familiarity with the practice.

Consider that, in the context of Internet services, broadband customers are accustomed to the notion that access to greater data speed is more costly than the alternative, but are underexposed to the related notion of charging based upon broadband data consumption. Below, we’ll discuss the prevalence of UBB across sectors, how it works in the context of broadband Internet service, and the ultimate benefit associated with allowing for a diversity of pricing models among ISPs.

Usage-based pricing in other sectors

To nobody’s surprise, usage-based pricing is common across all sectors of the economy. Anything you buy by the unit, or by weight, is subject to “usage-based pricing.” Thus, this is how we buy apples from the grocery store and gasoline for our cars.

Usage-based pricing need not always be so linear, either. In the tech sector, for instance, when you hop in a ride-sharing service like Uber or Lyft, you’re charged a base fare, plus a rate that varies according to the distance of your trip. By the same token, cloud storage services like Dropbox and Box operate under a “freemium” model in which a basic amount of storage and services is offered for free, while access to higher storage tiers and enhanced services costs increasingly more. In each case the customer is effectively responsible (at least in part) for supporting the service to the extent of her use of its infrastructure.

Even in sectors in which virtually all consumers are obligated to purchase products and where regulatory scrutiny is profound — as is the case with utilities and insurance — non-linear and usage-based pricing are still common. That’s because customers who use more electricity or who drive their vehicles more use a larger fraction of shared infrastructure, whether physical conduits or a risk-sharing platform. The regulators of these sectors recognize that tremendous public good is associated with the persistence of utility and insurance products, and that fairly apportioning the costs of their operations requires differentiating between customers on the basis of their use. In point of fact (as we’ve known at least since Ronald Coase pointed it out in 1946), the most efficient and most equitable pricing structure for such products is a two-part tariff incorporating both a fixed, base rate, as well as a variable charge based on usage.  
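In its simplest form, the two-part tariff Coase described can be written as

        Bill = F + p times q,

where F is a fixed charge that contributes to the shared infrastructure, p is the per-unit price, and q is the customer’s usage. The ISP plans discussed below are a close variant: the fixed charge covers a generous basic allotment, and the per-unit charge applies only to usage above that allotment.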

Pricing models that don’t account for the extent of customer use are vanishingly rare. “All-inclusive” experiences like Club Med or the Golden Corral all-you-can-eat buffet are the exception and not the rule when it comes to consumer goods. And it is well-understood that such examples adopt effectively regressive pricing — charging everyone a high enough price to ensure that they earn sufficient return from the vast majority of light eaters to offset the occasional losses from the gorgers. For most eaters, in other words, a buffet lunch tends to cost more and deliver less than a menu-based lunch. 

All of which is to say that the typical ISP pricing model — in which charges are based on a generous, and historically growing, basic tier coupled with an additional charge that increases with data use that exceeds the basic allotment — is utterly unremarkable. Rather, the mandatory imposition of uniform or flat-fee pricing would be an aberration.

Aligning network costs with usage

Throughout its history, Internet usage has increased constantly and often dramatically. This ever-growing need has necessitated investment in US broadband infrastructure running into the tens of billions annually. Faced with the need for this investment, UBB is a tool that helps to equitably align network costs with different customers’ usage levels in a way that promotes both access and resilience.

As President Obama’s first FCC Chairman, Julius Genachowski, put it:

Our work has also demonstrated the importance of business innovation to promote network investment and efficient use of networks, including measures to match price to cost such as usage-based pricing.

Importantly, it is the marginal impact of the highest-usage customers that drives a great deal of those network investment costs. In the case of one ISP, a mere 5 percent of residential users make up over 20 percent of its network usage. Necessarily then, in the absence of UBB and given the constant need for capacity expansion, uniform pricing would typically act to disadvantage low-volume customers and benefit high-volume customers.

Even Tom Wheeler — President Obama’s second FCC Chairman and the architect of utility-style regulation of ISPs — recognized this fact and chose to reject proposals to ban UBB in the 2015 Open Internet Order, explaining that:

[P]rohibiting tiered or usage-based pricing and requiring all subscribers to pay the same amount for broadband service, regardless of the performance or usage of the service, would force lighter end users of the network to subsidize heavier end users. It would also foreclose practices that may appropriately align incentives to encourage efficient use of networks. (emphasis added)

When it comes to expanding Internet connectivity, the policy ramifications of uniform pricing are regressive. As such, they run counter to the stated goals of policymakers across the political spectrum insofar as they deter low-volume users — presumably, precisely the marginal users who may be disinclined to subscribe in the first place —  from subscribing by saddling them with higher prices than they would face with capacity pricing. Closing the digital divide means supporting the development of a network that is at once sustainable and equitable on the basis of its scope and use. Mandated uniform pricing accomplishes neither.

Of similarly profound importance is the need to ensure that Internet infrastructure is ready for demand shocks, as we saw with the COVID-19 crisis. Linking pricing to usage gives ISPs the incentive and wherewithal to build and maintain high-capacity networks to cater to the ever-growing expectations of high-volume users, while also encouraging the adoption of network efficiencies geared towards conserving capacity (e.g., caching, downloading at off-peak hours rather than streaming during peak periods).

Contrary to the claims of some that the success of ISPs’ networks during the COVID-19 crisis shows that UBB is unnecessary and extractive, the recent increases in network usage (which may well persist beyond the eventual end of the crisis) demonstrate the benefits of nonlinear pricing models like UBB. Indeed, the consistent efforts to build out the network to serve high-usage customers, funded in part by UBB, redound not only to the advantage of abnormal users in regular times, but also to the advantage of regular users in abnormal times.

The need for greater capacity along with capacity-conserving efficiencies has been underscored by the scale of the demand shock among high-load users resulting from COVID-19. According to OpenVault, a data use tracking service, the number of “power users” (those using 1 TB/month or more) and “extreme power users” (those using 2 TB/month or more) jumped 138 percent and 215 percent, respectively. Power users now represent 10 percent of subscribers across the network, while extreme power users comprise 1.2 percent of subscribers.

Pricing plans predicated on load volume necessarily evolve along with network capacity, but at this moment the application of UBB for monthly loads above 1TB ensures that ISPs maintain an incentive to cater to power users and extreme power users alike. In doing so, ISPs are also ensuring that all users are protected when the Internet’s next abnormal — but, sadly, predictable — event arrives.

At the same time, UBB also helps to facilitate the sort of customer-side network efficiencies that may emerge as especially important during times of abnormally elevated demand. Customers’ usage need not be indifferent to the value of the data they use, and usage-based pricing helps to ensure that data usage aligns not only with costs but also with the data’s value to consumers. In this way the behavior of both ISPs and customers will better reflect the objective realities of the nation’s networks and their limits.

The case for pricing freedom

Finally, it must be noted that ISPs are not all alike, and that the market sustains a range of pricing models across ISPs according to what suits their particular business models, network characteristics, load capacity, and user types (among other things). Consider that even ISPs that utilize UBB almost always offer unlimited data products, while some ISPs choose to adopt uniform pricing to differentiate their offerings. In fact, at least one ISP has moved to uniform billing in light of COVID-19 to provide its customers with “certainty” about their bills.

The mistake isn’t in any given ISP electing a uniform billing structure or a usage-based billing structure; rather it is in prescribing a single pricing structure for all ISPs. Claims that such price controls are necessary because consumers are harmed by UBB ignore its prevalence across the economy, its salutary effect on network access and resilience, and the manner in which it promotes affordability and a sensible allocation of cost recovery across consumers.

Moreover, network costs and traffic demand patterns are dynamic, and the availability of UBB — among other pricing schemes — also allows ISPs to tailor their offerings to those changing conditions in a manner that differentiates them from their competitors. In doing so, those offerings are optimized to be attractive in the moment, while still facilitating network maintenance and expansion in the future.

Where economically viable, more choice is always preferable. The notion that consumers will somehow be harmed if they get to choose Internet services based not only on speed, but also load, is a specious product of the confused and the unfamiliar. The sooner the stigma around UBB is overcome, the better-off the majority of US broadband customers will be.

Hardly a day goes by without news of further competition-related intervention in the digital economy. The past couple of weeks alone have seen the European Commission announce various investigations into Apple’s App Store (here and here), as well as reaffirming its desire to regulate so-called “gatekeeper” platforms. Not to mention the CMA issuing its final report regarding online platforms and digital advertising.

While the limits of these initiatives have already been thoroughly dissected (e.g. here, here, here), a fundamental question seems to have eluded discussions: What are authorities trying to achieve here?

At first sight, the answer might appear to be extremely simple. Authorities want to “bring more competition” to digital markets. Furthermore, they believe that this competition will not arise spontaneously because of the underlying characteristics of digital markets (network effects, economies of scale, tipping, etc). But while it may have some intuitive appeal, this answer misses the forest for the trees.

Let us take a step back. Digital markets could have taken a vast number of shapes, so why have they systematically gravitated towards those very characteristics that authorities condemn? For instance, if market tipping and consumer lock-in are so problematic, why is it that new corners of the digital economy continue to emerge via closed platforms, as opposed to collaborative ones? Indeed, if recent commentary is to be believed, it is the latter that should succeed because they purportedly produce greater gains from trade. And if consumers and platforms cannot realize these gains by themselves, then we should see intermediaries step into the breach – i.e. arbitrage. This does not seem to be happening in the digital economy. The naïve answer is to say that this is precisely the problem; the harder task is to understand why.

To draw a parallel with evolution, in the late 18th century, botanists discovered an orchid with an unusually long spur. This made its nectar incredibly hard to reach for insects. Rational observers at the time could be forgiven for thinking that this plant made no sense, that its design was suboptimal. And yet, decades later, Darwin conjectured that the plant could be explained by a (yet to be discovered) species of moth with a proboscis that was long enough to reach the orchid’s nectar. Decades after his death, the discovery of the xanthopan moth proved him right.

Returning to the digital economy, we thus need to ask why the platform business models that authorities desire are not the ones that emerge organically. Unfortunately, this complex question is mostly overlooked by policymakers and commentators alike.

Competition law on a spectrum

To understand the above point, let me start with an assumption: the digital platforms that have been subject to recent competition cases and investigations can all be classified along two (overlapping) dimensions: the extent to which they are open (or closed) to “rivals” and the extent to which their assets are propertized (as opposed to shared). This distinction borrows heavily from Jonathan Barnett’s work on the topic. I believe that applying such a classification yields a mapping along the following lines:

While these classifications are certainly not airtight, this would be my reasoning:

In the top-left quadrant, Apple and Microsoft both operate closed platforms that are highly propertized (Apple’s platform is likely even more closed than Microsoft’s Windows ever was). Both firms notably control who is allowed on their platform and how they can interact with users. Apple notably vets the apps that are available on its App Store and influences how payments can take place. Microsoft famously restricted OEMs’ freedom to distribute Windows PCs as they saw fit (notably by “imposing” certain default apps and, arguably, limiting the compatibility of Microsoft systems with servers running other OSs). 

In the top-right quadrant, the business models of Amazon and Qualcomm are much more “open”, yet they remain highly propertized. Almost anyone is free to implement Qualcomm’s IP – so long as they conclude a license agreement to do so. Likewise, there are very few limits on the goods that can be sold on Amazon’s platform, but Amazon does, almost by definition, exert significant control over the way in which the platform is monetized. Retailers can notably pay Amazon for product placement, fulfilment services, etc. 

Finally, Google Search and Android sit in the bottom-left corner. Both of these services are weakly propertized. The Android source code is shared freely via an open source license, and Google’s apps can be preloaded by OEMs free of charge. The only limit is that Google partially closes its platform, notably by requiring that its own apps (if they are pre-installed) receive favorable placement. Likewise, Google’s search engine is only partially “open”. While any website can be listed on the search engine, Google selects a number of specialized results that are presented more prominently than organic search results (weather information, maps, etc). There is also some amount of propertization, namely that Google sells the best “real estate” via ad placement. 

Enforcement

Readers might ask: what is the point of this classification? The answer is that in each of the above cases, competition intervention attempted (or is attempting) to move firms/platforms towards more openness and less propertization – the opposite of their original design.

The Microsoft cases and the Apple investigation both sought/seek to bring more openness and less propertization to these respective platforms. Microsoft was made to share proprietary data with third parties (less propertization) and open up its platform to rival media players and web browsers (more openness). The same applies to Apple. Available information suggests that the Commission is seeking to limit the fees that Apple can extract from downstream rivals (less propertization), as well as ensuring that it cannot exclude rival mobile payment solutions from its platform (more openness).

The various cases that were brought by EU and US authorities against Qualcomm broadly sought to limit the extent to which it was monetizing its intellectual property. The European Amazon investigation centers on the way in which the company uses data from third-party sellers (and ultimately the distribution of revenue between them and Amazon). In both of these cases, authorities are ultimately trying to limit the extent to which these firms propertize their assets.

Finally, both of the Google cases, in the EU, sought to bring more openness to the company’s main platform. The Google Shopping decision sanctioned Google for purportedly placing its services more favorably than those of its rivals. And the Android decision notably sought to facilitate rival search engines’ and browsers’ access to the Android ecosystem. The same appears to be true of ongoing investigations in the US.

What is striking about these decisions/investigations is that authorities are pushing back against the distinguishing features of the platforms they are investigating. Closed (or relatively closed) platforms are being opened up, and firms with highly propertized assets are being made to share them (or, at the very least, monetize them less aggressively).

The empty quadrant

All of this would not be very interesting if it weren’t for a final piece of the puzzle: the model of open and shared platforms that authorities apparently favor has traditionally struggled to gain traction with consumers. Indeed, there seem to be very few successful consumer-oriented products and services in this space.

There have been numerous attempts to introduce truly open consumer-oriented operating systems – both in the mobile and desktop segments. For the most part, these have ended in failure. Ubuntu and other Linux distributions remain fringe products. There have been attempts to create open-source search engines, but they have not met with success. The picture is similar in the online retail space. Amazon appears to have beaten eBay despite the latter being more open and less propertized – Amazon has historically charged higher fees than eBay and offers sellers much less freedom in the way they sell their goods. This theme is repeated in the standardization space. There have been innumerable attempts to impose open royalty-free standards. At least in the mobile internet industry, few if any of these have taken off (5G and WiFi are the best examples of this trend). That pattern is repeated in other highly-standardized industries, like digital video formats. Most recently, the proprietary Dolby Vision format seems to be winning the war against the open HDR10+ format. 

This is not to say there haven’t been any successful ventures in this space – the internet, blockchain and Wikipedia all spring to mind – or that we will not see more decentralized goods in the future. But by and large firms and consumers have not yet taken to the idea of open and shared platforms. And while some “open” projects have achieved tremendous scale, the consumer-facing side of these platforms is often dominated by intermediaries that opt for much more traditional business models (think of Coinbase and Blockchain, or Android and Linux).

An evolutionary explanation?

The preceding paragraphs have posited a recurring reality: the digital platforms that competition authorities are trying to bring about are fundamentally different from those that emerge organically. This raises the question: why have authorities’ ideal platforms, so far, failed to achieve truly meaningful success at consumers’ end of the market? 

I can see at least three potential explanations:

  1. Closed/propertized platforms have systematically (and perhaps anticompetitively) thwarted their open/shared rivals;
  2. Shared platforms have failed to emerge because they are much harder to monetize (and there is thus less incentive to invest in them);
  3. Consumers have opted for closed systems precisely because they are closed.

I will not go into details over the merits of the first conjecture. Current antitrust debates have endlessly rehashed this proposition. However, it is worth mentioning that many of today’s dominant platforms overcame open/shared rivals well before they achieved their current size (Unix is older than Windows, Linux is older than iOS, eBay and Amazon are basically the same age, etc.). It is thus difficult to make the case that the early success of their business models was down to anticompetitive behavior.

Much more interesting is the fact that options (2) and (3) are almost systematically overlooked – especially by antitrust authorities. And yet, if true, both of them would strongly cut against current efforts to regulate digital platforms and ramp up antitrust enforcement against them. 

For a start, it is not unreasonable to suggest that highly propertized platforms are generally easier to monetize than shared ones (2). For example, open-source platforms often rely on complementarities for monetization, but this tends to be vulnerable to outside competition and free-riding. If this is true, then there is a natural incentive for firms to invest and innovate in more propertized environments. In turn, competition enforcement that limits a platform’s ability to propertize its assets may harm innovation.

Similarly, authorities should at the very least reflect on whether consumers really want the more “competitive” ecosystems that they are trying to design (3).

For instance, it is striking that the European Commission has a long track record of seeking to open up digital platforms (the Microsoft decisions are perhaps the most salient example). And yet, even after these interventions, new firms have kept on using the very business model that the Commission reprimanded. Apple tied the Safari browser to its iPhones; Google went to some lengths to ensure that Chrome was preloaded on devices; Samsung phones come with Samsung Internet as the default. But this has not deterred consumers. A sizable share of them notably opted for Apple’s iPhone, which is even more centrally curated than Microsoft Windows ever was (and the same is true of Apple’s macOS). 

Finally, it is worth noting that the remedies imposed by competition authorities are anything but unmitigated successes. Windows XP N (the version of Windows that came without Windows Media Player) was an unprecedented flop – it sold a paltry 1,787 copies. Likewise, the internet browser ballot box imposed by the Commission was so irrelevant to consumers that it took months for authorities to notice that Microsoft had removed it, in violation of the Commission’s decision. 

There are many reasons why consumers might prefer “closed” systems – even when they have to pay a premium for them. Take the example of app stores. Maintaining some control over the apps that can access the store notably enables platforms to easily weed out bad players. Similarly, controlling the hardware resources that each app can use may greatly improve device performance. In other words, centralized platforms can eliminate negative externalities that “bad” apps impose on rival apps and consumers. This is especially true when consumers struggle to attribute dips in performance to an individual app, rather than the overall platform. 

It is also conceivable that consumers prefer to make many of their decisions at the inter-platform level, rather than within each platform. In simple terms, users arguably make their most important decision when they choose between an Apple or Android smartphone (or a Mac and a PC, etc.). In doing so, they can select their preferred app suite with one simple decision. They might thus purchase an iPhone because they like the secure App Store, or an Android smartphone because they like the Chrome Browser and Google Search. Furthermore, forcing too many “within-platform” choices upon users may undermine a product’s attractiveness. Indeed, it is difficult to create a high-quality reputation if each user’s experience is fundamentally different. In short, contrary to what antitrust authorities seem to believe, closed platforms might be giving most users exactly what they desire. 

To conclude, consumers and firms appear to gravitate towards both closed and highly propertized platforms, the opposite of what the Commission and many other competition authorities favor. The reasons for this trend are still misunderstood, and mostly ignored. Too often, it is simply assumed that consumers benefit from more openness, and that shared/open platforms are the natural order of things. This post certainly does not purport to answer the complex question of “the origin of platforms”, but it does suggest that what some refer to as “market failures” may in fact be features that explain the rapid emergence of the digital economy. Ronald Coase said this best when he quipped that economists always find a monopoly explanation for things that they fail to understand. The digital economy might just be the latest in this unfortunate trend.

In our first post, we discussed the weaknesses of an important theoretical underpinning of efforts to expand vertical merger enforcement (including, possibly, the proposed guidelines): the contract/merger equivalency assumption.

In this post we discuss the implications of that assumption and some of the errors it leads to — including some incorporated into the proposed guidelines.

There is no theoretical or empirical justification for more vertical enforcement

Tim Brennan makes a fantastic and regularly overlooked point in his post: If it’s true, as many claim (see, e.g., Steve Salop), that firms can generally realize vertical efficiencies by contracting instead of merging, then it’s also true that they can realize anticompetitive outcomes the same way. While efficiencies have to be merger-specific in order to be relevant to the analysis, so too do harms. But where the assumption is that the outcomes of integration can generally be achieved by the “less-restrictive” means of contracting, that would apply as well to any potential harms, thus negating the transaction-specificity required for enforcement. As Dennis Carlton notes:

There is a symmetry between an evaluation of the harms and benefits of vertical integration. Each must be merger-specific to matter in an evaluation of the merger’s effects…. If transaction costs are low, then vertical integration creates neither benefits nor harms, since everything can be achieved by contract. If transaction costs exist to prevent the achievement of a benefit but not a harm (or vice-versa), then that must be accounted for in a calculation of the overall effect of a vertical merger. (Dennis Carlton, Transaction Costs and Competition Policy)

Of course, this also means that those (like us) who believe that it is not so easy to accomplish by contract what may be accomplished by merger must also consider the possibility that a proposed merger may be anticompetitive because it overcomes an impediment to achieving anticompetitive goals via contract.

There’s one important caveat, though: The potential harms that could arise from a vertical merger are the same as those that would be cognizable under Section 2 of the Sherman Act. Indeed, for a vertical merger to cause harm, it must be expected to result in conduct that would otherwise be illegal under Section 2. This means there is always the possibility of a second bite at the apple when it comes to thwarting anticompetitive conduct. 

The same cannot be said of procompetitive conduct that can arise only through merger: if the merger is erroneously prohibited before it even happens, those procompetitive benefits are simply lost.

Interestingly, Salop himself — the foremost advocate today for enhanced vertical merger enforcement — recognizes the issue raised by Brennan: 

Exclusionary harms and certain efficiency benefits also might be achieved with vertical contracts and agreements without the need for a vertical merger…. It [] might be argued that the absence of premerger exclusionary contracts implies that the merging firms lack the incentive to engage in conduct that would lead to harmful exclusionary effects. But anticompetitive vertical contracts may face the same types of impediments as procompetitive ones, and may also be deterred by potential Section 1 enforcement. Neither of these arguments thus justify a more or less intrusive vertical merger policy generally. Rather, they are factors that should be considered in analyzing individual mergers. (Salop & Culley, Potential Competitive Effects of Vertical Mergers)

In the same article, however, Salop also points to the reasons why it should be considered insufficient to leave enforcement to Sections 1 and 2 of the Sherman Act, rather than addressing potential harms at their incipiency under Section 7 of the Clayton Act:

While relying solely on post-merger enforcement might have appealing simplicity, it obscures several key facts that favor immediate enforcement under Section 7.

  • The benefit of HSR review is to prevent the delays and remedial issues inherent in after-the-fact enforcement….
  • There may be severe problems in remedying the concern….
  • Section 1 and Section 2 legal standards are more permissive than Section 7 standards….
  • The agencies might well argue that anticompetitive post-merger conduct was caused by the merger agreement, so that it would be covered by Section 7….

All in all, failure to address these kinds of issues in the context of merger review could lead to significant consumer harm and underdeterrence.

The points are (mostly) well-taken. But they also essentially amount to a preference for more and tougher enforcement against vertical restraints than the judicial interpretations of Sections 1 & 2 currently countenance — a preference, in other words, for the use of Section 7 to bolster enforcement against vertical restraints of any sort (whether contractual or structural).

The problem with that, as others have pointed out in this symposium (see, e.g., Nuechterlein; Werden & Froeb; Wright, et al.), is that there’s simply no empirical basis for adopting a tougher stance against vertical restraints in the first place. Over and over again the empirical research shows that vertical restraints and vertical mergers are unlikely to cause anticompetitive harm: 

In reviewing this literature, two features immediately stand out: First, there is a paucity of support for the proposition that vertical restraints/vertical integration are likely to harm consumers. . . . Second, a far greater number of studies found that the use of vertical restraints in the particular context studied improved welfare unambiguously. (Cooper, et al, Vertical Restrictions and Antitrust Policy: What About the Evidence?)

[W]e did not have a particular conclusion in mind when we began to collect the evidence, and we… are therefore somewhat surprised at what the weight of the evidence is telling us. It says that, under most circumstances, profit-maximizing, vertical-integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view…. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked. (Francine Lafontaine & Margaret Slade, Vertical Integration and Firm Boundaries: The Evidence)

[Table 1 in this paper] indicates that voluntarily adopted restraints are associated with lower costs, greater consumption, higher stock returns, and better chances of survival. (Daniel O’Brien, The Antitrust Treatment of Vertical Restraints: Beyond the Possibility Theorems)

In sum, these papers from 2009-2018 continue to support the conclusions from Lafontaine & Slade (2007) and Cooper et al. (2005) that consumers mostly benefit from vertical integration. While vertical integration can certainly foreclose rivals in theory, there is only limited empirical evidence supporting that finding in real markets. (GAI Comment on Vertical Mergers)

To the extent that the proposed guidelines countenance heightened enforcement relative to the status quo, they fall prey to the same defect. And while it is unclear from the fairly terse guidelines whether this is animating them, the removal of language present in the 1984 Non-Horizontal Merger Guidelines acknowledging the relative lack of harm from vertical mergers (“[a]lthough non-horizontal mergers are less likely than horizontal mergers to create competitive problems…”) is concerning.  

The shortcomings of orthodox economics and static formal analysis

There is also a further reason to think that vertical merger enforcement may be more likely to thwart procompetitive than anticompetitive arrangements relative to the status quo ante (i.e., where arrangements among vertical firms are by contract): Our lack of knowledge about the effects of market structure and firm organization on innovation and dynamic competition, and the relative hostility to nonstandard contracting, including vertical integration:

[T]he literature addressing how market structure affects innovation (and vice versa) in the end reveals an ambiguous relationship in which factors unrelated to competition play an important role. (Katz & Shelanski, Mergers and Innovation)

The fixation on the equivalency of the form of vertical integration (i.e., merger versus contract) is likely to lead enforcers to focus on static price and cost effects, and miss the dynamic organizational and informational effects that lead to unexpected, increased innovation across and within firms. 

In the hands of Oliver Williamson, this means that understanding firms in the real world entails taking an organization theory approach, in contrast to the “orthodox” economic perspective:

The lens of contract approach to the study of economic organization is partly complementary but also partly rival to the orthodox [neoclassical economic] lens of choice. Specifically, whereas the latter focuses on simple market exchange, the lens of contract is predominantly concerned with the complex contracts. Among the major differences is that non‐standard and unfamiliar contractual practices and organizational structures that orthodoxy interprets as manifestations of monopoly are often perceived to serve economizing purposes under the lens of contract. A major reason for these and other differences is that orthodoxy is dismissive of organization theory whereas organization theory provides conceptual foundations for the lens of contract. (emphasis added)

We are more likely to miss the benefits when mergers solve market inefficiencies, and more likely to see harm when they impose static costs — even if the apparent costs actually represent a move from less efficient contractual arrangements to more efficient integration.

The competition that takes place in the real world and between various groups ultimately depends upon the institution of private contracts, many of which, including the firm itself, are nonstandard. Innovation includes the discovery of new organizational forms and the application of old forms to new contexts. Such contracts prevent or attenuate market failure, moving the market toward what economists would deem a more competitive result. Indeed, as Professor Coase pointed out, many markets deemed “perfectly competitive” are in fact the end result of complex contracts limiting rivalry between competitors. This contractual competition cannot produce perfect results — no human institution ever can. Nonetheless, the result is superior to that which would obtain in a (real) world without nonstandard contracting. These contracts do not depend upon the creation or enhancement of market power and thus do not produce the evils against which antitrust law is directed. (Alan Meese, Price Theory Competition & the Rule of Reason)

Or, as Oliver Williamson more succinctly puts it:

[There is a] rebuttable presumption that nonstandard forms of contracting have efficiency purposes. (Oliver Williamson, The Economic Institutions of Capitalism)

The pinched focus of the guidelines on narrow market definition misses the bigger picture of dynamic competition over time

The proposed guidelines (and the theories of harm undergirding them) focus upon indicia of market power that may not be accurate if assessed in more realistic markets or over more relevant timeframes, and, if applied too literally, may bias enforcement against mergers with dynamic-innovation benefits but static-competition costs.  

Similarly, the proposed guidelines’ enumeration of potential efficiencies doesn’t really begin to cover the categories implicated by the organization of enterprise around dynamic considerations.

The proposed guidelines’ efficiencies section notes that:

Vertical mergers bring together assets used at different levels in the supply chain to make a final product. A single firm able to coordinate how these assets are used may be able to streamline production, inventory management, or distribution, or create innovative products in ways that would have been hard to achieve through arm’s length contracts. (emphasis added)

But it is not clear that any of these categories encompasses organizational decisions made to facilitate the coordination of production and commercialization when they are dependent upon intangible assets.

As Thomas Jorde and David Teece write:

For innovations to be commercialized, the economic system must somehow assemble all the relevant complementary assets and create a dynamically-efficient interactive system of learning and information exchange. The necessary complementary assets can conceivably be assembled by either administrative or market processes, as when the innovator simply licenses the technology to firms that already own or are willing to create the relevant assets. These organizational choices have received scant attention in the context of innovation. Indeed, the serial model relies on an implicit belief that arm’s-length contracts between unaffiliated firms in the vertical chain from research to customer will suffice to commercialize technology. In particular, there has been little consideration of how complex contractual arrangements among firms can assist commercialization — that is, translating R&D capability into profitable new products and processes….

* * *

But in reality, the market for know-how is riddled with imperfections. Simple unilateral contracts where technology is sold for cash are unlikely to be efficient. Complex bilateral and multilateral contracts, internal organization, or various hybrid structures are often required to shore up obvious market failures and create procompetitive efficiencies. (Jorde & Teece, Rule of Reason Analysis of Horizontal Arrangements: Agreements Designed to Advance Innovation and Commercialize Technology) (emphasis added)

When IP protection for a given set of valuable pieces of “know-how” is strong — easily defendable, unique patents, for example — firms can rely on property rights to efficiently contract with vertical buyers and sellers. But in cases where the valuable “know-how” is less easily defended as IP — e.g., business process innovation, managerial experience, distributed knowledge, corporate culture, and the like — the ability to partially vertically integrate through contract becomes more difficult, if not impossible.

Perhaps employing these assets is part of what is meant in the draft guidelines by “streamline.” But the fact that the draft mentions innovation only in the technological context of product innovation is at least some indication that organizational innovation is not clearly contemplated.

This is a significant lacuna. The impact of each organizational form on knowledge transfers creates a particularly strong division between integration and contract. As Enghin Atalay, Ali Hortaçsu & Chad Syverson point out:

That vertical integration is often about transfers of intangible inputs rather than physical ones may seem unusual at first glance. However, as observed by Arrow (1975) and Teece (1982), it is precisely in the transfer of nonphysical knowledge inputs that the market, with its associated contractual framework, is most likely to fail to be a viable substitute for the firm. Moreover, many theories of the firm, including the four “elemental” theories as identified by Gibbons (2005), do not explicitly invoke physical input transfers in their explanations for vertical integration. (Enghin Atalay, et al., Vertical Integration and Input Flows) (emphasis added)

There is a large economics and organization theory literature discussing how organizations are structured with respect to these sorts of intangible assets. And the upshot is that, while we start — not end, as some would have it — with the Coasian insight that firm boundaries are necessarily a function of production processes and not a hard limit, we quickly come to realize that it is emphatically not the case that integration-via-contract and integration-via-merger are always, or perhaps even often, viable substitutes.

Conclusion

The contract/merger equivalency assumption, coupled with a “least-restrictive alternative” logic that favors contract over merger, puts a thumb on the scale against vertical mergers. While the proposed guidelines as currently drafted do not necessarily portend the inflexible, formalistic application of this logic, they offer little to guide enforcers or courts away from the assumption in the important (and perhaps numerous) cases where it is unwarranted.   

Today would have been Henry Manne’s 90th birthday. When he passed away in 2015 he left behind an immense and impressive legacy. In 1991, at the inaugural meeting of the American Law & Economics Association (ALEA), Manne was named a Life Member of ALEA and, along with Nobel Laureate Ronald Coase and federal appeals court judges Richard Posner and Guido Calabresi, one of the four Founders of Law and Economics. The organization I founded, the International Center for Law & Economics, is dedicated to his memory, along with that of his great friend and mentor, UCLA economist Armen Alchian.

Manne is best known for his work in corporate governance and securities law and regulation, of course. But sometimes forgotten is that his work on the market for corporate control was motivated by concerns about analytical flaws in merger enforcement. As former FTC commissioners Maureen Ohlhausen and Joshua Wright noted in a 2015 dissenting statement:

The notion that the threat of takeover would induce current managers to improve firm performance to the benefit of shareholders was first developed by Henry Manne. Manne’s pathbreaking work on the market for corporate control arose out of a concern that antitrust constraints on horizontal mergers would distort its functioning. See Henry G. Manne, Mergers and the Market for Corporate Control, 73 J. POL. ECON. 110 (1965).

But Manne’s focus on antitrust didn’t end in 1965. Moreover, throughout his life he was a staunch critic of misguided efforts to expand the power of government, especially when these efforts claimed to have their roots in economic reasoning — which, invariably, was hopelessly flawed. As his obituary notes:

In his teaching, his academic writing, his frequent op-eds and essays, and his work with organizations like the Cato Institute, the Liberty Fund, the Institute for Humane Studies, and the Mont Pèlerin Society, among others, Manne advocated tirelessly for a clearer understanding of the power of markets and competition and the importance of limited government and economically sensible regulation.

Thus it came to be, in 1974, that Manne was called to testify before the Senate Judiciary Committee, Subcommittee on Antitrust and Monopoly, on Michigan Senator Philip A. Hart’s proposed Industrial Reorganization Act. His testimony is a tour de force, and a prescient rejoinder to the faddish advocates of today’s “hipster antitrust” — many of whom hearken longingly back to the antitrust of the 1960s and its misguided “gurus.”

Henry Manne’s trenchant testimony critiquing the Industrial Reorganization Act and its (ostensible) underpinnings is reprinted in full in this newly released ICLE white paper (with introductory material by Geoffrey Manne):

Henry G. Manne: Testimony on the Proposed Industrial Reorganization Act of 1973 — What’s Hip (in Antitrust) Today Should Stay Passé

Sen. Hart proposed the Industrial Reorganization Act in order to address perceived problems arising from industrial concentration. The bill was rooted in the belief that industry concentration led inexorably to monopoly power; that monopoly power, however obtained, posed an inexorable threat to freedom and prosperity; and that the antitrust laws (i.e., the Sherman and Clayton Acts) were insufficient to address the purported problems.

That sentiment — rooted in the reflexive application of the largely discredited structure-conduct-performance (SCP) paradigm — had already become largely passé among economists in the 1970s, but it has resurfaced today as the asserted justification for similar (although less onerous) antitrust reform legislation and for the general approach to antitrust analysis commonly known as “hipster antitrust.”

The critiques leveled against the asserted economic underpinnings of efforts like the Industrial Reorganization Act are as relevant today as they were then. As Henry Manne notes in his testimony:

To be successful in this stated aim [“getting the government out of the market”] the following dreams would have to come true: The members of both the special commission and the court established by the bill would have to be satisfied merely to complete their assigned task and then abdicate their tremendous power and authority; they would have to know how to satisfactorily define and identify the limits of the industries to be restructured; the Government’s regulation would not sacrifice significant efficiencies or economies of scale; and the incentive for new firms to enter an industry would not be diminished by the threat of a punitive response to success.

The lessons of history, economic theory, and practical politics argue overwhelmingly against every one of these assumptions.

Both the subject matter of and impetus for the proposed bill (as well as Manne’s testimony explaining its economic and political failings) are eerily familiar. The preamble to the Industrial Reorganization Act asserts that

competition… preserves a democratic society, and provides an opportunity for a more equitable distribution of wealth while avoiding the undue concentration of economic, social, and political power; [and] the decline of competition in industries with oligopoly or monopoly power has contributed to unemployment, inflation, inefficiency, an underutilization of economic capacity, and the decline of exports….

The echoes in today’s efforts to rein in corporate power by adopting structural presumptions are unmistakable. Compare, for example, this language from Sen. Klobuchar’s Consolidation Prevention and Competition Promotion Act of 2017:

[C]oncentration that leads to market power and anticompetitive conduct makes it more difficult for people in the United States to start their own businesses, depresses wages, and increases economic inequality;

undue market concentration also contributes to the consolidation of political power, undermining the health of democracy in the United States; [and]

the anticompetitive effects of market power created by concentration include higher prices, lower quality, significantly less choice, reduced innovation, foreclosure of competitors, increased entry barriers, and monopsony power.

Remarkably, Sen. Hart introduced his bill as “an alternative to government regulation and control.” Somehow, it was the antithesis of “government control” to introduce legislation that, in Sen. Hart’s words,

involves changing the life styles of many of our largest corporations, even to the point of restructuring whole industries. It involves positive government action, not to control industry but to restore competition and freedom of enterprise in the economy

Like today’s advocates of increased government intervention to design the structure of the economy, Sen. Hart sought — without a trace of irony — to “cure” the problem of politicized, ineffective enforcement by doubling down on the power of the enforcers.

Henry Manne was having none of it. As he pointedly notes in his testimony, the worst problems of monopoly power are of the government’s own making. The real threat to democracy, freedom, and prosperity is the political power amassed in the bureaucratic apparatus that frequently confers monopoly, at least as much as the monopoly power it spawns:

[I]t takes two to make that bargain [political protection and subsidies in exchange for lobbying]. And as we look around at various industries we are constrained to ask who has not done this. And more to the point, who has not succeeded?

It is unhappily almost impossible to name a significant industry in the United States that has not gained some degree of protection from the rigors of competition from Federal, State or local governments.

* * *

But the solution to inefficiencies created by Government controls cannot lie in still more controls. The politically responsible task ahead for Congress is to dismantle our existing regulatory monster before it strangles us.

We have spawned a gigantic bureaucracy whose own political power threatens the democratic legitimacy of government.

We are rapidly moving toward the worst features of a centrally planned economy with none of the redeeming political, economic, or ethical features usually claimed for such systems.

The new white paper includes Manne’s testimony in full, including his exchange with Sen. Hart and committee staffers following his prepared remarks.

It is, sadly, nearly as germane today as it was then.

One final note: The subtitle for the paper is a reference to the song “What Is Hip?” by Tower of Power. Its lyrics are decidedly apt:

You done went and found you a guru,

In your effort to find you a new you,

And maybe even managed

To raise your conscious level.

While you’re striving to find the right road,

There’s one thing you should know:

What’s hip today

Might become passé.

— Tower of Power, What Is Hip? (Emilio Castillo, John David Garibaldi & Stephen M. Kupka, What Is Hip? (Bob-A-Lew Songs 1973), from the album TOWER OF POWER (Warner Bros. 1973))

And here’s the song, in all its glory:

 

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Geoffrey A. Manne, (President, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics); and Dirk Auer, (Senior Fellow of Law & Economics, ICLE)]

Back in 2012, Covidien, a large health care products company and medical device manufacturer, purchased Newport Medical Instruments, a small ventilator developer and manufacturer. (Covidien itself was subsequently purchased by Medtronic in 2015).

Eight years later, in the midst of the coronavirus pandemic, the New York Times has just published an article revisiting the Covidien/Newport transaction, and questioning whether it might have contributed to the current shortage of ventilators.

The article speculates that Covidien’s purchase of Newport, and the subsequent discontinuation of Newport’s “Aura” ventilator — which was then being developed by Newport under a government contract — delayed US government efforts to procure mechanical ventilators until the second half of 2020 — too late to treat the first wave of COVID-19 patients:

And then things suddenly veered off course. A multibillion-dollar maker of medical devices bought the small California company that had been hired to design the new machines. The project ultimately produced zero ventilators.

That failure delayed the development of an affordable ventilator by at least half a decade, depriving hospitals, states and the federal government of the ability to stock up.

* * *

Today, with the coronavirus ravaging America’s health care system, the nation’s emergency-response stockpile is still waiting on its first shipment.

The article has generated considerable interest not so much for what it suggests about government procurement policies or for its relevance to the ventilator shortages associated with the current pandemic, but rather for its purported relevance to ongoing antitrust debates and the arguments put forward by “antitrust populists” and others that merger enforcement in the US is dramatically insufficient. 

Only a single sentence in the article itself points to a possible antitrust story — and it does nothing more than report unsubstantiated speculation from unnamed “government officials” and rival companies: 

Government officials and executives at rival ventilator companies said they suspected that Covidien had acquired Newport to prevent it from building a cheaper product that would undermine Covidien’s profits from its existing ventilator business.

Nevertheless, and right on cue, various antitrust scholars quickly framed the deal as a so-called “killer acquisition” (see also here and here).

Unsurprisingly, politicians were also quick to jump on the bandwagon, among them David Cicilline, the powerful chairman of the House Antitrust Subcommittee.

And FTC Commissioner Rebecca Kelly Slaughter quickly called for a retrospective review of the deal:

The public reporting on this acquisition raises important questions about the review of this deal. We should absolutely be looking back to figure out what happened.

These “hot takes” raise a crucial issue. The New York Times story opened the door to a welter of hasty conclusions offered to support the ongoing narrative that antitrust enforcement has failed us — in this case quite literally at the cost of human lives. But are any of these claims actually supportable?

Unfortunately, the competitive realities of the mechanical ventilator industry, as well as a more clear-eyed view of what was likely going on with the failed government contract at the heart of the story, simply do not support the “killer acquisition” story.

What is a “killer acquisition”…?

Let’s take a step back. Because monopoly profits are, by definition, higher than joint duopoly profits (all else equal), economists have long argued that incumbents may find it profitable to acquire smaller rivals in order to reduce competition and increase their profits. More specifically, incumbents may be tempted to acquire would-be entrants in order to prevent them from introducing innovations that might hurt the incumbent’s profits.
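To make the intuition concrete, here is a minimal numerical sketch (ours, not drawn from the post or from any particular paper; a textbook Cournot example assuming linear demand and constant marginal cost) of why a single firm earns more than two competing firms combined, and hence why an incumbent may be willing to pay a premium just to keep a would-be rival out of the market:

```python
# Illustrative sketch (hypothetical numbers): monopoly profit vs. joint Cournot
# duopoly profit under linear inverse demand P = a - b*Q and constant marginal cost c.
a, b, c = 100.0, 1.0, 20.0

# Monopoly: maximize (a - b*Q - c) * Q, so Q_m = (a - c) / (2b)
q_m = (a - c) / (2 * b)
profit_monopoly = (a - b * q_m - c) * q_m

# Symmetric Cournot duopoly: each firm produces q_d = (a - c) / (3b) in equilibrium
q_d = (a - c) / (3 * b)
price_duopoly = a - b * (2 * q_d)
joint_duopoly_profit = 2 * (price_duopoly - c) * q_d

print(f"Monopoly profit:      {profit_monopoly:,.0f}")       # 1,600
print(f"Joint duopoly profit: {joint_duopoly_profit:,.0f}")  # ~1,422
# The gap (about 178 here) is the most the incumbent would rationally pay, over and
# above the entrant's standalone value, for the privilege of keeping the market to itself.
```

In this stylized setting, that gap is the maximum “killer” premium; whether it is large enough to make such a strategy worthwhile in any real market is an empirical question.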

For this theory to have any purchase, however, a number of conditions must hold. Most importantly, as Colleen Cunningham, Florian Ederer, and Song Ma put it in an influential paper:

“killer acquisitions” can only occur when the entrepreneur’s project overlaps with the acquirer’s existing product…. [W]ithout any product market overlap, the acquirer never has a strictly positive incentive to acquire the entrepreneur… because, without overlap, acquiring the project does not give the acquirer any gains resulting from reduced competition, and the two bargaining entities have exactly the same value for the project.

Moreover, the authors add that:

Successfully developing a new product draws consumer demand and profits away equally from all existing products. An acquiring incumbent is hurt more by such cannibalization when he is a monopolist (i.e., the new product draws demand away only from his own existing product) than when he already faces many other existing competitors (i.e., cannibalization losses are spread over many firms). As a result, as the number of existing competitors increases, the replacement effect decreases and the acquirer’s development decisions become more similar to those of the entrepreneur

Finally, the “killer acquisition” terminology is appropriate only when the incumbent chooses to discontinue its rival’s R&D project:

If incumbents face significant existing competition, acquired projects are not significantly more frequently discontinued than independent projects. Thus, more competition deters incumbents from acquiring and terminating the projects of potential future competitors, which leads to more competition in the future.

…And what isn’t a killer acquisition?

What is left out of this account of killer acquisitions is the age-old possibility that an acquirer purchases a rival precisely because the acquirer has superior know-how or a superior governance structure that enables it to realize greater returns and higher productivity than the target could on its own. In the case of a so-called killer acquisition, this means shutting down a negative-ROI project and redeploying resources to other projects or other uses — including those that may not have any direct relation to the discontinued project. 

Such “synergistic” mergers are also — like allegedly “killer” mergers — likely to involve acquirers and targets in the same industry and with technological overlap between their R&D projects; it is in precisely these situations that the acquirer is likely to have better knowledge than the target’s shareholders that the target is undervalued because of poor governance rather than exogenous, environmental factors.  

In other words, whether an acquisition is harmful or not — as the epithet “killer” implies it is — depends on whether it is about reducing competition from a rival, on the one hand, or about increasing the acquirer’s competitiveness by putting resources to more productive use, on the other.

As argued below, it is highly unlikely that Covidien’s acquisition of Newport could be classified as a “killer acquisition.” There is thus nothing to suggest that the merger materially impaired competition in the mechanical ventilator market, or that it measurably affected the US’s efforts to fight COVID-19.

The market realities of the ventilator market and its implications for the “killer acquisition” story

1. The mechanical ventilator market is highly competitive

As explained above, “killer acquisitions” are less likely to occur in competitive markets. Yet the mechanical ventilator industry is extremely competitive. 

A number of reports conclude that there is significant competition in the industry. One source cites at least seven large producers. Another report cites eleven large players. And, in the words of another report:

Medical ventilators market competition is intense. 

The conclusion that the mechanical ventilator industry is highly competitive is further supported by the fact that the five largest producers combined reportedly hold only 50% of the market. In other words, available evidence suggests that none of these firms has anything close to a monopoly position. 

This intense competition, along with the small market shares of the merging firms, likely explains why the FTC declined to open an in-depth investigation into Covidien’s acquisition of Newport.

Similarly, following preliminary investigations, neither the FTC nor the European Commission saw the need for an in-depth look at the ventilator market when they reviewed Medtronic’s subsequent acquisition of Covidien (which closed in 2015). Although Medtronic did not produce any mechanical ventilators before the acquisition, authorities (particularly the European Commission) could nevertheless have analyzed that market if Covidien’s presumptive market share was particularly high. The fact that they declined to do so tends to suggest that the ventilator market was relatively unconcentrated.

2. The value of the merger was too small

A second strong reason to believe that Covidien’s purchase of Newport wasn’t a killer acquisition is the deal’s relatively modest value: $103 million.

Indeed, if it was clear that Newport was about to revolutionize the ventilator market, then Covidien would likely have had to pay significantly more than $103 million to acquire it. 

As noted above, the crux of the “killer acquisition” theory is that incumbents can induce their rivals to sell by offering significantly more than the present value of the rivals’ expected standalone profits. Because an incumbent undertaking a “killer” takeover expects to earn monopoly profits as a result of the transaction, it can offer a substantial premium and still profit from its investment. It is this basic asymmetry that drives the theory.

Indeed, as a recent article by Kevin Bryan and Erik Hovenkamp notes, an acquisition value out of line with current revenues may be an indicator of the significance of a pending acquisition in cases where enforcers may not actually know the value of the target’s underlying technology: 

[Where] a court may lack the expertise to [assess the commercial significance of acquired technology]…, the transaction value… may provide a reasonable proxy. Intuitively, if the startup is a relatively small company with relatively few sales to its name, then a very high acquisition price may reasonably suggest that the startup technology has significant promise.

The strategy only works, however, if the target firm’s shareholders believe that the share price properly reflects only “normal” expected profits, and not that the target is poised to revolutionize its market with a uniquely low-cost or high-quality product. Acquisition prices that are low relative to the size of the market, therefore, tend to reflect low (or normal) expected profits, and a low perceived likelihood of radical innovations occurring.

We can apply this reasoning to Covidien’s acquisition of Newport: 

  • Precise and publicly available figures concerning the mechanical ventilator market are hard to come by. Nevertheless, one estimate finds that the global ventilator market was worth $2.715 billion in 2012. Another report suggests that the global market was worth $4.30 billion in 2018; still another that it was worth $4.58 billion in 2019.
  • As noted above, Covidien reported to the SEC that it paid $103 million to purchase Newport (a firm that produced only ventilators and apparently had no plans to branch out). 
  • For context, at the time of the acquisition Covidien had annual sales of $11.8 billion overall, and $743 million in sales of its existing “Airways and Ventilation Products.”

If the ventilator market was indeed worth billions of dollars per year, then the comparatively small $103 million paid by Covidien — small even relative to Covidien’s own share of the market — suggests that, at the time of the acquisition, it was unlikely that Newport was poised to revolutionize the market for mechanical ventilators (for instance, by successfully bringing its Aura ventilator to market). 
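For context, a quick back-of-the-envelope calculation (ours, using only the figures cited in the bullets above) puts the purchase price next to the market Newport was supposedly poised to upend:

```python
# Back-of-the-envelope context for the deal size, using only figures cited above:
# one 2012 global market estimate, Covidien's reported sales, and the purchase price.
purchase_price = 103e6          # reported price Covidien paid for Newport (2012)
global_market_2012 = 2.715e9    # one estimate of the global ventilator market in 2012
covidien_vent_sales = 743e6     # Covidien's "Airways and Ventilation Products" sales
covidien_total_sales = 11.8e9   # Covidien's overall annual sales

print(f"Price / global ventilator market:     {purchase_price / global_market_2012:.1%}")   # ~3.8%
print(f"Price / Covidien's ventilation sales: {purchase_price / covidien_vent_sales:.1%}")   # ~13.9%
print(f"Price / Covidien's total sales:       {purchase_price / covidien_total_sales:.2%}")  # ~0.87%
```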

The New York Times article claimed that Newport’s ventilators would be sold (at least to the US government) for $3,000 — a substantial discount from the reportedly then-going rate of $10,000. If selling ventilators at this price seemed credible at the time, then Covidien — as well as Newport’s shareholders — knew that Newport was about to achieve tremendous cost savings, enabling it to offer ventilators not only to the US government, but to purchasers around the world, at an irresistibly attractive — and profitable — price. As the Times reported:

Ventilators at the time typically went for about $10,000 each, and getting the price down to $3,000 would be tough. But Newport’s executives bet they would be able to make up for any losses by selling the ventilators around the world.

“It would be very prestigious to be recognized as a supplier to the federal government,” said Richard Crawford, who was Newport’s head of research and development at the time. “We thought the international market would be strong, and there is where Newport would have a good profit on the product.”

If achievable, Newport thus stood to earn a substantial share of the profits in a multi-billion dollar industry. 

Of course, it is necessary to apply a probability to these numbers: Newport’s ventilator was not yet on the market, and had not yet received FDA approval. Nevertheless, if the Times’ numbers seemed credible at the time, then Covidien would surely have had to offer significantly more than $103 million in order to induce Newport’s shareholders to part with their shares.
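One way to see the force of that probability adjustment is a stylized expected-value calculation. The dollar figures below (other than the $103 million price) are hypothetical placeholders of our own, not numbers from the article or from the parties’ filings; the point is simply that a purchase price only modestly above the value of Newport’s existing business implies a very low market-implied probability that the Aura bet would pay off:

```python
# Hypothetical illustration: back an implied probability of success out of the
# purchase price, treating the price as a weighted average of Newport's value
# with and without a successful Aura.
price = 103e6          # reported purchase price
v_standalone = 80e6    # HYPOTHETICAL: value of Newport's existing ventilator business
v_success = 1.0e9      # HYPOTHETICAL: value if the Aura captured a large share of a
                       # multi-billion-dollar market at the prices the Times reports

# price = p * v_success + (1 - p) * v_standalone, solved for p
p = (price - v_standalone) / (v_success - v_standalone)
print(f"Implied probability that the Aura pays off: {p:.1%}")  # ~2.5%
```

Change the hypothetical inputs as you like; so long as a successful Aura would have been worth many times the purchase price, the implied probability of success stays small.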

Given the low valuation, however, as well as the fact that Newport produced other ventilators — and continues to do so to this day — there is no escaping the fact that everyone involved seemed to view Newport’s Aura ventilator as nothing more than a moonshot with, at best, a low likelihood of success. 

Crucially, this same reasoning explains why it shouldn’t surprise anyone that the project was ultimately discontinued; recourse to a “killer acquisition” theory is hardly necessary.

3. Lessons from Covidien’s ventilator product decisions  

The killer acquisition claims are further weakened by at least four other important pieces of information: 

  1. Covidien initially continued to develop Newport’s Aura ventilator, and continued to develop and sell Newport’s other ventilators.
  2. There was little overlap between Covidien and Newport’s ventilators — or, at the very least, they were highly differentiated.
  3. Covidien appears to have discontinued production of its own portable ventilator in 2014.
  4. The Newport purchase was part of a billion-dollar series of acquisitions seemingly aimed at expanding Covidien’s in-hospital (i.e., non-portable) device portfolio.

Covidien continued to develop and sell Newport’s ventilators

For a start, while the Aura line was indeed discontinued by Covidien, the timeline is important. The acquisition of Newport by Covidien was announced in March 2012, approved by the FTC in April of the same year, and the deal was closed on May 1, 2012.

However, as the FDA’s 510(k) database makes clear, Newport submitted documents for FDA clearance of the Aura ventilator months after its acquisition by Covidien (June 29, 2012, to be precise). And the Aura received FDA 510(k) clearance on November 9, 2012 — many months after the merger.

It would have made little sense for Covidien to invest significant sums in order to obtain FDA clearance for a project that it planned to discontinue (the FDA routinely requires parties to actively cooperate with it, even after 510(k) applications are submitted). 

Moreover, if Covidien really did plan to discreetly kill off the Aura ventilator, bungling the FDA clearance procedure would have been the perfect cover under which to do so. Yet that is not what it did.

Covidien continued to develop and sell Newport’s other ventilators

Second, and just as importantly, Covidien (and subsequently Medtronic) continued to sell Newport’s other ventilators. The Newport e360 and HT70 are still sold today. Covidien also continued to improve these products: it appears to have introduced an improved version of the Newport HT70 Plus ventilator in 2013.

If eliminating its competitor’s superior ventilators was the only goal of the merger, then why didn’t Covidien also eliminate these two products from its lineup, rather than continue to improve and sell them? 

At least part of the answer, as will be seen below, is that there was almost no overlap between Covidien and Newport’s product lines.

There was little overlap between Covidien’s and Newport’s ventilators

Third — and perhaps the biggest flaw in the killer acquisition story — is that there appears to have been very little overlap between Covidien and Newport’s ventilators. 

This decreases the likelihood that the merger was a killer acquisition. When two products are highly differentiated (or not substitutes at all), sales of the first are less likely to cannibalize sales of the other. As Florian Ederer and his co-authors put it:

Importantly, without any product market overlap, the acquirer never has a strictly positive incentive to acquire the entrepreneur, neither to “Acquire to Kill” nor to “Acquire to Continue.” This is because without overlap, acquiring the project does not give the acquirer any gains resulting from reduced competition, and the two bargaining entities have exactly the same value for the project.

A quick search of the FDA’s 510(k) database reveals that Covidien has three approved lines of ventilators: the Puritan Bennett 980, 840, and 540 (apparently essentially the same as the PB560, the plans for which Medtronic recently made freely available in order to facilitate production during the current crisis). The same database shows that these ventilators differ markedly from Newport’s ventilators (particularly the Aura).

In particular, Covidien manufactured primarily traditional, invasive ICU ventilators (except for the PB540, which is potentially a substitute for the Newport HT70), while Newport made much more portable ventilators, suitable for home use (notably the Aura, HT50 and HT70 lines). 

Under normal circumstances, critical care and portable ventilators are not substitutes. As the WHO website explains, portable ventilators are:

[D]esigned to provide support to patients who do not require complex critical care ventilators.

A quick glance at Medtronic’s website neatly illustrates the stark differences between these two types of devices.

This is not to say that these devices do not have similar functionalities, or that they cannot become substitutes in the midst of a coronavirus pandemic. However, in normal times (as was the case when Covidien acquired Newport), hospitals likely did not view these devices as substitutes.

The conclusion that Covidien and Newport’s ventilators were not substitutes finds further support in documents and statements released at the time of the merger. For instance, Covidien’s CEO explained that:

This acquisition is consistent with Covidien’s strategy to expand into adjacencies and invest in product categories where it can develop a global competitive advantage.

And that:

Newport’s products and technology complement our current portfolio of respiratory solutions and will broaden our ventilation platform for patients around the world, particularly in emerging markets.

In short, the fact that almost all of Covidien and Newport’s products were not substitutes further undermines the killer acquisition story. It also tends to vindicate the FTC’s decision to rapidly terminate its investigation of the merger.

Covidien appears to have discontinued production of its own portable ventilator in 2014

Perhaps most tellingly: It appears that Covidien discontinued production of its own competing, portable ventilator, the Puritan Bennett 560, in 2014.

The product is listed in the company’s 2011, 2012 and 2013 annual reports:

Airway and Ventilation Products — airway, ventilator, breathing systems and inhalation therapy products. Key products include: the Puritan Bennett™ 840 line of ventilators; the Puritan Bennett™ 520 and 560 portable ventilator….

(The PB540 was launched in 2009; the updated PB560 in 2010. The PB520 was the EU version of the device, launched in 2011).

But in 2014, the PB560 was no longer listed among the company’s ventilator products:  

Airway & Ventilation, which primarily includes sales of airway, ventilator and inhalation therapy products and breathing systems.

Key airway & ventilation products include: the Puritan Bennett™ 840 and 980 ventilators, the Newport™ e360 and HT70 ventilators….

Nor — despite its March 31 and April 1 “open sourcing” of the specifications and software necessary to enable others to produce the PB560 — did Medtronic appear to have restarted production, and the company did not mention the device in its March 18 press release announcing its own, stepped-up ventilator production plans.

Surely if Covidien had intended to capture the portable ventilator market by killing off its competition it would have continued to actually sell its own, competing device. The fact that the only portable ventilators produced by Covidien by 2014 were those it acquired in the Newport deal strongly suggests that its objective in that deal was the acquisition and deployment of Newport’s viable and profitable technologies — not the abandonment of them. This, in turn, suggests that the Aura was not a viable and profitable technology.

(Admittedly we are unable to determine conclusively that either Covidien or Medtronic stopped producing the PB520/540/560 series of ventilators. But our research seems to indicate strongly that this is indeed the case).

Putting the Newport deal in context

Finally, although not dispositive, it seems important to put the Newport purchase into context. In the same year as it purchased Newport, Covidien paid more than a billion dollars to acquire five other companies, as well — all of them primarily producing in-hospital medical devices. 

That 2012 spending spree came on the heels of a series of previous medical device company acquisitions, apparently totaling some four billion dollars. Although not exclusively so, the acquisitions undertaken by Covidien seem to have been primarily targeted at operating-room and in-hospital monitoring and treatment — making the putative focus on cornering the portable (home and emergency) ventilator market an extremely unlikely one. 

By the time Covidien was purchased by Medtronic, the deal easily cleared antitrust review because of the lack of overlap between the companies’ products, with Covidien focusing predominantly on in-hospital, “diagnostic, surgical, and critical care” devices and Medtronic on post-acute care.

Newport misjudged the costs associated with its Aura project; Covidien was left to pick up the pieces

So why was the Aura ventilator discontinued?

Although it is almost impossible to know what motivated Covidien’s executives, the Aura ventilator project clearly suffered from many problems. 

The Aura project was intended to meet the requirements of the US government’s BARDA program (under the auspices of the U.S. Department of Health and Human Services’ Biomedical Advanced Research and Development Authority). In short, the program sought to create a stockpile of next generation ventilators for emergency situations — including, notably, pandemics. The ventilator would thus have to be designed for events where

mass casualties may be expected, and when shortages of experienced health care providers with respiratory support training, and shortages of ventilators and accessory components may be expected.

The Aura ventilator would thus sit somewhere between Newport’s two other ventilators: the e360, which could be used in pediatric care (for newborns smaller than 5kg) but was not intended for home care use (or for the extreme scenarios envisioned by the US government); and the more portable HT70, which could be used in home care environments, but not for newborns. 

Unfortunately, the Aura failed to achieve this goal. The FDA’s 510(k) clearance decision clearly states that the Aura was not intended for newborns:

The AURA family of ventilators is applicable for infant, pediatric and adult patients greater than or equal to 5 kg (11 lbs.).

A press release issued by Medtronic confirms that

the company was unable to secure FDA approval for use in neonatal populations — a contract requirement.

And the US Government RFP confirms that this was indeed an important requirement:

The device must be able to provide the same standard of performance as current FDA pre-market cleared portable ventilators and shall have the following additional characteristics or features: 

Flexibility to accommodate a wide patient population range from neonate to adult.

Newport also seems to have been unable to deliver the ventilator at the low price it had initially forecasted — a common problem for small companies and/or companies that undertake large R&D programs. It also struggled to complete the project within the agreed-upon deadlines. As the Medtronic press release explains:

Covidien learned that Newport’s work on the ventilator design for the Government had significant gaps between what it had promised the Government and what it could deliver, both in terms of being able to achieve the cost of production specified in the contract and product features and performance. Covidien management questioned whether Newport’s ability to complete the project as agreed to in the contract was realistic.

As Jason Crawford, an engineer and tech industry commentator, put it:

Projects fail all the time. “Supplier risk” should be a standard checkbox on anyone’s contingency planning efforts. This is even more so when you deliberately push the price down to 30% of the market rate. Newport did not even necessarily expect to be profitable on the contract.

The above is mostly Covidien’s “side” of the story, of course. But other pieces of evidence lend some credibility to these claims:

  • Newport agreed to deliver its Aura ventilator at a per-unit cost of less than $3,000. But, even today, this seems extremely ambitious. For instance, the WHO has estimated that portable ventilators cost between $3,300 and $13,500. If Newport could profitably sell the Aura at such a low price, then there was little reason to discontinue it (readers will recall that development of the ventilator was mostly complete when Covidien put a halt to the project).
  • Covidien/Newport is not the only firm to have struggled to offer suitable ventilators at such a low price. Philips (which took Newport’s place after the government contract fell through) also failed to achieve this low price. Rather than the $2,000 price sought in the initial RFP, Philips ultimately agreed to produce the ventilators for $3,280. But it has not yet been able to produce a single ventilator under the government contract at that price.
  • Covidien has repeatedly been forced to recall some of its other ventilators (here, here and here) — including the Newport HT70. And rival manufacturers have also faced these types of issues (for example, here and here). 

Accordingly, Covidien may well have preferred to cut its losses on the already problem-prone Aura project, before similar issues rendered it even more costly. 

In short, while it is impossible to prove that these development issues caused Covidien to pull the plug on the Aura project, it is certainly plausible that they did. This further supports the hypothesis that Covidien’s acquisition of Newport was not a killer acquisition. 

Ending the Aura project might have been an efficient outcome

As suggested above, moreover, it is entirely possible that Covidien was better able to recognize the poor prospects of Newport’s Aura project, and better organized to make the requisite decision to abandon it.

A small company like Newport faces greater difficulties abandoning entrepreneurial projects because doing so can impair a privately held firm’s ability to raise funds for subsequent projects.

Moreover, the relatively large share of revenue and reputation that Newport — worth $103 million in 2012, versus Covidien’s $11.8 billion in annual sales — would have realized from fulfilling a substantial US government project could well have induced it to overestimate the project’s viability and to undertake excessive risk in the (vain) hope of bringing the project to fruition.  

While there is a tendency among antitrust scholars, enforcers, and practitioners to look for (and find…) antitrust-related rationales for mergers and other corporate conduct, it remains the case that most corporate control transactions (such as mergers) are driven by the acquiring firm’s expectation that it can manage the target more efficiently. As Henry G. Manne put it in his seminal article, Mergers and the Market for Corporate Control (1965): 

Since, in a world of uncertainty, profitable transactions will be entered into more often by those whose information is relatively more reliable, it should not surprise us that mergers within the same industry have been a principal form of changing corporate control. Reliable information is often available to suppliers and customers as well. Thus many vertical mergers may be of the control takeover variety rather than of the “foreclosure of competitors” or scale-economies type.

Of course, the same information that renders an acquiring firm in the same line of business knowledgeable enough to operate a target more efficiently could also enable it to effect a “killer acquisition” strategy. But the important point is that a takeover by a firm with a competing product line, after which the purchased company’s product line is abandoned, is at least as consistent with a “market for corporate control” story as with a “killer acquisition” story.

Indeed, as Florian Ederer himself noted with respect to the Covidien/Newport merger:

“Killer acquisitions” can have a nefarious image, but killing off a rival’s product was probably not the main purpose of the transaction, Ederer said. He raised the possibility that Covidien decided to kill Newport’s innovation upon realising that the development of the devices would be expensive and unlikely to result in profits.

Concluding remarks

In conclusion, Covidien’s acquisition of Newport offers a cautionary tale about reckless journalism, “blackboard economics,” and government failure.

Reckless journalism because the New York Times clearly failed to do the appropriate due diligence for its story. Its journalists notably missed (or deliberately failed to mention) a number of critical pieces of information — such as the hugely important fact that most of Covidien’s and Newport’s products did not overlap, or the fact that there were numerous competitors in the highly competitive mechanical ventilator industry. 

And yet, that did not stop the authors from publishing their extremely alarming story, effectively suggesting that a small medical device merger materially contributed to the loss of many American lives.

The story also falls prey to what Ronald Coase called “blackboard economics”:

What is studied is a system which lives in the minds of economists but not on earth. 

Numerous commentators rushed to fit the story to their preconceived narratives, failing to undertake even a rudimentary examination of the underlying market conditions before they voiced their recriminations. 

The only thing that Covidien and Newport’s merger ostensibly had in common with the killer acquisition theory was the fact that a large firm purchased a small rival, and that one of the small firm’s products was discontinued. But this does not even begin to meet the stringent conditions that must be fulfilled for the theory to hold water. Unfortunately, critics appear to have completely ignored all contradicting evidence. 

Finally, what the New York Times piece does offer is a chilling tale of government failure.

The inception of the US government’s BARDA program dates back to 2008 — twelve years before the COVID-19 pandemic hit the US. 

The collapse of the Aura project is no excuse for the fact that, more than six years after the Newport contract fell through, the US government still has not obtained the necessary ventilators. Questions should also be raised about the government’s decision to effectively put all of its eggs in the same basket — twice. If anything, it is thus government failure that was the real culprit. 

And yet the New York Times piece and the critics shouting “killer acquisition!” effectively give the US government’s abject failure here a free pass — all in the service of pursuing their preferred “killer story.”

Following is the second in a series of posts on my forthcoming book, How to Regulate: A Guide for Policy Makers (Cambridge Univ. Press 2017).  The initial post is here.

As I mentioned in my first post, How to Regulate examines the market failures (and other private ordering defects) that have traditionally been invoked as grounds for government regulation.  For each such defect, the book details the adverse “symptoms” produced, the underlying “disease” (i.e., why those symptoms emerge), the range of available “remedies,” and the “side effects” each remedy tends to generate.  The first private ordering defect the book addresses is the externality.

I’ll never forget my introduction to the concept of externalities.  P.J. Hill, my much-beloved economics professor at Wheaton College, sauntered into the classroom eating a giant, juicy apple.  As he lectured, he meandered through the rows of seats, continuing to chomp on that enormous piece of fruit.  Every time he took a bite, juice droplets and bits of apple fell onto students’ desks.  Speaking with his mouth full, he propelled fruit flesh onto students’ class notes.  It was disgusting.

It was also quite effective.  Professor Hill was making the point (vividly!) that some activities impose significant effects on bystanders.  We call those effects “externalities,” he explained, because they are experienced by people who are outside the process that creates them.  When the spillover effects are adverse—costs—we call them “negative” externalities.  “Positive” externalities are spillovers of benefits.  Air pollution is a classic example of a negative externality.  Landscaping one’s yard, an activity that benefits one’s neighbors, generates a positive externality.

An obvious adverse effect (“symptom”) of externalities is unfairness.  It’s not fair for a factory owner to capture the benefits of its production while foisting some of the cost onto others.  Nor is it fair for a homeowner’s neighbors to enjoy her spectacular flower beds without contributing to their creation or maintenance.

A graver symptom of externalities is “allocative inefficiency,” a failure to channel productive resources toward the uses that will wring the greatest possible value from them.  When an activity involves negative externalities, people tend to do too much of it—i.e., to devote an inefficiently high level of productive resources to the activity.  That’s because a person deciding how much of the conduct at issue to engage in accounts for all of his conduct’s benefits, which ultimately inure to him, but only a portion of his conduct’s costs, some of which are borne by others.  Conversely, when an activity involves positive externalities, people tend to do too little of it.  In that case, they must bear all of the cost of their conduct but can capture only a portion of the benefit it produces.
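A bare-bones numerical sketch (ours, not from the book) may help make the overproduction point concrete. Assume the actor’s marginal benefit from the activity declines linearly, his own marginal cost is constant, and each unit also imposes a constant cost on bystanders; the actor expands the activity until his marginal benefit equals his own marginal cost, while the efficient stopping point accounts for the spillover cost as well:

```python
# Minimal sketch (hypothetical numbers): why a negative externality leads to
# "too much" of an activity. Marginal private benefit MPB(q) = 100 - 2q, constant
# private marginal cost, constant per-unit spillover cost borne by bystanders.
def mpb(q):
    return 100 - 2 * q         # benefit to the actor of the q-th unit of activity

private_mc = 20                 # per-unit cost the actor bears himself
external_mc = 30                # per-unit cost imposed on bystanders

# The actor expands the activity until MPB(q) = private_mc.
q_private = (100 - private_mc) / 2                  # 40 units
# The efficient level equates MPB(q) with the full social marginal cost.
q_social = (100 - (private_mc + external_mc)) / 2   # 25 units

print(f"Privately chosen level:   {q_private} (MPB there = {mpb(q_private)})")  # 40.0, MPB = 20.0
print(f"Socially efficient level: {q_social} (MPB there = {mpb(q_social)})")    # 25.0, MPB = 50.0
# Every unit between 25 and 40 costs society 50 but is worth less than 50 to the
# actor; it gets produced anyway because 30 of that cost falls on bystanders.
```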

Because most government interventions addressing externalities have been concerned with negative externalities (and because How to Regulate includes a separate chapter on public goods, which entail positive externalities), the book’s externalities chapter focuses on potential remedies for cost spillovers.  There are three main options, which are discussed below the fold.