
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Geoffrey A. Manne (President, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics) and Dirk Auer (Senior Fellow of Law & Economics, ICLE)]

Back in 2012, Covidien, a large health care products company and medical device manufacturer, purchased Newport Medical Instruments, a small ventilator developer and manufacturer. (Covidien itself was subsequently purchased by Medtronic in 2015).

Eight years later, in the midst of the coronavirus pandemic, the New York Times has just published an article revisiting the Covidien/Newport transaction, and questioning whether it might have contributed to the current shortage of ventilators.

The article speculates that Covidien’s purchase of Newport, and the subsequent discontinuation of Newport’s “Aura” ventilator — which was then being developed by Newport under a government contract — delayed US government efforts to procure mechanical ventilators until the second half of 2020 — too late to treat the first wave of COVID-19 patients:

And then things suddenly veered off course. A multibillion-dollar maker of medical devices bought the small California company that had been hired to design the new machines. The project ultimately produced zero ventilators.

That failure delayed the development of an affordable ventilator by at least half a decade, depriving hospitals, states and the federal government of the ability to stock up.

* * *

Today, with the coronavirus ravaging America’s health care system, the nation’s emergency-response stockpile is still waiting on its first shipment.

The article has generated considerable interest not so much for what it suggests about government procurement policies or for its relevance to the ventilator shortages associated with the current pandemic, but rather for its purported relevance to ongoing antitrust debates and the arguments put forward by “antitrust populists” and others that merger enforcement in the US is dramatically insufficient. 

Only a single sentence in the article itself points to a possible antitrust story — and it does nothing more than report unsubstantiated speculation from unnamed “government officials” and rival companies: 

Government officials and executives at rival ventilator companies said they suspected that Covidien had acquired Newport to prevent it from building a cheaper product that would undermine Covidien’s profits from its existing ventilator business.

Nevertheless, and right on cue, various antitrust scholars quickly framed the deal as a so-called “killer acquisition” (see also here and here).

Unsurprisingly, politicians were also quick to jump on the bandwagon, among them David Cicilline, the powerful chairman of the House Antitrust Subcommittee.

And FTC Commissioner Rebecca Kelly Slaughter quickly called for a retrospective review of the deal:

The public reporting on this acquisition raises important questions about the review of this deal. We should absolutely be looking back to figure out what happened.

These “hot takes” raise a crucial issue. The New York Times story opened the door to a welter of hasty conclusions offered to support the ongoing narrative that antitrust enforcement has failed us — in this case quite literally at the cost of human lives. But are any of these claims actually supportable?

Unfortunately, the competitive realities of the mechanical ventilator industry, as well as a more clear-eyed view of what was likely going on with the failed government contract at the heart of the story, simply do not support the “killer acquisition” story.

What is a “killer acquisition”…?

Let’s take a step back. Because monopoly profits are, by definition, higher than joint duopoly profits (all else equal), economists have long argued that incumbents may find it profitable to acquire smaller rivals in order to reduce competition and increase their profits. More specifically, incumbents may be tempted to acquire would-be entrants in order to prevent them from introducing innovations that might hurt the incumbent’s profits.
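The claim that monopoly profits exceed joint duopoly profits can be checked with a minimal textbook Cournot sketch. All of the numbers below are illustrative assumptions, not figures from this post; the point is only the qualitative comparison:

```python
# Minimal Cournot illustration: a monopolist earns more than two symmetric
# duopolists combined. Linear inverse demand P = a - Q, constant marginal
# cost c; the parameter values are arbitrary assumptions.

def monopoly_profit(a, c):
    q = (a - c) / 2           # monopolist's profit-maximizing quantity
    return (a - q - c) * q    # equals (a - c)^2 / 4

def duopoly_joint_profit(a, c):
    q = (a - c) / 3           # each Cournot duopolist's equilibrium quantity
    price = a - 2 * q
    return 2 * (price - c) * q  # equals 2 * (a - c)^2 / 9

a, c = 100, 10
pm = monopoly_profit(a, c)        # 2025.0
pd = duopoly_joint_profit(a, c)   # 1800.0
print(pm, pd, pm > pd)
```

The gap between the two figures (225 in this toy example) is the extra profit an incumbent could, in principle, capture by buying out its rival, which is what makes such acquisitions potentially tempting in the first place.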

For this theory to have any purchase, however, a number of conditions must hold. Most importantly, as Colleen Cunningham, Florian Ederer, and Song Ma put it in an influential paper:

“killer acquisitions” can only occur when the entrepreneur’s project overlaps with the acquirer’s existing product…. [W]ithout any product market overlap, the acquirer never has a strictly positive incentive to acquire the entrepreneur… because, without overlap, acquiring the project does not give the acquirer any gains resulting from reduced competition, and the two bargaining entities have exactly the same value for the project.

Moreover, the authors add that:

Successfully developing a new product draws consumer demand and profits away equally from all existing products. An acquiring incumbent is hurt more by such cannibalization when he is a monopolist (i.e., the new product draws demand away only from his own existing product) than when he already faces many other existing competitors (i.e., cannibalization losses are spread over many firms). As a result, as the number of existing competitors increases, the replacement effect decreases and the acquirer’s development decisions become more similar to those of the entrepreneur.

Finally, the “killer acquisition” terminology is appropriate only when the incumbent chooses to discontinue its rival’s R&D project:

If incumbents face significant existing competition, acquired projects are not significantly more frequently discontinued than independent projects. Thus, more competition deters incumbents from acquiring and terminating the projects of potential future competitors, which leads to more competition in the future.

…And what isn’t a killer acquisition?

What is left out of this account of killer acquisitions is the age-old possibility that an acquirer purchases a rival precisely because it has superior know-how or a superior governance structure that enables it to realize greater returns and productivity than its target could. In the case of a so-called killer acquisition, this means shutting down a negative-ROI project and redeploying resources to other projects or other uses, including those that may have no direct relation to the discontinued project. 

Such “synergistic” mergers are also — like allegedly “killer” mergers — likely to involve acquirers and targets in the same industry and with technological overlap between their R&D projects; it is in precisely these situations that the acquirer is likely to have better knowledge than the target’s shareholders that the target is undervalued because of poor governance rather than exogenous, environmental factors.  

In other words, whether an acquisition is harmful or not — as the epithet “killer” implies it is — depends on whether it is about reducing competition from a rival, on the one hand, or about increasing the acquirer’s competitiveness by putting resources to more productive use, on the other.

As argued below, it is highly unlikely that Covidien’s acquisition of Newport could be classified as a “killer acquisition.” There is thus nothing to suggest that the merger materially impaired competition in the mechanical ventilator market, or that it measurably affected the US’s efforts to fight COVID-19.

The market realities of the ventilator market and its implications for the “killer acquisition” story

1. The mechanical ventilator market is highly competitive

As explained above, “killer acquisitions” are less likely to occur in competitive markets. Yet the mechanical ventilator industry is extremely competitive. 

A number of reports conclude that there is significant competition in the industry. One source cites at least seven large producers. Another report cites eleven large players. And, in the words of another report:

Medical ventilators market competition is intense. 

The conclusion that the mechanical ventilator industry is highly competitive is further supported by the fact that the five largest producers combined reportedly hold only 50% of the market. In other words, available evidence suggests that none of these firms has anything close to a monopoly position. 
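For a rough sense of what such shares imply, one can compute a Herfindahl-Hirschman Index (HHI) under an assumed split. The only figure from this post is the roughly 50% combined share of the top five producers; the way the remaining 50% is divided among smaller firms below is purely an assumption:

```python
# Back-of-envelope HHI sketch for the ventilator market. Only the ~50%
# combined share of the five largest producers comes from the post; the
# equal splits assumed here are illustrative.

top_five = [10.0] * 5   # five leaders at ~10% each (assumed equal split of 50%)
fringe = [5.0] * 10     # remaining 50% spread over ten smaller firms (assumption)

hhi = sum(share ** 2 for share in top_five + fringe)
print(hhi)  # 750.0, well below the 1,500 threshold that the 2010 US
            # Horizontal Merger Guidelines treat as "unconcentrated"
```

Even under less generous assumptions about the fringe, market shares of this shape place the industry comfortably in unconcentrated territory.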

This intense competition, along with the small market shares of the merging firms, likely explains why the FTC declined to open an in-depth investigation into Covidien’s acquisition of Newport.

Similarly, following preliminary investigations, neither the FTC nor the European Commission saw the need for an in-depth look at the ventilator market when they reviewed Medtronic’s subsequent acquisition of Covidien (which closed in 2015). Although Medtronic did not produce any mechanical ventilators before the acquisition, the authorities (particularly the European Commission) could nevertheless have analyzed that market had Covidien’s presumptive market share been particularly high. The fact that they declined to do so suggests that the ventilator market was relatively unconcentrated.

2. The value of the merger was too small

A second strong reason to believe that Covidien’s purchase of Newport wasn’t a killer acquisition is the deal’s value: $103 million.

Indeed, if it were clear that Newport was about to revolutionize the ventilator market, then Covidien would likely have had to pay significantly more than $103 million to acquire it. 

As noted above, the crux of the “killer acquisition” theory is that incumbents can induce welfare-reducing acquisitions by offering to acquire their rivals for significantly more than the present value of their rivals’ expected profits. Because an incumbent undertaking a “killer” takeover expects to earn monopoly profits as a result of the transaction, it can offer a substantial premium and still profit from its investment. It is this basic asymmetry that drives the theory.
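This asymmetry can be made concrete with stylized numbers (all of them assumptions, none drawn from the post). If each duopolist earns pi_d and a merged monopolist earns pi_m, with pi_m greater than 2 * pi_d, the incumbent's gain from eliminating the rival exceeds the rival's standalone value, leaving room for a substantial premium:

```python
# Stylized "killer acquisition" bargaining range. Profits are treated as
# present values for simplicity, and the figures are illustrative assumptions.

pi_d = 900.0    # each duopolist's standalone profit (assumption)
pi_m = 2025.0   # monopoly profit once the rival is removed (assumption)

target_value = pi_d            # the seller's reservation price
buyer_max_bid = pi_m - pi_d    # incumbent's gain from eliminating the rival

premium = buyer_max_bid - target_value
print(buyer_max_bid, premium)  # 1125.0 225.0
```

Any price between 900 and 1125 makes both sides better off, which is why a genuine killer acquisition typically involves a visible premium over the target's standalone value rather than a bargain price.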

Indeed, as a recent article by Kevin Bryan and Erik Hovenkamp notes, an acquisition price out of line with the target’s current revenues may signal the significance of a pending acquisition in cases where enforcers do not actually know the value of the target’s underlying technology: 

[Where] a court may lack the expertise to [assess the commercial significance of acquired technology]…, the transaction value… may provide a reasonable proxy. Intuitively, if the startup is a relatively small company with relatively few sales to its name, then a very high acquisition price may reasonably suggest that the startup technology has significant promise.

The strategy only works, however, if the target firm’s shareholders agree that the share price properly reflects only “normal” expected profits, and not that the target is poised to revolutionize its market with a uniquely low-cost or high-quality product. Low acquisition prices relative to market size therefore tend to reflect low (or merely normal) expected profits and a low perceived likelihood of radical innovation.

We can apply this reasoning to Covidien’s acquisition of Newport: 

  • Precise and publicly available figures concerning the mechanical ventilator market are hard to come by. Nevertheless, one estimate finds that the global ventilator market was worth $2.715 billion in 2012. Another report suggests that the global market was worth $4.30 billion in 2018; still another that it was worth $4.58 billion in 2019.
  • As noted above, Covidien reported to the SEC that it paid $103 million to purchase Newport (a firm that produced only ventilators and apparently had no plans to branch out). 
  • For context, at the time of the acquisition Covidien had annual sales of $11.8 billion overall, and $743 million in sales of its existing “Airways and Ventilation Products.”

If the ventilator market was indeed worth billions of dollars per year, then the comparatively small $103 million paid by Covidien — small even relative to Covidien’s own share of the market — suggests that, at the time of the acquisition, it was unlikely that Newport was poised to revolutionize the market for mechanical ventilators (for instance, by successfully bringing its Aura ventilator to market). 

The New York Times article claimed that Newport’s ventilators would be sold (at least to the US government) for $3,000 — a substantial discount from the reportedly then-going rate of $10,000. If selling ventilators at this price seemed credible at the time, then Covidien, as well as Newport’s shareholders, would have known that Newport was about to achieve tremendous cost savings, enabling it to offer ventilators not only to the US government, but to purchasers around the world, at an irresistibly attractive — and profitable — price.

Ventilators at the time typically went for about $10,000 each, and getting the price down to $3,000 would be tough. But Newport’s executives bet they would be able to make up for any losses by selling the ventilators around the world.

“It would be very prestigious to be recognized as a supplier to the federal government,” said Richard Crawford, who was Newport’s head of research and development at the time. “We thought the international market would be strong, and there is where Newport would have a good profit on the product.”

If achievable, Newport thus stood to earn a substantial share of the profits in a multi-billion dollar industry. 

Of course, it is necessary to apply a probability to these numbers: Newport’s ventilator was not yet on the market, and had not yet received FDA approval. Nevertheless, if the Times’ numbers seemed credible at the time, then Covidien would surely have had to offer significantly more than $103 million to induce Newport’s shareholders to part with their shares.
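The probability-weighting step can be sketched as a back-of-envelope calculation. Only the $103 million price and the roughly $2.715 billion 2012 market estimate come from this post; the share, margin, and valuation multiple below are loudly flagged assumptions chosen purely for illustration:

```python
# What success probability does a $103M price imply? Only the acquisition
# price and the 2012 market size come from the post; the share, margin, and
# multiple are hypothetical assumptions for illustration.

acquisition_price = 103e6   # Covidien's reported purchase price (from the post)
market_size = 2.715e9       # estimated 2012 global ventilator market (from the post)

assumed_share = 0.25        # share a "revolutionary" Aura might win (assumption)
assumed_margin = 0.20       # operating margin on those sales (assumption)
assumed_multiple = 10       # crude profits-to-firm-value multiple (assumption)

success_value = market_size * assumed_share * assumed_margin * assumed_multiple
implied_probability = acquisition_price / success_value
print(f"{success_value:.3e} {implied_probability:.1%}")
```

Under these (generous) assumptions, a revolutionary Aura would have made Newport worth well over a billion dollars, so a $103 million price implies that the parties assigned the project something on the order of a one-in-thirteen chance of success, consistent with the "moonshot" characterization below.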

Given the low valuation, however, as well as the fact that Newport produced other ventilators (which continue to be sold to this day), there is no escaping the conclusion that everyone involved seemed to view Newport’s Aura ventilator as nothing more than a moonshot with, at best, a low likelihood of success. 

Crucially, this same reasoning explains why it shouldn’t surprise anyone that the project was ultimately discontinued; recourse to a “killer acquisition” theory is hardly necessary.

3. Lessons from Covidien’s ventilator product decisions  

The killer acquisition claims are further weakened by at least four other important pieces of information: 

  1. Covidien initially continued to develop Newport’s Aura ventilator, and continued to develop and sell Newport’s other ventilators.
  2. There was little overlap between Covidien’s and Newport’s ventilators or, at the very least, the products were highly differentiated.
  3. Covidien appears to have discontinued production of its own portable ventilator in 2014.
  4. The Newport purchase was part of a billion-dollar series of acquisitions seemingly aimed at expanding Covidien’s in-hospital (i.e., non-portable) device portfolio.

Covidien continued to develop and sell Newport’s ventilators

For a start, while the Aura line was indeed discontinued by Covidien, the timeline is important. The acquisition of Newport by Covidien was announced in March 2012, approved by the FTC in April of the same year, and the deal was closed on May 1, 2012.

However, as the FDA’s 510(k) database makes clear, Newport submitted documents for FDA clearance of the Aura ventilator months after its acquisition by Covidien (June 29, 2012, to be precise). And the Aura received FDA 510(k) clearance on November 9, 2012 — many months after the merger.

It would have made little sense for Covidien to invest significant sums in order to obtain FDA clearance for a project that it planned to discontinue (the FDA routinely requires parties to actively cooperate with it, even after 510(k) applications are submitted). 

Moreover, if Covidien really did plan to discreetly kill off the Aura ventilator, bungling the FDA clearance procedure would have been the perfect cover under which to do so. Yet that is not what it did.

Covidien continued to develop and sell Newport’s other ventilators

Second, and just as importantly, Covidien (and subsequently Medtronic) continued to sell Newport’s other ventilators. The Newport e360 and HT70 are still sold today. Covidien also continued to improve these products: it appears to have introduced an improved version of the Newport HT70 Plus ventilator in 2013.

If eliminating its competitor’s superior ventilators was the only goal of the merger, then why didn’t Covidien also eliminate these two products from its lineup, rather than continue to improve and sell them? 

At least part of the answer, as will be seen below, is that there was almost no overlap between Covidien and Newport’s product lines.

There was little overlap between Covidien’s and Newport’s ventilators

Third — and perhaps the biggest flaw in the killer acquisition story — is that there appears to have been very little overlap between Covidien and Newport’s ventilators. 

This decreases the likelihood that the merger was a killer acquisition. When two products are highly differentiated (or are not substitutes at all), sales of one are less likely to cannibalize sales of the other. As Florian Ederer and his co-authors put it:

Importantly, without any product market overlap, the acquirer never has a strictly positive incentive to acquire the entrepreneur, neither to “Acquire to Kill” nor to “Acquire to Continue.” This is because without overlap, acquiring the project does not give the acquirer any gains resulting from reduced competition, and the two bargaining entities have exactly the same value for the project.

A quick search of the FDA’s 510(k) database reveals that Covidien has three approved lines of ventilators: the Puritan Bennett 980, 840, and 540 (apparently essentially the same as the PB560, the plans for which Medtronic recently made freely available in order to facilitate production during the current crisis). The same database shows that these ventilators differ markedly from Newport’s ventilators (particularly the Aura).

In particular, Covidien manufactured primarily traditional, invasive ICU ventilators (except for the PB540, which is potentially a substitute for the Newport HT70), while Newport made much more portable ventilators, suitable for home use (notably the Aura, HT50, and HT70 lines). 

Under normal circumstances, critical care and portable ventilators are not substitutes. As the WHO website explains, portable ventilators are:

[D]esigned to provide support to patients who do not require complex critical care ventilators.

A quick glance at Medtronic’s website neatly illustrates the stark differences between these two types of devices.

This is not to say that these devices do not have similar functionalities, or that they cannot become substitutes in the midst of a coronavirus pandemic. However, in normal times (as was the case when Covidien acquired Newport), hospitals likely did not view these devices as substitutes.

The conclusion that Covidien’s and Newport’s ventilators were not substitutes finds further support in documents and statements released at the time of the merger. For instance, Covidien’s CEO explained that:

This acquisition is consistent with Covidien’s strategy to expand into adjacencies and invest in product categories where it can develop a global competitive advantage.

And that:

Newport’s products and technology complement our current portfolio of respiratory solutions and will broaden our ventilation platform for patients around the world, particularly in emerging markets.

In short, the fact that almost all of Covidien and Newport’s products were not substitutes further undermines the killer acquisition story. It also tends to vindicate the FTC’s decision to rapidly terminate its investigation of the merger.

Covidien appears to have discontinued production of its own portable ventilator in 2014

Perhaps most tellingly: It appears that Covidien discontinued production of its own competing, portable ventilator, the Puritan Bennett 560, in 2014.

The product is listed in the company’s 2011, 2012, and 2013 annual reports:

Airway and Ventilation Products — airway, ventilator, breathing systems and inhalation therapy products. Key products include: the Puritan Bennett™ 840 line of ventilators; the Puritan Bennett™ 520 and 560 portable ventilator….

(The PB540 was launched in 2009; the updated PB560 in 2010. The PB520 was the EU version of the device, launched in 2011.)

But in 2014, the PB560 was no longer listed among the company’s ventilator products:  

Airway & Ventilation, which primarily includes sales of airway, ventilator and inhalation therapy products and breathing systems.

Key airway & ventilation products include: the Puritan Bennett™ 840 and 980 ventilators, the Newport™ e360 and HT70 ventilators….

Nor — despite its March 31 and April 1 “open sourcing” of the specifications and software necessary to enable others to produce the PB560 — did Medtronic appear to have restarted production, and the company did not mention the device in its March 18 press release announcing its own, stepped-up ventilator production plans.

Surely if Covidien had intended to capture the portable ventilator market by killing off its competition it would have continued to actually sell its own, competing device. The fact that the only portable ventilators produced by Covidien by 2014 were those it acquired in the Newport deal strongly suggests that its objective in that deal was the acquisition and deployment of Newport’s viable and profitable technologies — not the abandonment of them. This, in turn, suggests that the Aura was not a viable and profitable technology.

(Admittedly, we are unable to determine conclusively that either Covidien or Medtronic stopped producing the PB520/540/560 series of ventilators, but our research strongly suggests that this is indeed the case.)

Putting the Newport deal in context

Finally, although not dispositive, it seems important to put the Newport purchase into context. In the same year as it purchased Newport, Covidien paid more than a billion dollars to acquire five other companies, as well — all of them primarily producing in-hospital medical devices. 

That 2012 spending spree came on the heels of a series of previous medical device company acquisitions, apparently totaling some four billion dollars. Although not exclusively so, Covidien’s acquisitions seem to have been targeted primarily at operating-room and in-hospital monitoring and treatment, making the putative focus on cornering the portable (home and emergency) ventilator market an extremely unlikely one. 

By the time Covidien was purchased by Medtronic, the deal easily cleared antitrust review because of the lack of overlap between the two companies’ products: Covidien focused predominantly on in-hospital “diagnostic, surgical, and critical care” products, while Medtronic focused on post-acute care.

Newport misjudged the costs associated with its Aura project; Covidien was left to pick up the pieces

So why was the Aura ventilator discontinued?

Although it is almost impossible to know what motivated Covidien’s executives, the Aura ventilator project clearly suffered from many problems. 

The Aura project was intended to meet the requirements of the US government’s BARDA program (under the auspices of the U.S. Department of Health and Human Services’ Biomedical Advanced Research and Development Authority). In short, the program sought to create a stockpile of next generation ventilators for emergency situations — including, notably, pandemics. The ventilator would thus have to be designed for events where

mass casualties may be expected, and when shortages of experienced health care providers with respiratory support training, and shortages of ventilators and accessory components may be expected.

The Aura ventilator would thus sit somewhere between Newport’s two other ventilators: the e360, which could be used in pediatric care (for newborns smaller than 5 kg) but was not intended for home-care use (or the extreme scenarios envisioned by the US government); and the more portable HT70, which could be used in home-care environments, but not for newborns. 

Unfortunately, the Aura failed to achieve this goal. The FDA’s 510(k) clearance decision clearly states that the Aura was not intended for newborns:

The AURA family of ventilators is applicable for infant, pediatric and adult patients greater than or equal to 5 kg (11 lbs.).

A press release issued by Medtronic confirms that

the company was unable to secure FDA approval for use in neonatal populations — a contract requirement.

And the US Government RFP confirms that this was indeed an important requirement:

The device must be able to provide the same standard of performance as current FDA pre-market cleared portable ventilators and shall have the following additional characteristics or features: 

Flexibility to accommodate a wide patient population range from neonate to adult.

Newport also seems to have been unable to deliver the ventilator at the low price it had initially forecasted — a common problem for small companies and/or companies that undertake large R&D programs. It also struggled to complete the project within the agreed-upon deadlines. As the Medtronic press release explains:

Covidien learned that Newport’s work on the ventilator design for the Government had significant gaps between what it had promised the Government and what it could deliver, both in terms of being able to achieve the cost of production specified in the contract and product features and performance. Covidien management questioned whether Newport’s ability to complete the project as agreed to in the contract was realistic.

As Jason Crawford, an engineer and tech industry commentator, put it:

Projects fail all the time. “Supplier risk” should be a standard checkbox on anyone’s contingency planning efforts. This is even more so when you deliberately push the price down to 30% of the market rate. Newport did not even necessarily expect to be profitable on the contract.

The above is mostly Covidien’s “side” of the story, of course. But other pieces of evidence lend some credibility to these claims:

  • Newport agreed to deliver its Aura ventilator at a per-unit cost of less than $3,000. But, even today, this seems extremely ambitious. For instance, the WHO has estimated that portable ventilators cost between $3,300 and $13,500. If Newport could profitably sell the Aura at such a low price, then there was little reason to discontinue it (readers will recall that development of the ventilator was mostly complete when Covidien put a halt to the project).
  • Covidien/Newport is not the only firm to have struggled to offer suitable ventilators at such a low price. Philips (which took Newport’s place after the government contract fell through) also failed to achieve this low price. Rather than the $2,000 price sought in the initial RFP, Philips ultimately agreed to produce the ventilators for $3,280. But it has not yet been able to produce a single ventilator under the government contract at that price.
  • Covidien has repeatedly been forced to recall some of its other ventilators (here, here, and here) — including the Newport HT70. And rival manufacturers have faced these types of issues as well (for example, here and here). 

Accordingly, Covidien may well have preferred to cut its losses on the already problem-prone Aura project, before similar issues rendered it even more costly. 

In short, while it is impossible to prove that these development issues caused Covidien to pull the plug on the Aura project, it is certainly plausible that they did. This further supports the hypothesis that Covidien’s acquisition of Newport was not a killer acquisition. 

Ending the Aura project might have been an efficient outcome

As suggested above, moreover, it is entirely possible that Covidien was better able to recognize the poor prospects of Newport’s Aura project, and better organized to make the requisite decision to abandon it.

A small company like Newport faces greater difficulties abandoning entrepreneurial projects because doing so can impair a privately held firm’s ability to raise funds for subsequent projects.

Moreover, the relatively large boost to revenue and reputation that Newport (acquired for $103 million in 2012, versus Covidien’s $11.8 billion in annual sales) stood to gain from fulfilling a substantial US government contract could well have induced it to overestimate the project’s viability and to undertake excessive risk in the (vain) hope of bringing the project to fruition.  

While there is a tendency among antitrust scholars, enforcers, and practitioners to look for (and find…) antitrust-related rationales for mergers and other corporate conduct, it remains the case that most corporate control transactions (such as mergers) are driven by the acquiring firm’s expectation that it can manage more efficiently. As Henry G. Manne put it in his seminal article, Mergers and the Market for Corporate Control (1965): 

Since, in a world of uncertainty, profitable transactions will be entered into more often by those whose information is relatively more reliable, it should not surprise us that mergers within the same industry have been a principal form of changing corporate control. Reliable information is often available to suppliers and customers as well. Thus many vertical mergers may be of the control takeover variety rather than of the “foreclosure of competitors” or scale-economies type.

Of course, the same information that renders an acquiring firm in the same line of business knowledgeable enough to operate a target more efficiently could also enable it to effect a “killer acquisition” strategy. But the important point is that a takeover by a firm with a competing product line, after which the purchased company’s product line is abandoned, is at least as consistent with a “market for corporate control” story as with a “killer acquisition” story.

Indeed, as Florian Ederer himself noted with respect to the Covidien/Newport merger, 

“Killer acquisitions” can have a nefarious image, but killing off a rival’s product was probably not the main purpose of the transaction, Ederer said. He raised the possibility that Covidien decided to kill Newport’s innovation upon realising that the development of the devices would be expensive and unlikely to result in profits.

Concluding remarks

In conclusion, Covidien’s acquisition of Newport offers a cautionary tale about reckless journalism, “blackboard economics,” and government failure.

Reckless journalism because the New York Times clearly failed to do the appropriate due diligence for its story. Its journalists notably missed (or deliberately failed to mention) a number of critical pieces of information — such as the hugely important fact that most of Covidien’s and Newport’s products did not overlap, or the fact that there were numerous competitors in the highly competitive mechanical ventilator industry. 

And yet, that did not stop the authors from publishing their extremely alarming story, effectively suggesting that a small medical device merger materially contributed to the loss of many American lives.

The story also falls prey to what Ronald Coase called “blackboard economics”:

What is studied is a system which lives in the minds of economists but not on earth. 

Numerous commentators rushed to fit the story to their preconceived narratives, failing to undertake even a rudimentary examination of the underlying market conditions before they voiced their recriminations. 

The only thing that Covidien and Newport’s merger ostensibly had in common with the killer acquisition theory was that a large firm purchased a small rival, and that one of the small firm’s products was discontinued. But this does not even begin to meet the stringent conditions that must be fulfilled for the theory to hold water. Unfortunately, critics appear to have completely ignored all of the contradicting evidence. 

Finally, what the New York Times piece does offer is a chilling tale of government failure.

The inception of the US government’s BARDA program dates back to 2008 — twelve years before the COVID-19 pandemic hit the US. 

The collapse of the Aura project is no excuse for the fact that, more than six years after the Newport contract fell through, the US government still has not obtained the necessary ventilators. Questions should also be raised about the government’s decision to effectively put all of its eggs in one basket — twice. If anything, it is government failure that was the real culprit. 

And yet the New York Times piece and the critics shouting “killer acquisition!” effectively give the US government’s abject failure here a free pass — all in the service of pursuing their preferred “killer story.”

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Jonathan E. Nuechterlein (Partner, Sidley Austin LLP; former General Counsel, FTC; former Deputy General Counsel, FCC).

[Nuechterlein: I represented AT&T in United States v. AT&T, Inc. (“AT&T/Time Warner”), and this essay is based in part on comments I prepared on AT&T’s behalf for the FTC’s recent public hearings on Competition and Consumer Protection in the 21st Century. All views expressed here are my own.]

The draft Vertical Merger Guidelines (“Draft Guidelines”) might well leave ordinary readers with the misimpression that U.S. antitrust authorities have suddenly come to view vertical integration with a jaundiced eye. Such readers might infer from the draft that vertical mergers are a minefield of potential competitive harms; that only sometimes do they “have the potential to create cognizable efficiencies”; and that such efficiencies, even when they exist, often are not “of a character and magnitude” to keep the merger from becoming “anticompetitive.” (Draft Guidelines § 8, at 9). But that impression would be impossible to square with the past forty years of U.S. enforcement policy and with exhaustive empirical work confirming the largely beneficial effects of vertical integration. 

The Draft Guidelines should reflect those realities and thus should incorporate genuine limiting principles — rooted in concerns about two-level market power — to cabin their highly speculative theories of harm. Without such limiting principles, the Guidelines will remain more a theoretical exercise in abstract issue-spotting than what they purport to be: a source of genuine guidance for the public.

1. The presumptive benefits of vertical integration

Although the U.S. antitrust agencies (the FTC and DOJ) occasionally attach conditions to their approval of vertical mergers, they have litigated only one vertical merger case to judgment over the past forty years: AT&T/Time Warner. The reason for that paucity of cases is neither a lack of prosecutorial zeal nor a failure to understand “raising rivals’ costs” theories of harm. Instead, in the words of the FTC’s outgoing Bureau of Competition chief, Bruce Hoffman, the reason is the “broad consensus in competition policy and economic theory that the majority of vertical mergers are beneficial because they reduce costs and increase the intensity of interbrand competition.” 

Two exhaustive papers confirm that conclusion with hard empirical facts. The first was published in the International Journal of Industrial Organization in 2005 by FTC economists James Cooper, Luke Froeb, Dan O’Brien, and Michael Vita, who surveyed “multiple studies of vertical mergers and restraints” and “found only one example where vertical integration harmed consumers, and multiple examples where vertical integration unambiguously benefited consumers.” The second paper is a 2007 analysis in the Journal of Economic Literature co-authored by University of Michigan Professor Francine LaFontaine (who served from 2014 to 2015 as Director of the FTC’s Bureau of Economics) and Professor Margaret Slade of the University of British Columbia. Professors LaFontaine and Slade “did not have a particular conclusion in mind when [they] began to collect the evidence,” “tried to be fair in presenting the empirical regularities,” and were “therefore somewhat surprised at what the weight of the evidence is telling us.” They found that:

[U]nder most circumstances, profit-maximizing vertical-integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view. Although there are isolated studies that contradict this claim, the vast majority support it. (p. 680) 

Vertical mergers have this procompetitive track record for two basic reasons. First, by definition, they do not eliminate a competitor or increase market concentration in any market, and they pose fewer competitive concerns than horizontal mergers for that reason alone. Second, as Bruce Hoffman noted, “while efficiencies are often important in horizontal mergers, they are much more intrinsic to a vertical transaction” and “come with a more built-in likelihood of improving competition than horizontal mergers.”

It is widely accepted that vertical mergers often impose downward pricing pressure by eliminating double margins. Beyond that, as the Draft Guidelines observe (at § 8), vertical mergers can also play an indispensable role in “eliminat[ing] contracting frictions,” “streamlin[ing] production, inventory management, or distribution,” and “creat[ing] innovative products in ways that would have been hard to achieve through arm’s length contracts.”
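The double-margin point can be made concrete with a textbook linear-demand sketch (our illustration, not drawn from the Draft Guidelines):

```latex
% Textbook double-marginalization sketch: linear demand p = a - q,
% upstream marginal cost c, wholesale price w.
\begin{align*}
\text{Downstream: } & \max_q\ (a - q - w)\,q
  \;\Rightarrow\; q = \tfrac{a - w}{2} \\
\text{Upstream: }   & \max_w\ (w - c)\,\tfrac{a - w}{2}
  \;\Rightarrow\; w = \tfrac{a + c}{2},\quad
  p_{\mathrm{sep}} = \tfrac{3a + c}{4} \\
\text{Integrated: } & \max_q\ (a - q - c)\,q
  \;\Rightarrow\; q = \tfrac{a - c}{2},\quad
  p_{\mathrm{int}} = \tfrac{a + c}{2}
\end{align*}
% Since (a + c)/2 < (3a + c)/4 whenever a > c, integration removes the
% second markup, lowering the retail price and expanding output.
```

Two successive monopoly markups thus yield a higher retail price and lower output than a single integrated margin — precisely the inefficiency a vertical merger can eliminate.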

2. Harm to competitors, harm to competition, and the need for limiting principles

Vertical mergers do often disadvantage rivals of the merged firm. For example, a distributor might merge with one of its key suppliers, achieve efficiencies through the combination, and pass some of the savings through to consumers in the form of lower prices. The firm’s distribution rivals will lose profits if they match the price cut and will lose market share to the merged firm if they do not. But that outcome obviously counts in favor of supporting, not opposing, the merger because it makes consumers better off and because “[t]he antitrust laws… were enacted for the protection of competition, not competitors.” (Brunswick Corp. v. Pueblo Bowl-O-Mat). 

This distinction between harm to competition and harm to competitors is fundamental to U.S. antitrust law. Yet key passages in the Draft Guidelines seem to blur this distinction.

For example, one passage suggests that a vertical merger will be suspect if the merged firm might “chang[e] the terms of … rivals’ access” to an input, “one or more rivals would [then] lose sales,” and “some portion of those lost sales would be diverted to the merged firm.” Draft Guidelines § 5.a, at 4-5. Of course, the Guidelines’ drafters would never concede that they wish to vindicate the interests of competitors qua competitors. They would say that incremental changes in input prices, even if they do not structurally alter the competitive landscape, might nonetheless result in slightly higher overall consumer prices. And they would insist that speculation about such slight price effects should be sufficient to block a vertical merger. 

That was the precise theory of harm that DOJ pursued in AT&T/Time Warner, which involved a purely vertical merger between a video programmer (Time Warner) and a pay-TV distributor (AT&T/DirecTV). DOJ ultimately conceded that Time Warner was unlikely to withhold programming from (“foreclose”) AT&T’s pay-TV rivals. Instead, using a complex economic model, DOJ tried to show that the merger would increase Time Warner’s bargaining power and induce AT&T’s pay-TV rivals to pay somewhat higher rates for Time Warner programming, some portion of which the rivals would theoretically pass through to their own retail customers. At the same time, DOJ conceded that post-merger efficiencies would cause AT&T to lower its retail rates compared to the but-for world without the merger. DOJ nonetheless asserted that the aggregate effect of the pay-TV rivals’ price increases would exceed the aggregate effect of AT&T’s own price decrease. Without deciding whether such an effect would be sufficient to block the merger — a disputed legal issue — the courts ruled for the merging parties because DOJ could not substantiate its factual prediction that the merger would lead to programming price increases in the first place. 

It is unclear why DOJ picked this, of all cases, as its vehicle for litigating its first vertical merger case in decades. In an archetypal raising-rivals’-costs case, familiar from exclusive dealing law, the defendant forecloses its rivals by depriving them of a critical input or distribution channel and so marginalizes them in the process that it can profitably raise its own retail prices (see, e.g., McWane; Microsoft). AT&T/Time Warner could hardly have been further afield from that archetypal case. Again, DOJ conceded both that the merged firm would not foreclose rivals at all and that the merger would induce the firm to lower its retail prices below what it would charge if the merger were blocked. The draft Guidelines appear to double down on this odd strategy and portend more cases predicated on the same attenuated concerns about mere “chang[es in] the terms of … rivals’ access” to inputs, unaccompanied by any alleged structural changes in the competitive landscape.

Bringing such cases would be a mistake, both tactically and doctrinally.

“Changes in the terms of inputs” are a constant fact of life in nearly every market, with or without mergers, and have almost never aroused antitrust scrutiny. For example, whenever a firm enters into a long-term preferred-provider agreement with a new business partner in lieu of merging with it, the firm will, by definition, deal on less advantageous terms with the partner’s rivals than it otherwise would. That outcome is virtually never viewed as problematic, let alone unlawful, when it is accomplished through such long-term contracts. The government does not hire a team of economists to pore over documents, interview witnesses, and run abstruse models on whether the preferred-provider agreement can be projected, on balance, to produce incrementally higher downstream prices. There is no obvious reason why the government should treat such preferred provider arrangements differently if they arise through a vertical merger rather than a vertical contract — particularly given the draft Guidelines’ own acknowledgement that vertical mergers produce pro-consumer efficiencies that would be “hard to achieve through arm’s length contracts.” (Draft Guidelines § 8, at 9).

3. Towards a more useful safe harbor

Quoting then-Judge Breyer, the Supreme Court once noted that “antitrust rules ‘must be clear enough for lawyers to explain them to clients.’” That observation rings doubly true when applied to a document by enforcement officials purporting to “guide” business decisions. Firms contemplating a vertical merger need more than assurance that their merger will be cleared two years hence if their economists vanquish the government’s economists in litigation about the fine details of Nash bargaining theory. Instead, firms need true limiting principles, which identify the circumstances where any theory of harm would be so attenuated that litigating to block the merger is not worth the candle, particularly given the empirically validated presumption that most vertical mergers are pro-consumer.

The Agencies cannot meet the need for such limiting principles with the proposed “safe harbor” as it is currently phrased in the draft Guidelines: 

The Agencies are unlikely to challenge a vertical merger where the parties to the merger have a share in the relevant market of less than 20 percent, and the related product is used in less than 20 percent of the relevant market. (Draft Guidelines § 3, at 3). 

This anodyne assurance, with its arbitrarily low 20 percent thresholds phrased in the conjunctive, seems calculated more to preserve the agencies’ discretion than to provide genuine direction to industry. 

Nonetheless, the draft safe harbor does at least point in the right direction because it reflects a basic insight about two-level market power: vertical mergers are unlikely to create competitive concerns unless the merged firm will have, or could readily obtain, market power in both upstream and downstream markets. (See, e.g., Auburn News v. Providence Journal (“Where substantial market power is absent at any one product or distribution level, vertical integration will not have an anticompetitive effect.”)) This point parallels tying doctrine, which, like vertical merger analysis, addresses how vertical arrangements can affect competition across adjacent markets. As Justice O’Connor noted in Jefferson Parish, tying arrangements threaten competition 

primarily in the rare cases where power in the market for the tying product is used to create additional market power in the market for the tied product.… But such extension of market power is unlikely, or poses no threat of economic harm, unless…, [among other conditions, the seller has] power in the tying-product market… [and there is] a substantial threat that the tying seller will acquire market power in the tied-product market.

As this discussion suggests, the “20 percent” safe harbor in the draft Guidelines misses the mark in three respects.

First, as a proxy for the absence of market power, 20 percent is too low: courts have generally refused to infer market power when the seller’s market share was below 30 percent, and they have sometimes required higher shares. Of course, market share can be a highly overinclusive measure of market power, in that many firms with greater than a 30 percent share will lack market power. But it is nonetheless appropriate to use market share as a screen for further analysis.

Second, the draft’s safe harbor appears illogically in the conjunctive, applying only “where the parties to the merger have a share in the relevant market of less than 20 percent, and the related product is used in less than 20 percent of the relevant market.” That “and” should be an “or” because, again, vertical arrangements can be problematic only if a firm can use existing market power in a “related products” market to create or increase market power in the “relevant market.” 

Third, the phrase “the related product is used in less than 20 percent of the relevant market” is far too ambiguous to serve a useful role. For example, the “related product” sold by a merging upstream firm could be “used by” 100 percent of downstream buyers even though the firm’s sales account for only one percent of downstream purchases of that product if the downstream buyers multi-home — i.e., source their goods from many different sellers of substitutable products. The relevant proxy for “related product” market power is thus not how many customers “use” the merging firm’s product, but what percentage of overall sales of that product (including reasonable substitutes) it makes. 
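The multi-homing arithmetic behind this objection can be illustrated with a minimal sketch (the numbers here are hypothetical, chosen purely to dramatize the gap between the two measures):

```python
# Hypothetical illustration: 100 downstream buyers, each multi-homing
# across 100 substitutable upstream suppliers in equal proportions.
num_buyers = 100
suppliers_per_buyer = 100  # each buyer sources from all 100 suppliers

# Every buyer purchases at least something from the merging firm, so
# the firm's product is "used by" 100% of the downstream market...
buyers_using_firm = num_buyers
usage_share = buyers_using_firm / num_buyers  # 1.0, i.e., 100%

# ...yet because purchases are split evenly across suppliers, the firm
# accounts for only 1% of overall sales of the related product.
sales_share = 1 / suppliers_per_buyer  # 0.01, i.e., 1%

print(f"usage share: {usage_share:.0%}, sales share: {sales_share:.0%}")
```

Under the draft’s “used in” phrasing this firm would fall outside the safe harbor despite a trivial sales share — which is why overall sales of the related product (including substitutes) is the better proxy.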

Of course, this observation suggests that, when push comes to shove in litigation, the government must usually define two markets: not only (1) a “relevant market” in which competitive harm is alleged to occur, but also (2) an adjacent “related product” market in which the merged firm is alleged to have market power. Requiring such dual market definition is entirely appropriate. Ultimately, any raising-rivals’-costs theory relies on a showing that a vertically integrated firm has some degree of market power in a “related products” market when dealing with its rivals in an adjacent “relevant market.” And market definition is normally an inextricable component of a litigated market power analysis.

If these three changes are made, the safe harbor would read: 

The Agencies are unlikely to challenge a vertical merger where the parties to the merger have a share in the relevant market of less than 30 percent, or the related product sold by one of the parties accounts for less than 30 percent of the overall sales of that related product, including reasonable substitutes.

Like all safe harbors, this one would be underinclusive (in that many mergers outside of the safe harbor are unobjectionable) and may occasionally be overinclusive. But this substitute language would be more useful as a genuine safe harbor because it would impose true limiting principles. And it would more accurately reflect the ways in which market power considerations should inform vertical analysis—whether of contractual arrangements or mergers.

The 2020 Draft Joint Vertical Merger Guidelines:

What’s in, what’s out — and do we need them anyway?

February 6 & 7, 2020

Welcome! We’re delighted to kick off our two-day blog symposium on the recently released Draft Joint Vertical Merger Guidelines from the DOJ Antitrust Division and the Federal Trade Commission. 

If adopted by the agencies, the guidelines would mark the first time since 1984 that U.S. federal antitrust enforcers have provided official, public guidance on their approach to the increasingly important issue of vertical merger enforcement. 

As previously noted, the release of the draft guidelines was controversial from the outset: The FTC vote to issue the draft was mixed, with a dissent from Commissioner Slaughter, an abstention from Commissioner Chopra, and a concurring statement from Commissioner Wilson.

As the antitrust community gears up to debate the draft guidelines, we have assembled an outstanding group of antitrust experts to weigh in with their initial thoughts on the guidelines here at Truth on the Market. We hope this symposium will provide important insights and stand as a useful resource for the ongoing discussion.

The scholars and practitioners who will participate in the symposium are:

  • Timothy J. Brennan (Professor, Public Policy and Economics, University of Maryland; former Chief Economist, FCC; former economist, DOJ Antitrust Division)
  • Steven Cernak (Partner, Bona Law PC; former antitrust counsel, GM)
  • Eric Fruits (Chief Economist, ICLE; Professor of Economics, Portland State University)
  • Herbert Hovenkamp (James G. Dinan University Professor of Law, University of Pennsylvania)
  • Jonathan M. Jacobson (Partner, Wilson Sonsini Goodrich & Rosati) and Kenneth Edelson (Associate, Wilson Sonsini Goodrich & Rosati)
  • William J. Kolasky (Partner, Hughes Hubbard & Reed; former Deputy Assistant Attorney General, DOJ Antitrust Division) and Philip A. Giordano (Partner, Hughes Hubbard & Reed LLP)
  • Geoffrey A. Manne (President & Founder, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics) and Kristian Stout (Associate Director, ICLE)
  • Jonathan E. Nuechterlein (Partner, Sidley Austin LLP; former General Counsel, FTC; former Deputy General Counsel, FCC)
  • Sharis A. Pozen (Partner, Clifford Chance; former Vice President of Global Competition Law and Policy, GE; former Acting Assistant Attorney General, DOJ Antitrust Division), Timothy Cornell (Partner, Clifford Chance), Brian Concklin (Counsel, Clifford Chance), and Michael Van Arsdall (Counsel, Clifford Chance)
  • Jan Rybnicek (Counsel, Freshfields Bruckhaus Deringer; former attorney adviser to Commissioner Joshua D. Wright, FTC)
  • Steven C. Salop (tent.) (Professor of Economics and Law, Georgetown University; former Associate Director, FTC Bureau of Economics)
  • Scott A. Sher (Partner, Wilson Sonsini Goodrich & Rosati) and Matthew McDonald (Associate, Wilson Sonsini Goodrich & Rosati)
  • Margaret Slade (Professor Emeritus, Vancouver School of Economics, University of British Columbia)
  • Gregory Werden (former Senior Economic Counsel, DOJ Antitrust Division) and Luke M. Froeb (William C. Oehmig Chair in Free Enterprise and Entrepreneurship, Vanderbilt University; former Chief Economist, DOJ Antitrust Division; former Chief Economist, FTC)
  • Lawrence J. White (Robert Kavesh Professor of Economics, New York University; former Chief Economist, DOJ Antitrust Division)
  • Joshua D. Wright (University Professor of Law, George Mason University; former Commissioner, FTC), Douglas H. Ginsburg (Senior Circuit Judge, US Court of Appeals for the DC Circuit; Professor of Law, George Mason University; former Assistant Attorney General, DOJ Antitrust Division), Tad Lipsky (Assistant Professor of Law, George Mason University; former Acting Director, FTC Bureau of Competition; former chief antitrust counsel, Coca-Cola; former Deputy Assistant Attorney General, DOJ Antitrust Division), and John M. Yun (Associate Professor of Law, George Mason University; former Acting Deputy Assistant Director, FTC Bureau of Economics)

The first of the participants’ initial posts will appear momentarily, with additional posts appearing throughout the day today and tomorrow. We hope to generate a lively discussion, and expect some of the participants to offer follow-up posts and/or comments on their fellow participants’ posts — please be sure to check back throughout the day, and be sure to check the comments. We hope our readers will join us in the comments, as well.

Once again, welcome!

Truth on the Market is pleased to announce its next blog symposium:

The 2020 Draft Joint Vertical Merger Guidelines: What’s in, what’s out — and do we need them anyway?

February 6 & 7, 2020

Symposium background

On January 10, 2020, the DOJ Antitrust Division and the Federal Trade Commission released Draft Joint Vertical Merger Guidelines for public comment. If adopted by the agencies, the guidelines would mark the first time since 1984 that U.S. federal antitrust enforcers have provided official, public guidance on their approach to the increasingly important issue of vertical merger enforcement: 

“Challenging anticompetitive vertical mergers is essential to vigorous enforcement. The agencies’ vertical merger policy has evolved substantially since the issuance of the 1984 Non-Horizontal Merger Guidelines, and our guidelines should reflect the current enforcement approach. Greater transparency about the complex issues surrounding vertical mergers will benefit the business community, practitioners, and the courts,” said FTC Chairman Joseph J. Simons.

As evidenced by FTC Commissioner Slaughter’s dissent and FTC Commissioner Chopra’s abstention from the FTC’s vote to issue the draft guidelines, the topic is a contentious one. Similarly, as FTC Commissioner Wilson noted in her concurring statement, the recent FTC hearing on vertical mergers demonstrated that there is a vigorous dispute over what new guidelines should look like (or even if the 1984 Non-Horizontal Guidelines should be updated at all).

The agencies have announced two upcoming workshops to discuss the draft guidelines and have extended the comment period on the draft until February 26.

In advance of the workshops and the imminent discussions over the draft guidelines, we have asked a number of antitrust experts to weigh in here at Truth on the Market: to preview the coming debate by exploring the economic underpinnings of the draft guidelines and their likely role in the future of merger enforcement at the agencies, as well as what is in the guidelines and — perhaps more important — what is left out.  

Beginning the morning of Thursday, February 6, and continuing during business hours through Friday, February 7, Truth on the Market (TOTM) and the International Center for Law & Economics (ICLE) will host a blog symposium on the draft guidelines. 

Symposium participants

As in the past (see examples of previous TOTM blog symposia here), we’ve lined up an outstanding and diverse group of scholars to discuss these issues, including:

  • Timothy J. Brennan (Professor, Public Policy and Economics, University of Maryland; former Chief Economist, FCC; former economist, DOJ Antitrust Division)
  • Steven Cernak (Partner, Bona Law PC; former antitrust counsel, GM)
  • Luke M. Froeb (William C. Oehmig Chair in Free Enterprise and Entrepreneurship, Vanderbilt University; former Chief Economist, DOJ Antitrust Division; former Chief Economist, FTC)
  • Eric Fruits (Chief Economist, ICLE; Professor of Economics, Portland State University)
  • Douglas H. Ginsburg (Senior Circuit Judge, US Court of Appeals for the DC Circuit; Professor of Law, George Mason University; former Assistant Attorney General, DOJ Antitrust Division)
  • Herbert Hovenkamp (James G. Dinan University Professor of Law, University of Pennsylvania)
  • Jonathan M. Jacobson (Partner, Wilson Sonsini Goodrich & Rosati)
  • William J. Kolasky (Partner, Hughes Hubbard & Reed; former Deputy Assistant Attorney General, DOJ Antitrust Division)
  • Tad Lipsky (Assistant Professor of Law, George Mason University; former Acting Director, FTC Bureau of Competition; former chief antitrust counsel, Coca-Cola; former Deputy Assistant Attorney General, DOJ Antitrust Division) 
  • Geoffrey A. Manne (President & Founder, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics)
  • Jonathan E. Nuechterlein (Partner, Sidley Austin LLP; former General Counsel, FTC; former Deputy General Counsel, FCC)
  • Sharis A. Pozen (Partner, Clifford Chance; former Vice President of Global Competition Law and Policy, GE; former Acting Assistant Attorney General, DOJ Antitrust Division) 
  • Jan Rybnicek (Counsel, Freshfields Bruckhaus Deringer; former attorney adviser to Commissioner Joshua D. Wright, FTC)
  • Steven C. Salop (tent.) (Professor of Economics and Law, Georgetown University; former Associate Director, FTC Bureau of Economics)
  • Scott A. Sher (Partner, Wilson Sonsini Goodrich & Rosati)
  • Margaret Slade (Professor Emeritus, Vancouver School of Economics, University of British Columbia)
  • Kristian Stout (Associate Director, ICLE)
  • Gregory Werden (former Senior Economic Counsel, DOJ Antitrust Division)
  • Lawrence J. White (Robert Kavesh Professor of Economics, New York University; former Chief Economist, DOJ Antitrust Division)
  • Joshua D. Wright (University Professor of Law, George Mason University; former Commissioner, FTC)
  • John M. Yun (Associate Professor of Law, George Mason University; former Acting Deputy Assistant Director, FTC Bureau of Economics)

We want to thank all of these excellent panelists for agreeing to take time away from their busy schedules to participate in this symposium. We are hopeful that this discussion will provide invaluable insight and perspective on the Draft Joint Vertical Merger Guidelines.

Look for the first posts starting Thursday, February 6!

In mid-November, the 50 state attorneys general (AGs) investigating Google’s advertising practices expanded their antitrust probe to include the company’s search and Android businesses. Texas Attorney General Ken Paxton, the lead on the case, was supportive of the development, but made clear that other states would manage the investigations of search and Android separately. While attorneys might see the benefit in splitting up search and advertising investigations, platforms like Google need to be understood as a coherent whole. If the state AGs’ case is truly concerned with the overall impact on the welfare of consumers, it will need to be firmly grounded in the unique economics of this platform.

Back in September, 50 state AGs, including those in Washington, DC and Puerto Rico, announced an investigation into Google. In opening the case, Paxton said that, “There is nothing wrong with a business becoming the biggest game in town if it does so through free market competition, but we have seen evidence that Google’s business practices may have undermined consumer choice, stifled innovation, violated users’ privacy, and put Google in control of the flow and dissemination of online information.” While the original document demands focused on Google’s “overarching control of online advertising markets and search traffic,” reports since then suggest that the primary investigation centers on online advertising.

Defining the market

Since the market definition is the first and arguably the most important step in an antitrust case, Paxton has tipped his hand and shown that the investigation is converging on the online ad market. Yet, he faltered when he wrote in The Wall Street Journal that, “Each year more than 90% of Google’s $117 billion in revenue comes from online advertising. For reference, the entire market for online advertising is around $130 billion annually.” As Patrick Hedger of the Competitive Enterprise Institute was quick to note, Paxton cited global revenue numbers and domestic advertising statistics. In reality, Google’s share of the online advertising market in the United States is 37 percent and is widely expected to fall.

When Google faced scrutiny by the Federal Trade Commission in 2013, the leaked staff report explained that “the Commission and the Department of Justice have previously found online ‘search advertising’ to be a distinct product market.” This finding, which dates from 2007, simply wouldn’t stand today. Facebook’s ad platform was launched in 2007 and has grown to become a major competitor to Google. Even more recently, Amazon has jumped into the space and independent platforms like Telaria, Rubicon Project, and The Trade Desk have all made inroads. In contrast to the late 2000s, advertisers now use about four different online ad platforms.

Moreover, the relationship between ad prices and industry concentration is complicated. In traditional economic analysis, fewer suppliers of a product generally translate into higher prices. In the online ad market, however, a smaller number of platforms means that ad buyers can target people through keywords more efficiently. Because advertisers have access to superior information, research finds that more concentration tends to lead to lower search engine revenues. 

The addition of new fronts in the state AGs’ investigation could spell disaster for consumers. While search and advertising are distinct markets, it is the act of tying the two together that makes platforms like Google valuable to users and advertisers alike. Demand is tightly integrated between the two sides of the platform. Changes in user and advertiser preferences have far outsized effects on the overall platform value because each side responds to the other. If users experience an increase in price or a reduction in quality, then they will use the platform less or just log off completely. Advertisers see this change in users and react by reducing their demand for ad placements as well. When advertisers drop out, the total amount of content also recedes and users react once again. Economists call these relationships demand interdependencies. The demand on one side of the market is interdependent with demand on the other. Research on magazines, newspapers, and social media sites all support the existence of demand interdependencies. 
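The amplification created by these interdependencies can be sketched numerically. The following stylized model — with made-up parameters chosen purely for illustration, not estimated from any market — shows how a shock to one side of a platform propagates to both:

```python
# Stylized two-sided platform: each side's demand depends on the size
# of the other side (all parameters are illustrative assumptions).
def equilibrium(user_quality: float, ad_price: float, steps: int = 200):
    """Iterate the two demand relationships to a fixed point."""
    users, ads = 1.0, 1.0
    for _ in range(steps):
        # Users value the ad-funded content and the platform's quality;
        # advertisers buy more placements when there are more users.
        users = user_quality * (1 + 0.5 * ads)
        ads = (2.0 - ad_price) * 0.5 * users
    return users, ads

base_users, base_ads = equilibrium(user_quality=1.0, ad_price=1.0)
hit_users, hit_ads = equilibrium(user_quality=0.9, ad_price=1.0)

# A 10% hit to user-side quality shrinks BOTH sides by roughly 13%:
# each side's decline feeds back into the other's demand.
print(hit_users / base_users, hit_ads / base_ads)
```

The point of the sketch is the outsized, symmetric response: a one-sided shock ends up shrinking both sides by more than its direct effect, which is exactly why the two sides cannot be analyzed in isolation.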

Economists David Evans and Richard Schmalensee, who were cited extensively in the Supreme Court case Ohio v. American Express, explained the importance of integrating these relationships into competition analysis: “The key point is that it is wrong as a matter of economics to ignore significant demand interdependencies among the multiple platform sides” when defining markets. If they are ignored, then the typical analytical tools will yield incorrect assessments. Understanding these relationships makes the investigation all the more difficult.

The limits of remedies

Most likely, this investigation will follow the trajectory of the Microsoft case in the 1990s, when states did the legwork for a larger case brought by the Department of Justice (DOJ). The DOJ already has its own investigation into Google and will probably pull all of the parties together for one large suit. Google is also subject to a probe by the House Judiciary Committee. What is certain is that Google will be saddled with years of regulatory scrutiny; what remains unclear is what kind of changes the AGs are after.

The investigation might aim to secure behavioral changes, but in platform industries these often come with a cost. The European Commission, for example, got Google to change its practices with its Android operating system for mobile phones. Much like search and advertising, the Android ecosystem is a platform with cross subsidization and demand interdependencies between the various sides of the market. Because the company was ordered to stop tying the Android operating system to apps, manufacturers of phones and tablets now have to pay a licensing fee in Europe if they want Google’s apps and the Play Store. Remedies meant to change one side of the platform resulted in those relationships being unbundled. When regulators force cross subsidization to become explicit prices, consumers are the ones who pay.

The absolute worst case scenario would be a break up of Google, which has been a centerpiece of Senator Elizabeth Warren’s presidential platform. As I explained last year, that would be a death warrant for the company:

[T]he value of both Facebook and Google comes in creating the platform, which combines users with advertisers. Before the integration of ad networks, the search engine industry was struggling and it was simply not a major player in the Internet ecosystem. In short, the search engines, while convenient, had no economic value. As Michael Moritz, a major investor of Google, said of those early years, “We really couldn’t figure out the business model. There was a period where things were looking pretty bleak.” But Google didn’t pave the way. Rather, Bill Gross at GoTo.com succeeded in showing everyone how advertising could work to build a business. Google founders Larry Page and Sergey Brin merely adopted the model in 2002 and by the end of the year, the company was profitable for the first time. Marrying the two sides of the platform created value. Tearing them apart will also destroy value.

The state AGs need to resist making this investigation into a political showcase. As Pew noted in documenting the rise of North Carolina Attorney General Josh Stein to national prominence, “What used to be a relatively high-profile position within a state’s boundaries has become a springboard for publicity across the country.” While some might cheer the opening of this investigation, consumer welfare needs to be front and center. To properly understand how consumer welfare might be impacted by an investigation, the state AGs need to take seriously the path already laid out by platform economics. For the sake of consumers, let’s hope they are up to the task. 

An oft-repeated claim at conferences, in the media, and among left-wing think tanks is that lax antitrust enforcement has led to a substantial recent increase in concentration in the US, strangling the economy, harming workers, and saddling consumers with greater markups in the process. But what if rising concentration (and the current level of antitrust enforcement) were an indication of more competition, not less?

By now the concentration-as-antitrust-bogeyman story is virtually conventional wisdom, echoed, of course, by political candidates such as Elizabeth Warren trying to cash in on the need for a government response to such dire circumstances:

In industry after industry — airlines, banking, health care, agriculture, tech — a handful of corporate giants control more and more. The big guys are locking out smaller, newer competitors. They are crushing innovation. Even if you don’t see the gears turning, this massive concentration means prices go up and quality goes down for everything from air travel to internet service.  

But the claim that lax antitrust enforcement has led to increased concentration in the US and that it has caused economic harm has been debunked several times (for some of our own debunking, see Eric Fruits’ posts here, here, and here). Or, more charitably to those who tirelessly repeat the claim as if it were “settled science,” it has been significantly called into question.

Most recently, several working papers that examine the concentration data in detail and attempt to identify the likely cause of the observed trends show precisely the opposite relationship. The reason for increased concentration appears to be technological, not anticompetitive. And, as might be expected from that cause, its effects are beneficial. Indeed, the story is both intuitive and positive.

What’s more, while national concentration does appear to be increasing in some sectors of the economy, it’s not actually so clear that the same is true for local concentration — which is often the relevant antitrust market.

The most recent — and, I believe, most significant — corrective to the conventional story comes from economists Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University. As they write in a recent paper titled, “The Industrial Revolution in Services”: 

We show that new technologies have enabled firms that adopt them to scale production over a large number of establishments dispersed across space. Firms that adopt this technology grow by increasing the number of local markets that they serve, but on average are smaller in the markets that they do serve. Unlike Henry Ford’s revolution in manufacturing more than a hundred years ago when manufacturing firms grew by concentrating production in a given location, the new industrial revolution in non-traded sectors takes the form of horizontal expansion across more locations. At the same time, multi-product firms are forced to exit industries where their productivity is low or where the new technology has had no effect. Empirically we see that top firms in the overall economy are more focused and have larger market shares in their chosen sectors, but their size as a share of employment in the overall economy has not changed. (pp. 42-43) (emphasis added).

This makes perfect sense. And it has the benefit of not second-guessing structural changes made in response to technological change. Rather, it points to technological change as doing what it regularly does: improving productivity.

The implementation of new technology seems to be conferring benefits — it’s just that these benefits are not evenly distributed across all firms and industries. But the assumption that larger firms are causing harm (or even that there is any harm in the first place, whatever the cause) is unmerited. 

What the authors find is that the apparent rise in national concentration doesn’t tell the relevant story, and the data certainly aren’t consistent with assumptions that anticompetitive conduct is either a cause or a result of structural changes in the economy.

Hsieh and Rossi-Hansberg point out that increased concentration is not happening everywhere, but is being driven by just three industries:

First, we show that the phenomena of rising concentration . . . is only seen in three broad sectors – services, wholesale, and retail. . . . [T]op firms have become more efficient over time, but our evidence indicates that this is only true for top firms in these three sectors. In manufacturing, for example, concentration has fallen.

Second, rising concentration in these sectors is entirely driven by an increase [in] the number of local markets served by the top firms. (p. 4) (emphasis added).

These findings are a gloss on a (then) working paper — The Fall of the Labor Share and the Rise of Superstar Firms — by David Autor, David Dorn, Lawrence F. Katz, Christina Patterson, and John Van Reenen (now forthcoming in the QJE). Autor et al. (2019) finds that concentration is rising, and that it is the result of increased productivity:

If globalization or technological changes push sales towards the most productive firms in each industry, product market concentration will rise as industries become increasingly dominated by superstar firms, which have high markups and a low labor share of value-added.

We empirically assess seven predictions of this hypothesis: (i) industry sales will increasingly concentrate in a small number of firms; (ii) industries where concentration rises most will have the largest declines in the labor share; (iii) the fall in the labor share will be driven largely by reallocation rather than a fall in the unweighted mean labor share across all firms; (iv) the between-firm reallocation component of the fall in the labor share will be greatest in the sectors with the largest increases in market concentration; (v) the industries that are becoming more concentrated will exhibit faster growth of productivity; (vi) the aggregate markup will rise more than the typical firm’s markup; and (vii) these patterns should be observed not only in U.S. firms, but also internationally. We find support for all of these predictions. (emphasis added).

This alone is quite important (and seemingly often overlooked). Autor et al. (2019) finds that rising concentration is a result of increased productivity that weeds out less-efficient producers. This is a good thing. 

But Hsieh & Rossi-Hansberg drill down into the data to find something perhaps even more significant: the rise in concentration itself is limited to just a few sectors, and, where it is observed, it is predominantly a function of more efficient firms competing in more — and more localized — markets. This means that competition is increasing, not decreasing, whether it is accompanied by an increase in concentration or not. 
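The arithmetic of “rising national, falling local” concentration can be made concrete with a toy example. All numbers below are invented purely for illustration; local concentration is measured with the Herfindahl-Hirschman Index (HHI), the sum of squared market shares, and the “national” measure is the top firm’s share of the whole economy:

```python
# Invented numbers illustrating how a "superstar" chain's expansion can
# raise national concentration while lowering it in every local market.

def hhi(shares_pct):
    """Herfindahl-Hirschman Index: sum of squared market shares (in %)."""
    return sum(s ** 2 for s in shares_pct)

# Before: two equal-sized cities, four equal firms in each.
# The chain operates only in city 1.
local_before = [25, 25, 25, 25]        # HHI = 2500 in each city

# After: the chain enters city 2 as a fifth competitor, and shares equalize.
local_after = [20, 20, 20, 20, 20]     # HHI = 2000 in each city

assert hhi(local_after) < hhi(local_before)  # local concentration FALLS

# Nationally, though, the chain's share of the whole economy rises:
top_share_before = 25 / 2   # 25% of half the economy = 12.5% nationally
top_share_after = 20.0      # 20% of both halves = 20% nationally
assert top_share_after > top_share_before    # national concentration RISES
```

Every consumer in this hypothetical faces more competitors after the expansion, even though a national concentration statistic moves in the “wrong” direction — precisely the pattern Hsieh and Rossi-Hansberg document.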

No matter how many times and under how many monikers the antitrust populists try to revive it, the Structure-Conduct-Performance paradigm remains as moribund as ever. Indeed, on this point, as one of the new antitrust agonists’ own, Fiona Scott Morton, has written (along with co-authors Martin Gaynor and Steven Berry):

In short, there is no well-defined “causal effect of concentration on price,” but rather a set of hypotheses that can explain observed correlations of the joint outcomes of price, measured markups, market share, and concentration. As Bresnahan (1989) argued three decades ago, no clear interpretation of the impact of concentration is possible without a clear focus on equilibrium oligopoly demand and “supply,” where supply includes the list of the marginal cost functions of the firms and the nature of oligopoly competition. 

Some of the recent literature on concentration, profits, and markups has simply reasserted the relevance of the old-style structure-conduct-performance correlations. For economists trained in subfields outside industrial organization, such correlations can be attractive. 

Our own view, based on the well-established mainstream wisdom in the field of industrial organization for several decades, is that regressions of market outcomes on measures of industry structure like the Herfindahl-Hirschman Index should be given little weight in policy debates. Such correlations will not produce information about the causal estimates that policy demands. It is these causal relationships that will help us understand what, if anything, may be causing markups to rise. (emphasis added).

Indeed! And one reason for the enduring irrelevance of market concentration measures is well laid out in Hsieh and Rossi-Hansberg’s paper:

This evidence is consistent with our view that increasing concentration is driven by new ICT-enabled technologies that ultimately raise aggregate industry TFP. It is not consistent with the view that concentration is due to declining competition or entry barriers . . . , as these forces will result in a decline in industry employment. (pp. 4-5) (emphasis added)

The net effect is that there is essentially no change in concentration by the top firms in the economy as a whole. The “super-star” firms of today’s economy are larger in their chosen sectors and have unleashed productivity growth in these sectors, but they are not any larger as a share of the aggregate economy. (p. 5) (emphasis added)

Thus, to begin with, the claim that increased concentration leads to monopsony in labor markets (and thus unemployment) appears to be false. Hsieh and Rossi-Hansberg again:

[W]e find that total employment rises substantially in industries with rising concentration. This is true even when we look at total employment of the smaller firms in these industries. (p. 4)

[S]ectors with more top firm concentration are the ones where total industry employment (as a share of aggregate employment) has also grown. The employment share of industries with increased top firm concentration grew from 70% in 1977 to 85% in 2013. (p. 9)

Firms throughout the size distribution increase employment in sectors with increasing concentration, not only the top 10% firms in the industry, although by definition the increase is larger among the top firms. (p. 10) (emphasis added)

Again, what actually appears to be happening is that national-level growth in concentration is actually being driven by increased competition in certain industries at the local level:

93% of the growth in concentration comes from growth in the number of cities served by top firms, and only 7% comes from increased employment per city. . . . [A]verage employment per county and per establishment of top firms falls. So necessarily more than 100% of concentration growth has to come from the increase in the number of counties and establishments served by the top firms. (p. 13)

The net effect is a decrease in the power of top firms relative to the economy as a whole, as the largest firms specialize more, and are dominant in fewer industries:

Top firms produce in more industries than the average firm, but less so in 2013 compared to 1977. The number of industries of a top 0.001% firm (relative to the average firm) fell from 35 in 1977 to 17 in 2013. The corresponding number for a top 0.01% firm is 21 industries in 1977 and 9 industries in 2013. (p. 17)

Thus, summing up, technology has led to increased productivity as well as greater specialization by large firms, especially in relatively concentrated industries (exactly the opposite of the pessimistic stories):  

[T]op firms are now more specialized, are larger in the chosen industries, and these are precisely the industries that have experienced concentration growth. (p. 18)

Unsurprisingly (except to some…), the increase in concentration in certain industries does not translate into an increase in concentration in the economy as a whole. In other words, workers can shift jobs between industries, and there is enough geographic and firm mobility to prevent monopsony. (Despite rampant assumptions that increased concentration is constraining labor competition everywhere…).

Although the employment share of top firms in an average industry has increased substantially, the employment share of the top firms in the aggregate economy has not. (p. 15)

It is also simply not clearly the case that concentration is causing prices to rise or otherwise causing any harm. As Hsieh and Rossi-Hansberg note:

[T]he magnitude of the overall trend in markups is still controversial . . . and . . . the geographic expansion of top firms leads to declines in local concentration . . . that could enhance competition. (p. 37)

Indeed, recent papers such as Traina (2018), Gutiérrez and Philippon (2017), and the IMF (2019) have found increasing markups over the last few decades but at much more moderate rates than the famous De Loecker and Eeckhout (2017) study. Other parts of the anticompetitive narrative have been challenged as well. Karabarbounis and Neiman (2018) finds that profits have increased, but are still within their historical range. Rinz (2018) shows decreased wages in concentrated markets but also points out that local concentration has been decreasing over the relevant time period.

None of this should be so surprising. Has antitrust enforcement gotten more lax, leading to greater concentration? According to Vita and Osinski (2018), not so much. And how about the stagnant rate of new firms? Are incumbent monopolists killing off new startups? The more likely — albeit mundane — explanation, according to Hopenhayn et al. (2018), is that increased average firm age is due to an aging labor force. Lastly, the paper from Hsieh and Rossi-Hansberg discussed above is only the latest in a series of papers, including Bessen (2017), Van Reenen (2018), and Autor et al. (2019), that shows a rise in fixed costs due to investments in proprietary information technology, which correlates with increased concentration. 

So what is the upshot of all this?

  • First, as noted, employment has not decreased because of increased concentration; quite the opposite. Employment has increased in the industries that have experienced the most concentration at the national level.
  • Second, this result suggests that the rise in concentrated industries has not led to increased market power over labor.
  • Third, concentration itself needs to be understood more precisely. It is not explained by a simple narrative that the economy as a whole has experienced a great deal of concentration and this has been detrimental for consumers and workers. Specific industries have experienced national level concentration, but simultaneously those same industries have become more specialized and expanded competition into local markets. 

Surprisingly (because their paper has been around for a while and yet this conclusion is rarely recited by advocates for more intervention — although they happily use the paper to support claims of rising concentration), Autor et al. (2019) finds the same thing:

Our formal model, detailed below, generates superstar effects from increases in the toughness of product market competition that raise the market share of the most productive firms in each sector at the expense of less productive competitors. . . . An alternative perspective on the rise of superstar firms is that they reflect a diminution of competition, due to a weakening of U.S. antitrust enforcement (Dottling, Gutierrez and Philippon, 2018). Our findings on the similarity of trends in the U.S. and Europe, where antitrust authorities have acted more aggressively on large firms (Gutierrez and Philippon, 2018), combined with the fact that the concentrating sectors appear to be growing more productive and innovative, suggests that this is unlikely to be the primary explanation, although it may be important in some specific industries (see Cooper et al, 2019, on healthcare for example). (emphasis added).

The popular narrative among Neo-Brandeisian antitrust scholars that lax antitrust enforcement has led to concentration detrimental to society is at base an empirical one. The findings of these empirical papers severely undermine the persuasiveness of that story.

A recently published book, “Kochland – The Secret History of Koch Industries and Corporate Power in America” by Christopher Leonard, presents a gripping account of relentless innovation and the power of the entrepreneur to overcome adversity in pursuit of delivering superior goods and services to the market while also reaping impressive profits. It’s truly an inspirational American story.

Now, I should note that I don’t believe Mr. Leonard actually intended his book to be quite so complimentary to the Koch brothers and the vast commercial empire they built up over the past several decades. He includes plenty of material detailing, for example, their employees playing fast and loose with environmental protection rules, or their labor lawyers aggressively bargaining with unions, sometimes to the detriment of workers. And all of the stories he presents are supported by sympathetic emotional appeals through personal anecdotes. 

But, even then, many of the negative claims are part of a larger theme of Koch Industries progressively improving its business practices. One prominent example is how Koch Industries learned from its environmentally unfriendly past and implemented vigorous programs to ensure “10,000% compliance” with all federal and state environmental laws. 

What really stands out across most or all of the stories Leonard has to tell, however, is the deep appreciation that Charles Koch and his entrepreneurially-minded employees have for the fundamental nature of the market as an information discovery process. Indeed, Koch Industries has much in common with modern technology firms like Amazon in this respect — but decades before the information technology revolution made the full power of “Big Data” gathering and processing as obvious as it is today.

The impressive information operation of Koch Industries

Much of Kochland is devoted to stories in which Koch Industries’ ability to gather and analyze data from across its various units led to the production of superior results for the economy and consumers. For example,  

Koch… discovered that the National Parks Service published data showing the snow pack in the California mountains, data that Koch could analyze to determine how much water would be flowing in future months to generate power at California’s hydroelectric plants. This helped Koch predict with great accuracy the future supply of electricity and the resulting demand for natural gas.

Koch Industries was able to use this information to anticipate the amount of power (megawatt hours) it needed to deliver to the California power grid (admittedly, in a way that was somewhat controversial because of poorly drafted legislation relating to the new regulatory regime governing power distribution and resale in the state).

And, in 2000, while many firms in the economy were still riding the natural gas boom of the 90s, 

two Koch analysts and a reservoir engineer… accurately predicted a coming disaster that would contribute to blackouts along the West Coast, the bankruptcy of major utilities, and skyrocketing costs for many consumers.

This insight enabled Koch Industries to reap huge profits in derivatives trading, and it also enabled it to enter — and essentially rescue — a market segment crucial for domestic farmers: nitrogen fertilizer.

The market volatility in natural gas from the late 90s through early 00s wreaked havoc on the nitrogen fertilizer industry, for which natural gas is the primary input. Farmland — a struggling fertilizer producer — had progressively mismanaged its business over the preceding two decades by focusing on developing lines of business outside of its core competencies, including blithely exposing itself to the volatile natural gas market in pursuit of short-term profits. By the time it was staring bankruptcy in the face, there were no other companies interested in acquiring it. 

Koch’s analysts, however, noticed that many of Farmland’s key fertilizer plants were located in prime locations for reaching local farmers. Once the market improved, whoever controlled those key locations would be in a superior position for selling into the nitrogen fertilizer market. So, by utilizing the data it derived from its natural gas operations (both operating pipelines and storage facilities, as well as understanding the volatility of gas prices and availability through its derivatives trading operations), Koch Industries was able to infer that it could make substantial profits by rescuing this bankrupt nitrogen fertilizer business. 

Emblematic of Koch’s philosophy of only making long-term investments, 

[o]ver the next ten years, [Koch Industries] spent roughly $500 million to outfit the plants with new technology while streamlining production… Koch installed a team of fertilizer traders in the office… [t]he traders bought and sold supplies around the globe, learning more about fertilizer markets each day. Within a few years, Koch Fertilizer built a global distribution network. Koch founded a new company, called Koch Energy Services, which bought and sold natural gas supplies to keep the fertilizer plants stocked.

Thus, Koch Industries not only rescued midwest farmers from shortages that would have decimated their businesses, it invested heavily to ensure that production would continue to increase to meet future demand. 

As noted, this acquisition was consistent with the ethos of Koch Industries, which stressed thinking about investments as part of long-term strategies, in contrast to their “counterparties in the market [who] were obsessed with the near-term horizon.” This led Koch Industries to look at investments over a period measured in years or decades, an approach that allowed the company to execute very intricate investment strategies: 

If Koch thought there was going to be an oversupply of oil in the Gulf Coast region, for example, it might snap up leases on giant oil barges, knowing that when the oversupply hit, companies would be scrambling for extra storage space and willing to pay a premium for the leases that Koch bought on the cheap. This was a much safer way to execute the trade than simply shorting the price of oil—even if Koch was wrong about the supply glut, the downside was limited because Koch could still sell or use the barge leases and almost certainly break even.

Entrepreneurs, regulators, and the problem of incentives

All of these accounts and more in Kochland brilliantly demonstrate a principal salutary role of entrepreneurs in the market: discovering slack or scarce resources in the system and managing them so that they will be available when demand increases. Guaranteeing the presence of oil barges in the face of market turbulence, or making sure that nitrogen fertilizer is available when needed, is precisely the sort of result sound public policy seeks to encourage from firms in the economy. 

Government, by contrast — and despite its best intentions — is institutionally incapable of performing the same sorts of entrepreneurial activities as even very large private organizations like Koch Industries. The stories recounted in Kochland demonstrate this repeatedly. 

For example, in the oil tanker episode, Koch’s analysts relied on “huge amounts of data from outside sources” – including “publicly available data…like the federal reports that tracked the volume of crude oil being stored in the United States.” Yet, because that data was “often stale” owing to a rigid, periodic publication schedule, it lacked the specificity necessary for making precise interventions in markets. 

Koch’s analysts therefore built on that data using additional public sources, such as manifests from the Customs Service which kept track of the oil tanker traffic in US waters. Leveraging all of this publicly available data, Koch analysts were able to develop “a picture of oil shipments and flows that was granular in its specificity.”
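One can sketch that layering technique in a few lines. Everything below is hypothetical — the figures and the net-flow construction are invented for illustration, not Koch’s actual data or method. The idea is simply that a stale official stock figure can be rolled forward day by day using a higher-frequency flow signal, such as net tanker arrivals inferred from customs manifests:

```python
# Hypothetical illustration: refine a stale weekly inventory report with a
# daily net-flow estimate (arrivals minus draws). All numbers are invented.

OFFICIAL_STOCK = 1000.0                          # last published figure (day 0)
DAILY_NET_FLOW = [30, -10, 25, -40, 15, -5, 20]  # inferred from manifests

def granular_inventory(start_level, daily_flows):
    """Roll the last official figure forward using higher-frequency flows."""
    level, series = start_level, []
    for flow in daily_flows:
        level += flow
        series.append(level)
    return series

daily = granular_inventory(OFFICIAL_STOCK, DAILY_NET_FLOW)
print(daily)  # → [1030.0, 1020.0, 1045.0, 1005.0, 1020.0, 1015.0, 1035.0]
```

The resulting daily series is exactly the “granular picture” the stale weekly report could not provide on its own: each day’s estimate stays anchored to the last official figure while incorporating the fresher signal.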

Similarly, when trying to predict snowfall in the western US, and how that would affect hydroelectric power production, Koch’s analysts relied on publicly available weather data — but extended it with their own analytical insights to make it more suitable to fine-grained predictions. 

By contrast, despite decades of altering the regulatory scheme around natural gas production, transport and sales, and being highly involved in regulating all aspects of the process, the federal government could not even provide the data necessary to adequately facilitate markets. Koch’s energy analysts would therefore engage in various deals that sometimes would only break even — if it meant they could develop a better overall picture of the relevant markets: 

As was often the case at Koch, the company… was more interested in the real-time window that origination deals could provide into the natural gas markets. Just as in the early days of the crude oil markets, information about prices was both scarce and incredibly valuable. There were not yet electronic exchanges that showed a visible price of natural gas, and government data on sales were irregular and relatively slow to come. Every origination deal provided fresh and precise information about prices, supply, and demand.

In most, if not all, of the deals detailed in Kochland, government regulators had every opportunity to find the same trends in the publicly available data — or see the same deficiencies in the data and correct them. Given their access to the same data, government regulators could, in some imagined world, have developed policies to mitigate the effects of natural gas market collapses, handle upcoming power shortages, or develop a reliable supply of fertilizer to midwest farmers. But they did not. Indeed, because of the different sets of incentives they face (among other factors), in the real world, they cannot do so, despite their best intentions.

The incentive to innovate

This gets to the core problem that Hayek described concerning how best to facilitate efficient use of dispersed knowledge in such a way as to achieve the most efficient allocation and distribution of resources: 

The various ways in which the knowledge on which people base their plans is communicated to them is the crucial problem for any theory explaining the economic process, and the problem of what is the best way of utilizing knowledge initially dispersed among all the people is at least one of the main problems of economic policy—or of designing an efficient economic system.

The question of how best to utilize dispersed knowledge in society can only be answered by considering who is best positioned to gather and deploy that knowledge. There is no fundamental objection to “planning” per se, as Hayek notes. Indeed, in a complex society filled with transaction costs, there will need to be entities capable of internalizing those costs — corporations or governments — in order to make use of the latent information in the system. The question is about what set of institutions, and what set of incentives governing those institutions, results in the best use of that latent information (and the optimal allocation and distribution of resources that follows from that). 

Armen Alchian captured the different incentive structures between private firms and government agencies well: 

The extent to which various costs and effects are discerned, measured and heeded depends on the institutional system of incentive-punishment for the deciders. One system of rewards-punishment may increase the extent to which some objectives are heeded, whereas another may make other goals more influential. Thus procedures for making or controlling decisions in one rewards-incentive system are not necessarily the “best” for some other system…

In the competitive, private, open-market economy, the wealth-survival prospects are not as strong for firms (or their employees) who do not heed the market’s test of cost effectiveness as for firms who do… as a result the market’s criterion is more likely to be heeded and anticipated by business people. They have personal wealth incentives to make more thorough cost-effectiveness calculations about the products they could produce …

In the government sector, two things are less effective. (1) The full cost and value consequences of decisions do not have as direct and severe a feedback impact on government employees as on people in the private sector. The costs of actions under their consideration are incomplete simply because the consequences of ignoring parts of the full span of costs are less likely to be imposed on them… (2) The effectiveness, in the sense of benefits, of their decisions has a different reward-incentive or feedback system … it is fallacious to assume that government officials are superhumans, who act solely with the national interest in mind and are never influenced by the consequences to their own personal position.

In short, incentives matter — and are a function of the institutional arrangement of the system. Given the same set of data about a scarce set of resources, over the long run, the private sector generally has stronger incentives to manage resources efficiently than does government. As Ludwig von Mises showed, moving those decisions into political hands creates a system of political preferences that is inherently inferior in terms of the production and distribution of goods and services.

Koch Industries: A model of entrepreneurial success

The market is not perfect, but no human institution is perfect. Despite its imperfections, the market provides the best system yet devised for fairly and efficiently managing the practically unlimited demands we place on our scarce resources. 

Kochland provides a valuable insight into the virtues of the market and entrepreneurs, made all the stronger by Mr. Leonard’s implied project of “exposing” the dark underbelly of Koch Industries. The book tells the bad tales, which I’m willing to believe are largely true. I would, frankly, be shocked if any large entity — corporation or government — never ran into problems with rogue employees, internal corporate dynamics gone awry, or a failure to properly understand some facet of the market or society that led to bad investments or policy. 

The story of Koch Industries — presented even as it is through the lens of a “secret history”  — is deeply admirable. It’s the story of a firm that not only learns from its own mistakes, as all firms must do if they are to survive, but of a firm that has a drive to learn in its DNA. Koch Industries relentlessly gathers information from the market, sometimes even to the exclusion of short-term profit. It eschews complex bureaucratic structures and processes, which encourages local managers to find opportunities and nimbly respond.

Kochland is a quick read that presents a gripping account of one of America’s corporate success stories. There is, of course, a healthy amount of material in the book covering the Koch brothers’ often controversial political activities. Nonetheless, even those who hate the Koch brothers on account of politics would do well to learn from the model of entrepreneurial success that Kochland cannot help but describe in its pages. 

Last week the Senate Judiciary Committee held a hearing, Intellectual Property and the Price of Prescription Drugs: Balancing Innovation and Competition, that explored whether changes to the pharmaceutical patent process could help lower drug prices.  The committee’s goal was to evaluate various legislative proposals that might facilitate the entry of cheaper generic drugs, while also recognizing that strong patent rights for branded drugs are essential to incentivize drug innovation.  As Committee Chairman Lindsey Graham explained:

One thing you don’t want to do is kill the goose who laid the golden egg, which is pharmaceutical development. But you also don’t want to have a system that extends unnecessarily beyond the ability to get your money back and make a profit, a patent system that drives up costs for the average consumer.

Several proposals that were discussed at the hearing have the potential to encourage competition in the pharmaceutical industry and help rein in drug prices. Below, I discuss these proposals, plus a few additional reforms. I also point out some of the language in the current draft proposals that goes a bit too far and threatens the ability of drug makers to remain innovative.  

1. Prevent brand drug makers from blocking generic companies’ access to drug samples. Some brand drug makers have attempted to delay generic entry by restricting generics’ access to the drug samples necessary to conduct FDA-required bioequivalence studies.  Some brand drug manufacturers have limited the ability of pharmacies or wholesalers to sell samples to generic companies or abused the REMS (Risk Evaluation Mitigation Strategy) program to refuse samples to generics under the auspices of REMS safety requirements.  The Creating and Restoring Equal Access To Equivalent Samples (CREATES) Act of 2019 would allow potential generic competitors to bring an action in federal court for both injunctive relief and damages when brand companies block access to drug samples.  It also gives the FDA discretion to approve alternative REMS safety protocols for generic competitors that have been denied samples under the brand companies’ REMS protocol.  Although the vast majority of brand drug companies do not engage in the delay tactics addressed by CREATES, the Act would prevent the handful that do from thwarting generic competition.  Increased generic competition should, in turn, reduce drug prices.

2. Restrict abuses of FDA Citizen Petitions.  The citizen petition process was created as a way for individuals and community groups to flag legitimate concerns about drugs awaiting FDA approval.  However, critics claim that the process has been misused by some brand drug makers who file petitions about specific generic drugs in the hopes of delaying their approval and market entry.  Although the FDA has indicated that citizen petitions rarely delay the approval of generic drugs, there have been a few drug makers, such as Shire ViroPharma, that have clearly abused the process and put unnecessary strain on FDA resources. The Stop The Overuse of Petitions and Get Affordable Medicines to Enter Soon (STOP GAMES) Act is intended to prevent such abuses.  The Act reinforces the FDA’s and FTC’s ability to crack down on petitions meant to lengthen the approval process of a generic competitor, which should deter abuses of the system that can occasionally delay generic entry.  However, lawmakers should make sure that adopted legislation doesn’t limit the ability of stakeholders (including drug makers that often know more about the safety of drugs than ordinary citizens) to raise serious concerns with the FDA.

3. Curtail Anticompetitive Pay-for-Delay Settlements.  The Hatch-Waxman Act incentivizes generic companies to challenge brand drug patents by granting the first successful generic challenger a period of marketing exclusivity. Like all litigation, many of these patent challenges result in settlements instead of trials.  The FTC and some courts have concluded that these settlements can be anticompetitive when the brand companies agree to pay the generic challenger in exchange for the generic company agreeing to forestall the launch of its lower-priced drug. Settlements that result in a cash payment are a red flag for anticompetitive behavior, so pay-for-delay settlements have evolved to involve other forms of consideration instead.  As a result, the Preserve Access to Affordable Generics and Biosimilars Act aims to make an exchange of anything of value presumptively anticompetitive if the terms include a delay in research, development, manufacturing, or marketing of a generic drug. Deterring obvious pay-for-delay settlements will prevent delays to generic entry, making cheaper drugs available as quickly as possible to patients.

However, the Act’s rigid presumption that any exchange of value is anticompetitive may also prevent legitimate settlements that ultimately benefit consumers.  Brand drug makers should be allowed to compensate generic challengers to eliminate litigation risk and escape litigation expenses, and many settlements result in the generic drug coming to market before the expiration of the brand patent, and possibly earlier than if there had been prolonged litigation between the generic and brand company.  A rigid presumption of anticompetitive behavior will deter these settlements, thereby increasing expenses for all parties that choose to litigate and possibly dissuading generics from bringing patent challenges in the first place.  Indeed, the U.S. Supreme Court has declined to define these settlements as per se anticompetitive, and the FTC’s most recent agreement involving such settlements exempts several forms of exchanges of value.  Any adopted legislation should follow the FTC’s lead and recognize that some exchanges of value are pro-consumer and pro-competitive.

4. Restore the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers.  I have previously discussed how an unbalanced inter partes review (IPR) process for challenging patents threatens to stifle drug innovation.  Moreover, current law allows generic challengers to file duplicative claims in both federal court and through the IPR process.  And because IPR proceedings do not have a standing requirement, the process has been exploited by entities that would never be granted standing in traditional patent litigation—hedge funds betting against a company by filing an IPR challenge in hopes of crashing the stock and profiting from the bet. The added expense to drug makers of defending both duplicative claims and claims against challengers that are exploiting the system increases litigation costs, which may be passed on to consumers in the form of higher prices.

The Hatch-Waxman Integrity Act (HWIA) is designed to return the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers. It requires generic challengers to choose between either Hatch-Waxman litigation (which saves considerable costs by allowing generics to rely on the brand company’s safety and efficacy studies for FDA approval) or an IPR proceeding (which is faster and provides certain pro-challenger provisions). The HWIA would also eliminate the ability of hedge funds and similar entities to file IPR claims while shorting the stock.  By reducing duplicative litigation and the exploitation of the IPR process, the HWIA will reduce costs and strengthen innovation incentives for drug makers.  This will ensure that patent owners achieve clarity on the validity of their patents, which will spur new drug innovation and make sure that consumers continue to have access to life-improving drugs.

5. Curb illegal product hopping and patent thickets.  Two drug maker tactics currently garnering a lot of attention are so-called “product hopping” and “patent thickets.”  At its worst, product hopping involves brand drug makers making minor changes to a drug nearing the end of its patent term so that they get a new patent on the slightly tweaked drug, and then withdrawing the original drug from the market so that patients shift to the newly patented drug and pharmacists can’t substitute a generic version of the original drug.  Similarly, at their worst, patent thickets involve brand drug makers obtaining a web of patents on a single drug to extend the life of their exclusivity and make it too costly for other drug makers to challenge all of the patents associated with a drug.  The proposed Affordable Prescriptions for Patients Act of 2019 is meant to stop these abuses of the patent system, which would facilitate generic entry and help to lower drug prices.

However, the Act goes too far by also capturing many legitimate activities in its definitions. For example, the bill defines as anticompetitive product hopping the selling of any improved version of a drug during a window that extends to a year after the launch of the first generic competitor.  Presently, to acquire a patent and FDA approval, the improved version of the drug must be sufficiently different from and more innovative than the original drug, yet the Act would prevent the drug maker from selling such a product without satisfying a demanding three-pronged test before the FTC or a district court.  Similarly, the Act defines as anticompetitive patent thickets any new patents filed on a drug in the same general family as the original patent, and this presumption can only be rebutted by providing extensive evidence and satisfying demanding standards before the FTC or a district court.  As a result, the Act deters innovation activity that is at all related to an initial patent and, in doing so, ignores the fact that most important drug innovation is incremental innovation based on previous inventions.  Thus, the proposal should be redrafted to capture truly anticompetitive product hopping and patent thicket activity, while exempting behavior that is critical for drug innovation.

Reforms that close loopholes in the current patent process should facilitate competition in the pharmaceutical industry and help to lower drug prices.  However, lawmakers need to be sure that they don’t restrict patent rights to the extent that they deter innovation because a significant body of research predicts that patients’ health outcomes will suffer as a result.

It might surprise some readers to learn that we think the Court’s decision today in Apple v. Pepper reaches — superficially — the correct result. But, we hasten to add, the Court’s reasoning (and, for that matter, the dissent’s) is completely wrongheaded. It would be an understatement to say that the Court reached the right result for the wrong reason; in fact, the Court’s analysis wasn’t even in the same universe as the correct reasoning.

Below we lay out our assessment, in a post drawn from an article forthcoming in the Nebraska Law Review.

Did the Court forget that, just last year, it decided Amex, the most significant U.S. antitrust case in ages?

What is most remarkable about the decision (and the dissent) is that neither mentions Ohio v. Amex, nor even the two-sided market context in which the transactions at issue take place.

If the decision in Apple v. Pepper had hewed to the precedent established by Ohio v. Amex, it would have started with the observation that the relevant market analysis for the provision of app services is an integrated one, in which the overall effect of Apple’s conduct on both app users and app developers must be evaluated. A crucial implication of the Amex decision is that participants on both sides of a transactional platform are part of the same relevant market, and the terms of their relationship to the platform are inextricably intertwined.

Under this conception of the market, it’s difficult to maintain that either side does not have standing to sue the platform for the terms of its overall pricing structure, whether the specific terms at issue apply directly to that side or not. Both end users and app developers are “direct” purchasers from Apple — of different products, but in a single, inextricably interrelated market. Both groups should have standing.

More controversially, the logic of Amex also dictates that both groups should be able to establish antitrust injury — harm to competition — by showing harm to either group, as long as it establishes the requisite interrelatedness of the two sides of the market.

We believe that the Court was correct to decide in Amex that effects falling on the “other” side of a tightly integrated, two-sided market from challenged conduct must be addressed by the plaintiff in making its prima facie case. But that outcome entails a market definition that places both sides of such a market in the same relevant market for antitrust analysis.

As a result, the Court’s holding in Amex should also have required a finding in Apple v. Pepper that an app user on one side of the platform who transacts with an app developer on the other side of the market, in a transaction made possible and directly intermediated by Apple’s App Store, should similarly be deemed in the same market for standing purposes.

Relative to a strict construction of the traditional baseline, Amex’s requirement that plaintiffs address effects on both sides of the market entails imposing an additional burden on two-sided market plaintiffs, while the expanded standing that follows from Amex’s market definition entails a lessening of that burden. Whether the net effect is more or fewer successful cases in two-sided markets is unclear, of course. But from the perspective of aligning evidentiary and substantive doctrine with economic reality, such an approach would be a clear improvement.

Critics accuse the Court of making antitrust cases unwinnable against two-sided market platforms thanks to Amex’s requirement that a prima facie showing of anticompetitive effect requires assessment of the effects on both sides of a two-sided market and proof of a net anticompetitive outcome. The critics should have been chastened by a proper decision in Apple v. Pepper. As it is, the holding (although not the reasoning) still may serve to undermine their fears.

But critics should have recognized that a necessary corollary of Amex’s “expanded” market definition is that, relative to previous standing doctrine, a greater number of prospective parties should have standing to sue.

More important, the Court in Apple v. Pepper should have recognized this. Although nominally limited to the indirect purchaser doctrine, the case presented the Court with an opportunity to grapple with this logical implication of its Amex decision. It failed to do so.

On the merits, it looks like Apple should win. But, for much the same reason, the Respondents in Apple v. Pepper should have standing

This does not, of course, mean that either party should win on the merits. Indeed, on the merits of the case, the Petitioner in Apple v. Pepper appears to have the stronger argument, particularly in light of Amex, which (assuming the App Store is construed as some species of a two-sided “transaction” market) directs that Respondent has the burden of considering harms and efficiencies across both sides of the market.

At least on the basis of the limited facts as presented in the case thus far, Respondents have not remotely met their burden of proving anticompetitive effects in the relevant market.

The actual question presented in Apple v. Pepper concerns standing, not whether the plaintiffs have made out a viable case on the merits. Thus it may seem premature to consider aspects of the latter in addressing the former. But the structure of the market considered by the court should be consistent throughout its analysis.

Adjustments to standing in the context of two-sided markets must be made in concert with the nature of the substantive rule of reason analysis that will be performed in a case. The two doctrines are connected not only by the just demands for consistency, but by the error-cost framework of the overall analysis, which runs throughout the stages of an antitrust case.

Here, the two-sided markets approach in Amex properly understands that conduct by a platform has relevant effects on both sides of its interrelated two-sided market. But that stems from the actual economics of the platform; it is not merely a function of a judicial construct. It thus holds true at all stages of the analysis.

The implication for standing is that users on both sides of a two-sided platform may suffer similarly direct (or indirect) injury as a result of the platform’s conduct, regardless of the side to which that conduct is nominally addressed.

The consequence, then, of Amex’s understanding of the market is that more potential plaintiffs — specifically, plaintiffs on both sides of a two-sided market — may claim to suffer antitrust injury.

Why the myopic focus of the holding (and dissent) on Illinois Brick is improper: It’s about the market definition, stupid!

Moreover, because of the Amex understanding, the problem of analyzing the pass-through of damages at issue in Illinois Brick (with which the Court entirely occupies itself in Apple v. Pepper) is either mitigated or inevitable.

In other words, either the users on the different sides of a two-sided market suffer direct injury without pass-through under a proper definition of the relevant market, or else their interrelatedness is so strong that, complicated as it may be, the needs of substantive accuracy trump the administrative costs in sorting out the incidence of the costs, and courts cannot avoid them.

Illinois Brick’s indirect purchaser doctrine was designed for an environment in which the relationship between producers and consumers is mediated by a distributor in a direct, linear supply chain; it was not designed for platforms. Although the question presented in Apple v. Pepper is explicitly about whether the Illinois Brick “indirect purchaser” doctrine applies to the Apple App Store, that determination is contingent on the underlying product market definition (whether the product market is in fact well-specified by the parties and the court or not).

Particularly where intermediaries exist precisely to address transaction costs between “producers” and “consumers,” the platform services they provide may be central to the underlying claim in a way that the traditional direct/indirect filters — and their implied relevant markets — miss.

Further, the Illinois Brick doctrine was itself based not on the substantive necessity of cutting off liability evaluations at a particular level of distribution, but on administrability concerns. In particular, the Court was concerned with preventing duplicative recovery when there were many potential groups of plaintiffs, as well as preventing injustices that would occur if unknown groups of plaintiffs inadvertently failed to have their rights adequately adjudicated in absentia. It was also concerned with avoiding needlessly complicated damages calculations.

But, almost by definition, the tightly coupled nature of the two sides of a two-sided platform should mitigate the concerns about duplicative recovery and unknown parties. Moreover, much of the presumed complexity in damages calculations in a platform setting arises from the nature of the platform itself. Assessing and apportioning damages may be complicated, but such is the nature of complex commercial relationships — the same would be true, for example, of damages calculations between vertically integrated companies that transact simultaneously at multiple levels, or between cross-licensing patent holders/implementers. In fact, if anything, the judicial efficiency concerns in Illinois Brick point toward the increased importance of properly assessing the nature of the product or service of the platform in order to ensure that it accurately encompasses the entire relevant transaction.

Put differently, under a proper, more-accurate market definition, the “direct” and “indirect” labels don’t necessarily reflect either business or antitrust realities.

Where the Court in Apple v. Pepper really misses the boat is in its overly formalistic claim that the business model (and thus the product) underlying the complained-of conduct doesn’t matter:

[W]e fail to see why the form of the upstream arrangement between the manufacturer or supplier and the retailer should determine whether a monopolistic retailer can be sued by a downstream consumer who has purchased a good or service directly from the retailer and has paid a higher-than-competitive price because of the retailer’s unlawful monopolistic conduct.

But Amex held virtually the opposite:

Because “[l]egal presumptions that rest on formalistic distinctions rather than actual market realities are generally disfavored in antitrust law,” courts usually cannot properly apply the rule of reason without an accurate definition of the relevant market.

* * *

Price increases on one side of the platform likewise do not suggest anticompetitive effects without some evidence that they have increased the overall cost of the platform’s services. Thus, courts must include both sides of the platform—merchants and cardholders—when defining the credit-card market.

In the face of novel business conduct, novel business models, and novel economic circumstances, the degree of substantive certainty may be eroded, as may the reasonableness of the expectation that typical evidentiary burdens accurately reflect competitive harm. Modern technology — and particularly the platform business model endemic to many modern technology firms — presents a need for courts to adjust their doctrines in the face of such novel issues, even if doing so adds additional complexity to the analysis.

The unlearned market-definition lesson of the Eighth Circuit’s Campos v. Ticketmaster dissent

The Eighth Circuit’s Campos v. Ticketmaster case demonstrates the way market definition shapes the application of the indirect purchaser doctrine. Indeed, the dissent in that case looms large in the Ninth Circuit’s decision in Apple v. Pepper. [Full disclosure: One of us (Geoff) worked on the dissent in Campos v. Ticketmaster as a clerk to Eighth Circuit Judge Morris S. Arnold.]

In Ticketmaster, the plaintiffs alleged that Ticketmaster abused its monopoly in ticket distribution services to force supracompetitive charges on concert venues — a practice that led to anticompetitive prices for concert tickets. Although not prosecuted as a two-sided market, the business model is strikingly similar to the App Store model, with Ticketmaster charging fees to venues and then facilitating ticket purchases between venues and concert-goers.

As the dissent noted, however:

The monopoly product at issue in this case is ticket distribution services, not tickets.

Ticketmaster supplies the product directly to concert-goers; it does not supply it first to venue operators who in turn supply it to concert-goers. It is immaterial that Ticketmaster would not be supplying the service but for its antecedent agreement with the venues.

But it is quite relevant that the antecedent agreement was not one in which the venues bought some product from Ticketmaster in order to resell it to concert-goers.

More important, and more telling, is the fact that the entirety of the monopoly overcharge, if any, is borne by concert-goers.

In contrast to the situations described in Illinois Brick and the literature that the court cites, the venues do not pay the alleged monopoly overcharge — in fact, they receive a portion of that overcharge from Ticketmaster. (Emphasis added).

Thus, if there was a monopoly overcharge it was really borne entirely by concert-goers. As a result, apportionment — the complexity of which gives rise to the standard in Illinois Brick — was not a significant issue. And the antecedent transaction that allegedly put concert-goers in an indirect relationship with Ticketmaster is one in which Ticketmaster and concert venues divvied up the alleged monopoly spoils, not one in which the venues absorb their share of the monopoly overcharge.

The analogy to Apple v. Pepper is nearly perfect. Apple sits between developers on one side and consumers on the other, charges a fee to developers for app distribution services, and facilitates app sales between developers and users. It is possible to try to twist the market definition exercise to construe the separate contracts between developers and Apple on one hand, and the developers and consumers on the other, as some sort of complicated version of the classical manufacturing and distribution chains. But, more likely, it is advisable to actually inquire into the relevant factual differences that underpin Apple’s business model and adapt how courts consider market definition for two-sided platforms.

Indeed, Hanover Shoe and Illinois Brick were born out of a particular business reality in which businesses structured themselves in what are now classical production and distribution chains. The Supreme Court adopted the indirect purchaser rule as a prudential limitation on antitrust law in order to optimize the judicial oversight of such cases. It seems strangely nostalgic to reflexively try to fit new business methods into old legal analyses, when prudence and reality dictate otherwise.

The dissent in Ticketmaster was ahead of its time insofar as it recognized that the majority’s formal description of the ticket market was an artifact of viewing what was actually something much more like a ticket-services platform operated by Ticketmaster through the poor lens of the categories established decades earlier.

The Ticketmaster dissent’s observations demonstrate that market definition and antitrust standing are interrelated. It makes no sense to adhere to a restrictive reading of the latter if it connotes an economically improper understanding of the former. Ticketmaster provided an intermediary service — perhaps not quite a two-sided market, but something close — that stands outside a traditional manufacturing supply chain. Had it been offered by the venues themselves and bundled into the price of concert tickets there would be no question of injury and of standing (nor would market definition matter much, as both tickets and distribution services would be offered as a joint product by the same parties, in fixed proportions).

What antitrust standing doctrine should look like after Amex

There are some clear implications for antitrust doctrine that (should) follow from the preceding discussion.

At the pleading stage, a plaintiff has a choice to allege that a defendant operates either as a two-sided market or in a more traditional, linear chain. If the plaintiff alleges a two-sided market, then, to demonstrate standing, the plaintiff need only show that injury occurred to some subset of platform users with which the plaintiff is inextricably interrelated. The plaintiff would not need to demonstrate injury to him or herself, nor allege net harm, nor show directness.

In response, a defendant can contest standing by challenging the interrelatedness of the plaintiff and the group of platform users with whom the plaintiff claims interrelatedness. If the defendant does not challenge the allegation that it operates a two-sided market, it could not challenge standing by showing indirectness, by showing that the plaintiff had not alleged personal injury, or by showing that the plaintiff had not alleged a net harm.

Once past a determination of standing, however, a plaintiff who pleads a two-sided market would not be able to later withdraw this allegation in order to lessen the attendant legal burdens.

If the court accepts that the defendant is operating a two-sided market, both parties would be required to frame their allegations and defenses in accordance with the nature of the two-sided market and thus the holding in Amex. This is critical because, whereas alleging a two-sided market may make it easier for plaintiffs to demonstrate standing, Amex’s requirement that net harm be demonstrated across interrelated sets of users makes it more difficult for plaintiffs to present a viable prima facie case. Further, defendants would not be barred from presenting efficiencies defenses based on benefits that interrelated users enjoy.

Conclusion: The Court in Apple v. Pepper should have acknowledged the implications of its holding in Amex

After Amex, claims against two-sided platforms might require more evidence to establish anticompetitive harm, but that business model also opens firms up to a larger pool of potential plaintiffs. The legal principles still apply, but the relative importance of those principles to judicial outcomes shifts (or should shift) in line with the unique economic position of potential plaintiffs and defendants in a platform environment.

Whether a priori the net result is more or fewer cases and more or fewer victories for plaintiffs is not the issue; what matters is matching the legal and economic theory to the relevant facts in play. Moreover, decrying Amex as the end of antitrust was premature: the actual effect on injured parties can’t be known until other changes (like standing for a greater number of plaintiffs) are factored into the analysis. The Court’s holding in Apple v. Pepper sidesteps this issue entirely, and thus fails to properly move antitrust doctrine forward in line with its holding in Amex.

Of course, it’s entirely possible that platforms and courts might be inundated with expensive and difficult-to-manage lawsuits. There may be reasons of administrability for limiting standing (as Illinois Brick perhaps prematurely did for fear of the costs of courts’ managing suits). But then that should have been the focus of the Court’s decision.

Allowing standing in Apple v. Pepper permits exactly the kind of legal experimentation needed to enable the evolution of antitrust doctrine along with new business realities. But in some ways the Court reached the worst possible outcome. It announced a rule that permits more plaintiffs to establish standing, but it did not direct lower courts to assess standing within the proper analytical frame. Instead, it just expands standing in a manner unmoored from the economic — and, indeed, judicial — context. That’s not a recipe for the successful evolution of antitrust doctrine.

(The following is adapted from a recent ICLE Issue Brief on the flawed essential facilities arguments undergirding the EU competition investigations into Amazon’s marketplace that I wrote with Geoffrey Manne. The full brief is available here.)

Amazon has largely avoided the crosshairs of antitrust enforcers to date. The reasons seem obvious: in the US it handles a mere 5% of all retail sales (with lower shares worldwide), and it consistently provides access to a wide array of affordable goods. Yet, even with Amazon’s obvious lack of dominance in the general retail market, the EU and some of its member states are opening investigations.

Commissioner Margrethe Vestager’s probe into Amazon, which came to light in September, centers on whether Amazon is illegally using its dominant position vis-à-vis third-party merchants on its platforms in order to obtain data that it then uses either to promote its own direct sales, or else to develop competing products under its private label brands. More recently, Austria and Germany have launched separate investigations of Amazon rooted in many of the same concerns as those of the European Commission. The German investigation also focuses on whether the contractual relationships that third-party sellers enter into with Amazon are unfair because these sellers are “dependent” on the platform.

One of the fundamental, erroneous assumptions upon which these cases are built is the alleged “essentiality” of the underlying platform or input. In truth, these sorts of cases are more often based on stories of firms that chose to build their businesses in a way that relies on a specific platform. In other words, their own decisions — from which they substantially benefited, of course — made their investments highly “asset specific” and thus vulnerable to otherwise avoidable risks. When a platform on which these businesses rely makes a disruptive move, the third parties cry foul, even though the platform was not — nor should have been — under any obligation to preserve the status quo on behalf of third parties.

Essential or not, that is the question

All three investigations are effectively premised on a version of an “essential facilities” theory — the claim that Amazon is essential to these companies’ ability to do business.

There are good reasons that the US has tightly circumscribed the scope of permissible claims invoking the essential facilities doctrine. Such “duty to deal” claims are “at or near the outer boundary” of US antitrust law. And there are good reasons why the EU and its member states should be similarly skeptical.

Characterizing one firm as essential to the operation of other firms is tricky because “[c]ompelling [innovative] firms to share the source of their advantage… may lessen the incentive for the monopolist, the rival, or both to invest in those economically beneficial facilities.” Further, the classification requires “courts to act as central planners, identifying the proper price, quantity, and other terms of dealing—a role for which they are ill-suited.”

The key difficulty is that alleged “essentiality” actually falls on a spectrum. On one end is something like a true monopoly utility that is actually essential to all firms that use its service as a necessary input; on the other is a firm that offers highly convenient services that make it much easier for firms to operate. This latter definition of “essentiality” describes firms like Google and Amazon, but it is not accurate to characterize such highly efficient and effective firms as truly “essential.” Instead, companies that choose to take advantage of the benefits such platforms offer, and to tailor their business models around them, suffer from an asset specificity problem.

Geoffrey Manne noted this problem in the context of the EU’s Google Shopping case:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control.

Third-party sellers that rely upon Amazon without a contingency plan are taking a calculated risk that, as business owners, they would typically be expected to manage. The investigations by European authorities are based on the notion that antitrust law might require Amazon to remove that risk by prohibiting it from undertaking certain conduct that might raise costs for its third-party sellers.

Implications and extensions

In the full issue brief, we consider the tensions in EU law between seeking to promote innovation and protect the competitive process, on the one hand, and the propensity of EU enforcers to rely on essential facilities-style arguments on the other. One of the fundamental errors that leads EU enforcers in this direction is that they confuse the distribution channel of the Internet with an antitrust-relevant market definition.

A claim based on some flavor of Amazon-as-essential-facility should be untenable given today’s market realities because Amazon is, in fact, just one mode of distribution among many. Commerce on the Internet is still just commerce. The only thing preventing a merchant from operating a viable business using any of a number of different mechanisms is the transaction costs it would incur adjusting to a different mode of doing business. Casting Amazon’s marketplace as an essential facility insulates third-party firms from the consequences of their own decisions — from business model selection to marketing and distribution choices. Commerce is nothing new and offline distribution channels and retail outlets — which compete perfectly capably with online — are well developed. Granting retailers access to Amazon’s platform on artificially favorable terms is no more justifiable than granting them access to a supermarket end cap, or a particular unit at a shopping mall. There is, in other words, no business or economic justification for granting retailers in the time-tested and massive retail market an entitlement to use a particular mode of marketing and distribution just because they find it more convenient.

The German Bundeskartellamt’s Facebook decision is unsound from either a competition or privacy policy perspective, and will only make the fraught privacy/antitrust relationship worse.


Drug makers recently announced their 2019 price increases on over 250 prescription drugs. As examples, AbbVie Inc. increased the price of the world’s top-selling drug Humira by 6.2 percent, and Hikma Pharmaceuticals increased the price of blood-pressure medication Enalaprilat by more than 30 percent. Allergan reported an average increase across its portfolio of drugs of 3.5 percent; although the drug maker is keeping most of its prices the same, it raised the prices on 27 drugs by 9.5 percent and on another 24 drugs by 4.9 percent. Other large drug makers, such as Novartis and Pfizer, will announce increases later this month.

So far, the number of price increases is significantly lower than last year, when drug makers increased prices on more than 400 drugs. Moreover, for the drugs whose prices did increase, the average increase of 6.3 percent is only about half the average increase for drugs in 2018. Nevertheless, some commentators have expressed indignation, and President Trump this week summoned advisors to the White House to discuss the increases. However, commentators and the administration should keep in mind what the price increases actually mean and the numerous players that are responsible for increasing drug prices.

First, it is critical to emphasize the difference between drug list prices and net prices.  The drug makers recently announced increases in the list, or “sticker” prices, for many drugs.  However, the list price is usually very different from the net price that most consumers and/or their health plans actually pay, which depends on negotiated discounts and rebates.  For example, whereas drug list prices increased by an average of 6.9 percent in 2017, net drug prices after discounts and rebates increased by only 1.9 percent. The differential between the growth in list prices and net prices has persisted for years.  In 2016 list prices increased by 9 percent but net prices increased by 3.2 percent; in 2015 list prices increased by 11.9 percent but net prices increased by 2.4 percent, and in 2014 list price increases peaked at 13.5 percent but net prices increased by only 4.3 percent.

For 2019, the list price increases for many drugs will actually translate into very small increases in the net prices that consumers actually pay.  In fact, drug maker Allergan has indicated that, despite its increase in list prices, the net prices that patients actually pay will remain about the same as last year.

One might wonder why drug makers would bother to increase list prices if there’s little to no change in net prices.  First, at least 40 percent of the American prescription drug market is subject to some form of federal price control.  As I’ve previously explained, because these federal price controls generally require percentage rebates off of average drug prices, drug makers have the incentive to set list prices higher in order to offset the mandated discounts that determine what patients pay.

Further, as I discuss in a recent Article, the rebate arrangements between drug makers and pharmacy benefit managers (PBMs) under many commercial health plans create strong incentives for drug makers to increase list prices. PBMs negotiate rebates from drug manufacturers in exchange for giving the manufacturers’ drugs preferred status on a health plan’s formulary.  However, because the rebates paid to PBMs are typically a percentage of a drug’s list price, drug makers are compelled to increase list prices in order to satisfy PBMs’ demands for higher rebates. Drug makers assert that they are pressured to increase drug list prices out of fear that, if they do not, PBMs will retaliate by dropping their drugs from the formularies. The value of rebates paid to PBMs has doubled since 2012, with drug makers now paying $150 billion annually.  These rebates have grown so large that, today, the drug makers that actually invest in drug innovation and bear the risk of drug failures receive only 39 percent of the total spending on drugs, while 42 percent of the spending goes to these pharmaceutical middlemen.

Although a portion of the increasing rebate dollars may eventually find its way to patients in the form of lower co-pays, many patients still suffer from the list price increases. The 29 million Americans without drug plan coverage pay more for their medications when list prices increase. Even patients with insurance typically have cost-sharing obligations that require them to pay 30 to 40 percent of list prices. Moreover, insured patients within the deductible phase of their drug plan pay the entire higher list price until they meet their deductible. Higher list prices jeopardize patients’ health as well as their finances; as out-of-pocket costs for drugs increase, patients are less likely to adhere to their medication routine and more likely to abandon their drug regimen altogether.
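The rebate mechanics described above can be illustrated with a simple arithmetic sketch. The numbers below are entirely hypothetical, chosen only to show why a list price increase paired with a larger percentage rebate can leave the manufacturer’s net price roughly flat while still raising the bill for a patient whose cost-sharing is pegged to the list price:

```python
# Hypothetical illustration of list-price vs. net-price mechanics.
# All dollar amounts and percentages are invented for the example.

def net_price(list_price: float, rebate_pct: float) -> float:
    """Net price received by the manufacturer after the PBM-negotiated rebate."""
    return list_price * (1 - rebate_pct)

def patient_share(list_price: float, coinsurance_pct: float) -> float:
    """Patient cost-sharing is often a percentage of the LIST price, not the net price."""
    return list_price * coinsurance_pct

# Year 1: $100 list price, 30% rebate to the PBM, 30% patient coinsurance.
year1_net = net_price(100.0, 0.30)        # $70.00 net to the manufacturer
year1_patient = patient_share(100.0, 0.30)  # $30.00 patient share

# Year 2: the PBM demands a 36% rebate; the maker raises the list price
# to $110 so that its net revenue stays roughly unchanged.
year2_net = net_price(110.0, 0.36)        # $70.40 -- nearly flat
year2_patient = patient_share(110.0, 0.30)  # $33.00 -- the patient pays more

print(f"Net price: ${year1_net:.2f} -> ${year2_net:.2f}")
print(f"Patient share: ${year1_patient:.2f} -> ${year2_patient:.2f}")
```

As the sketch shows, the manufacturer’s net revenue barely moves, the PBM’s rebate dollars grow, and the uninsured or coinsured patient bears the list price increase.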

Policymakers must realize that the current system of government price controls and distortive rebates creates perverse incentives for drug makers to continue increasing drug list prices. Pointing the finger at drug makers alone misrepresents the problem at hand.