The dystopian novel is a powerful literary genre. It has given us such masterpieces as Nineteen Eighty-Four, Brave New World, and Fahrenheit 451. Though these novels often shed light on the risks of contemporary society and the zeitgeist of the era in which they were written, they also almost always systematically overshoot the mark (intentionally or not) and severely underestimate the radical improvements that stem from the technologies (or other causes) that they fear.

But dystopias are not just a literary phenomenon; they are also a powerful force in policy circles. This is epitomized by influential publications such as the Club of Rome’s 1972 report The Limits to Growth, whose dire predictions of Malthusian catastrophe have largely failed to materialize.

In an article recently published in the George Mason Law Review, we argue that contemporary antitrust scholarship and commentary is similarly afflicted by dystopian thinking. In that respect, today’s antitrust pessimists have set their sights predominantly on the digital economy—“Big Tech” and “Big Data”—alleging, in the process, a vast array of potential harms.

Scholars have notably argued that the data created and employed by the digital economy produces network effects that inevitably lead to tipping and to more concentrated markets (e.g., here and here). In other words, firms will allegedly accumulate insurmountable data advantages and thus thwart competitors for extended periods of time.

Some have gone so far as to argue that this threatens the very fabric of western democracy. For instance, parallels between the novel Nineteen Eighty-Four and the power of large digital platforms were plain to see when Epic Games launched an antitrust suit against Apple and its App Store in August 2020. The gaming company released a short video clip parodying Apple’s famous “1984” ad (which, upon its release, was itself widely seen as a critique of the tech incumbents of the time). Similarly, a piece in the New Statesman—titled “Slouching Towards Dystopia: The Rise of Surveillance Capitalism and the Death of Privacy”—concluded that:

Our lives and behaviour have been turned into profit for the Big Tech giants—and we meekly click ‘Accept.’ How did we sleepwalk into a world without privacy?

In our article, we argue that these fears are symptomatic of two different but complementary phenomena, which we refer to as “Antitrust Dystopia” and “Antitrust Nostalgia.”

Antitrust Dystopia is the pessimistic tendency among competition scholars and enforcers to assert that novel business conduct will cause technological advances to have unprecedented, anticompetitive consequences. This is almost always grounded in the belief that “this time is different”—that, despite the benign or positive consequences of previous, similar technological advances, this time those advances will have dire, adverse consequences absent enforcement to stave off abuse.

Antitrust Nostalgia is the biased assumption—often built into antitrust doctrine itself—that change is bad. Antitrust Nostalgia holds that, because a business practice has seemingly benefited competition before, changing it will harm competition going forward. Thus, antitrust enforcement is often skeptical of, and triggered by, various deviations from status quo conduct and relationships (i.e., “nonstandard” business arrangements) when change is, to a first approximation, the hallmark of competition itself.

Our article argues that these two worldviews are premised on particularly questionable assumptions about the way competition unfolds, in this case, in data-intensive markets.

The Case of Big Data Competition

The notion that digital markets are inherently more problematic than their brick-and-mortar counterparts—if there even is a meaningful distinction—is advanced routinely by policymakers, journalists, and other observers. The fear is that, left to their own devices, today’s dominant digital platforms will become all-powerful, protected by an impregnable “data barrier to entry.” Against this alarmist backdrop, nostalgic antitrust scholars have argued for aggressive antitrust intervention against the nonstandard business models and contractual arrangements that characterize these markets.

But as our paper demonstrates, a proper assessment of the attributes of data-intensive digital markets does not support either the dire claims or the proposed interventions.

1. Data is information

One of the most salient features of the data created and consumed by online firms is that, jargon aside, it is just information. As with other types of information, it thus tends to have at least some traits usually associated with public goods (i.e., goods that are non-rivalrous in consumption and not readily excludable). As the National Bureau of Economic Research’s Catherine Tucker argues, data “has near-zero marginal cost of production and distribution even over long distances,” making it very difficult to exclude others from accessing it. Meanwhile, multiple economic agents can simultaneously use the same data, making it non-rivalrous in consumption.

As we explain in our paper, these features make the nature of modern data almost irreconcilable with the alleged hoarding and dominance that critics routinely associate with the tech industry.

2. Data is not scarce; expertise is

Another important feature of data is that it is ubiquitous. The predominant challenge for firms is not so much in obtaining data but, rather, in drawing useful insights from it. This has two important implications for antitrust policy.

First, although data does not have the self-reinforcing characteristics of network effects, there is a sense that acquiring a certain amount of data and expertise is necessary to compete in data-heavy industries. It is (or should be) equally apparent, however, that this “learning by doing” advantage rapidly reaches a point of diminishing returns.

This is supported by significant empirical evidence. As our survey of the empirical literature shows, data generally entails diminishing marginal returns.
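To see the intuition, consider a stylized illustration of our own (not a result drawn from any particular study). Learning curves in this literature are often approximated by a power law, under which prediction error falls with the number of observations $n$:

$$E(n) = a\,n^{-b}, \qquad a > 0,\ 0 < b < 1.$$

The marginal gain from one additional observation, $-dE/dn = a\,b\,n^{-(b+1)}$, shrinks even faster than the error itself: adding ten thousand observations to a dataset of one thousand improves performance far more than adding the same ten thousand to a dataset of one million. Under this functional form, incremental data confers ever-smaller advantages on data-rich incumbents.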

Second, it is firms’ capabilities, rather than the data they own, that lead to success in the marketplace. Critics who argue that firms such as Amazon, Google, and Facebook are successful because of their superior access to data might, in fact, have the causality in reverse. Arguably, it is because these firms have come up with successful industry-defining paradigms that they have amassed so much data, and not the other way around.

This dynamic can be seen at play in the early days of the search-engine market. In 2013, The Atlantic ran a piece titled “What the Web Looked Like Before Google.” By comparing the websites of Google and its rivals in 1998 (when Google Search was launched), the article shows how the current champion of search marked a radical departure from the status quo.

Even if it stumbled upon it by chance, Google immediately identified a winning formula for the search-engine market. It ditched the complicated classification schemes favored by its rivals and opted, instead, for a clean page with a single search box. This ensured that users could access the information they desired in the shortest possible amount of time—thanks, in part, to Google’s PageRank algorithm.

It is hardly surprising that Google’s rivals struggled to keep up with this shift in the search-engine industry. The theory of dynamic capabilities tells us that firms that have achieved success by indexing the web will struggle when the market rapidly moves toward a new paradigm (in this case, Google’s single search box and ten blue links). During the time it took these rivals to identify their weaknesses and repurpose their assets, Google kept on making successful decisions: notably, the introduction of Gmail, its acquisitions of YouTube and Android, and the introduction of Google Maps, among others.

Seen from this evolutionary perspective, Google thrived because its capabilities were perfect for the market at that time, while rivals were ill-adapted.

3. Data as a byproduct of, and path to, platform monetization

Policymakers should also bear in mind that platforms often must go to great lengths in order to create data about their users—data that these same users often do not know about themselves. Under this framing, data is a byproduct of firms’ activity, rather than an input necessary for rivals to launch a business.

This is especially clear when one looks at the formative years of numerous online platforms. Most of the time, these businesses were started by entrepreneurs who did not own much data but, instead, had a brilliant idea for a service that consumers would value. Even if data ultimately played a role in the monetization of these platforms, it does not appear that it was necessary for their creation.

Data often becomes significant only at a relatively late stage in these businesses’ development. A quick glance at the digital economy is particularly revealing in this regard. Google and Facebook, in particular, both launched their platforms under the assumption that building a successful product would eventually lead to significant revenues.

It took five years from its launch for Facebook to start making a profit. Even at that point, when the platform had 300 million users, it still was not entirely clear whether it would generate most of its income from app sales or online advertisements. It was another three years before Facebook started to cement its position as one of the world’s leading providers of online ads. During this eight-year timespan, Facebook prioritized user growth over the monetization of its platform. The company appears to have concluded (correctly, it turns out) that once its platform attracted enough users, it would surely find a way to make itself highly profitable.

This might explain how Facebook managed to build a highly successful platform despite a large data disadvantage when compared to rivals like MySpace. And Facebook is no outlier. The list of companies that prevailed despite starting with little to no data (and initially lacking a data-dependent monetization strategy) is lengthy. Other examples include TikTok, Airbnb, Amazon, Twitter, PayPal, Snapchat, and Uber.

Those who complain about the unassailable competitive advantages enjoyed by companies with troves of data have it exactly backward. Companies need to innovate to attract consumer data or else consumers will switch to competitors, including both new entrants and established incumbents. As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results. The continued explosion of new products, services, and apps is evidence that data is not a bottleneck to competition, but a spur to drive it.

We’ve Been Here Before: The Microsoft Antitrust Saga

Dystopian and nostalgic discussions concerning the power of successful technology firms are nothing new. Throughout recent history, there have been repeated calls for antitrust authorities to rein in these large companies. These calls for regulation have often led to increased antitrust scrutiny of some form. The Microsoft antitrust cases—which ran from the 1990s to the early 2010s on both sides of the Atlantic—offer a good illustration of the misguided “Antitrust Dystopia.”

In the mid-1990s, Microsoft was one of the most successful and vilified companies in America. After it obtained a commanding position in the desktop operating system market, the company sought to establish a foothold in the burgeoning markets that were developing around the Windows platform (many of which were driven by the emergence of the Internet). These included the Internet browser and media-player markets.

The business tactics employed by Microsoft to execute this transition quickly drew the ire of the press and rival firms, ultimately landing Microsoft in hot water with antitrust authorities on both sides of the Atlantic.

However, as we show in our article, though there were numerous calls for authorities to adopt a precautionary principle-type approach to dealing with Microsoft—and antitrust enforcers were more than receptive to these calls—critics’ worst fears never came to be.

This positive outcome is unlikely to be the result of the antitrust cases that were brought against Microsoft. In other words, the markets in which Microsoft operated seem to have self-corrected (or were more competitively constrained than critics apprehended) and, today, are generally seen as being unproblematic.

This is not to say that antitrust interventions against Microsoft were necessarily misguided. Instead, our critical point is that commentators and antitrust decisionmakers routinely overlooked or misinterpreted the existing and nonstandard market dynamics that ultimately prevented the worst anticompetitive outcomes from materializing. This is supported by several key factors.

First, the remedies that were imposed against Microsoft by antitrust authorities on both sides of the Atlantic were ultimately quite weak. It is thus unlikely that these remedies, by themselves, prevented Microsoft from dominating its competitors in adjacent markets.

Note that, if this assertion is wrong, and antitrust enforcement did indeed prevent Microsoft from dominating online markets, then there is arguably no need to reform the antitrust laws on either side of the Atlantic, nor even to adopt a particularly aggressive enforcement position. The remedies that were imposed on Microsoft were relatively localized. Accordingly, if antitrust enforcement did indeed prevent Microsoft from dominating other online markets, then it is antitrust enforcement’s deterrent effect that is to thank, and not the remedies actually imposed.

Second, Microsoft lost its bottleneck position. One of the biggest changes that took place in the digital space was the emergence of alternative platforms through which consumers could access the Internet. Indeed, as recently as January 2009, roughly 94% of all Internet traffic came from Windows-based computers. Just over a decade later, this number has fallen to about 31%. Android, iOS, and OS X have shares of roughly 41%, 16%, and 7%, respectively. Consumers can thus access the web via numerous platforms. The emergence of these alternatives reduced the extent to which Microsoft could use its bottleneck position to force its services on consumers in online markets.

Third, it is possible that Microsoft’s own behavior ultimately sowed the seeds of its relative demise. In particular, the alleged barriers to entry (rooted in nostalgic market definitions and skeptical analysis of “ununderstandable” conduct) that were essential to establishing the antitrust case against the company may have been pathways to entry as much as barriers.

Consider this error in the Microsoft court’s analysis of entry barriers: the court pointed out that new entrants faced a barrier that Microsoft didn’t face, in that Microsoft didn’t have to contend with a powerful incumbent impeding its entry by tying up application developers.

But while this may be true, Microsoft did face the absence of any developers at all, and had to essentially create (or encourage the creation of) businesses that didn’t previously exist. Microsoft thus created a huge positive externality for new entrants: existing knowledge and organizations devoted to software development, industry knowledge, reputation, awareness, and incentives for schools to offer courses. It could well be that new entrants, in fact, faced lower barriers with respect to app developers than did Microsoft when it entered.

In short, new entrants may face even more welcoming environments because of incumbents. This enabled Microsoft’s rivals to thrive.

Conclusion

Dystopian antitrust prophecies are generally doomed to fail, just like those belonging to the literary world. The reason is simple. While it is easy to identify what makes dominant firms successful in the present (i.e., what enables them to hold off competitors in the short term), it is almost impossible to conceive of the myriad ways in which the market could adapt. Indeed, it is today’s supra-competitive profits that spur the efforts of competitors.

Surmising that the economy will come to be dominated by a small number of successful firms is thus the same as believing that all market participants can be outsmarted by a few successful ones. This might occur in some cases or for some period of time, but as our article argues, it is bound to happen far less often than pessimists fear.

In short, dystopian scholars have not successfully made the case for precautionary antitrust. Indeed, the economic features of data make it highly unlikely that today’s tech giants could anticompetitively maintain their advantage for an indefinite amount of time, much less leverage this advantage in adjacent markets.

With this in mind, there is one dystopian novel that offers a fitting metaphor to end this Article. The Man in the High Castle tells the story of an alternate present, where Axis forces triumphed over the Allies during the Second World War. This turns the dystopian genre on its head: rather than argue that the world is inevitably sliding towards a dark future, The Man in the High Castle posits that the present could be far worse than it is.

In other words, we should not take any of the luxuries we currently enjoy for granted. In the world of antitrust, critics routinely overlook that the emergence of today’s tech industry might have occurred thanks to, and not in spite of, existing antitrust doctrine. Changes to existing antitrust law should thus be dictated by a rigorous assessment of the various costs and benefits they would entail, rather than a litany of hypothetical concerns. The most recent wave of calls for antitrust reform has so far failed to clear this low bar.

Politico has released a cache of confidential Federal Trade Commission (FTC) documents in connection with a series of articles on the commission’s antitrust probe into Google Search a decade ago. The headline of the first piece in the series argues the FTC “fumbled the future” by failing to follow through on staff recommendations to pursue antitrust intervention against the company. 

But while the leaked documents shed interesting light on the inner workings of the FTC, they do very little to substantiate the case that the FTC dropped the ball when the commissioners voted unanimously not to bring an action against Google.

Drawn primarily from memos by the FTC’s lawyers, the Politico report purports to uncover key revelations that undermine the FTC’s decision not to sue Google. None of the revelations, however, provide evidence that Google’s behavior actually harmed consumers.

The report’s overriding claim—and the one most consistently forwarded by antitrust activists on Twitter—is that FTC commissioners wrongly sided with the agency’s economists (who cautioned against intervention) rather than its lawyers (who tenuously recommended very limited intervention). 

Indeed, the overarching narrative is that the lawyers knew what was coming and the economists took wildly inaccurate positions that turned out to be completely off the mark:

But the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed:

— They saw only “limited potential for growth” in ads that track users across the web — now the backbone of Google parent company Alphabet’s $182.5 billion in annual revenue.

— They expected consumers to continue relying mainly on computers to search for information. Today, about 62 percent of those queries take place on mobile phones and tablets, nearly all of which use Google’s search engine as the default.

— They thought rivals like Microsoft, Mozilla or Amazon would offer viable competition to Google in the market for the software that runs smartphones. Instead, nearly all U.S. smartphones run on Google’s Android and Apple’s iOS.

— They underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic.

The report thus asserts that:

The agency ultimately voted against taking action, saying changes Google made to its search algorithm gave consumers better results and therefore didn’t unfairly harm competitors.

That conclusion underplays what the FTC’s staff found during the probe. In 312 pages of documents, the vast majority never publicly released, staffers outlined evidence that Google had taken numerous steps to ensure it would continue to dominate the market — including emerging arenas such as mobile search and targeted advertising. [EMPHASIS ADDED]

What really emerges from the leaked memos, however, is analysis by both the FTC’s lawyers and economists infused with a healthy dose of humility. There were strong political incentives to bring a case. As one of us noted upon the FTC’s closing of the investigation: “It’s hard to imagine an agency under more pressure, from more quarters (including the Hill), to bring a case around search.” Yet FTC staff and commissioners resisted that pressure, because prediction is hard. 

Ironically, the very prediction errors that the agency’s staff cautioned against are now being held against them. Yet the claims that these errors (especially the economists’) systematically cut in one direction (i.e., against enforcement) and that all of their predictions were wrong are both wide of the mark. 

Decisions Under Uncertainty

In seeking to make an example out of the FTC economists’ inaccurate predictions, critics ignore that antitrust investigations in dynamic markets always involve a tremendous amount of uncertainty; false predictions are the norm. Accordingly, the key challenge for policymakers is not so much to predict correctly, but to minimize the impact of incorrect predictions.
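One way to make this concrete is the error-cost framework associated with Frank Easterbrook, whose work we discuss further below. In stylized form (the notation is ours): let $p$ be the probability that the conduct under investigation is truly anticompetitive, $C_{FN}$ the consumer harm from letting harmful conduct stand (a false negative), and $C_{FP}$ the harm from condemning conduct that is actually benign or procompetitive (a false positive). Intervention reduces expected error costs only if

$$p \cdot C_{FN} > (1 - p) \cdot C_{FP}.$$

Dynamic markets weigh on both sides of this inequality: uncertainty pushes $p$ toward a coin flip, and, as Easterbrook observed, entry and innovation tend to erode false negatives over time, while judicial precedent locks false positives in place.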

Seen in this light, the FTC economists’ memo is far from the laissez-faire manifesto that critics make it out to be. Instead, it shows agency officials wrestling with uncertain market outcomes, and choosing a course of action under the assumption the predictions they make might indeed be wrong. 

Consider the following passage from FTC economist Ken Heyer’s memo:

The great American philosopher Yogi Berra once famously remarked “Predicting is difficult, especially about the future.” How right he was. And yet predicting, and making decisions based on those predictions, is what we are charged with doing. Ignoring the potential problem is not an option. So I will be reasonably clear about my own tentative conclusions and recommendation, recognizing that reasonable people, perhaps applying a somewhat different standard, may disagree. My recommendation derives from my read of the available evidence, combined with the standard I personally find appropriate to apply to Commission intervention. [EMPHASIS ADDED]

In other words, contrary to what many critics have claimed, it simply is not the case that the FTC’s economists based their recommendations on bullish predictions about the future that ultimately failed to transpire. Instead, they merely recognized that, in a dynamic and unpredictable environment, antitrust intervention requires both a clear-cut theory of anticompetitive harm and a reasonable probability that remedies can improve consumer welfare. According to the economists, those conditions were absent with respect to Google Search.

Perhaps more importantly, it is worth asking why the economists’ erroneous predictions matter at all. Do critics believe that developments the economists missed warrant a different normative stance today?

In that respect, it is worth noting that the economists’ skepticism appeared to have rested first and foremost on the speculative nature of the harms alleged and the difficulty associated with designing appropriate remedies. And yet, if anything, these two concerns appear even more salient today. 

Indeed, the remedies imposed against Google in the EU have not delivered the outcomes that enforcers expected (here and here). This could either be because the remedies were insufficient or because Google’s market position was not due to anticompetitive conduct. Similarly, there is still no convincing economic theory or empirical research to support the notion that exclusive pre-installation and self-preferencing by incumbents harm consumers, and a great deal of reason to think they benefit them (see, e.g., our discussions of the issue here and here). 

Against this backdrop, criticism of the FTC economists appears to be driven more by a prior assumption that intervention is necessary—and that it was and is disingenuous to think otherwise—than evidence that erroneous predictions materially affected the outcome of the proceedings.

To take one example, the fact that ad tracking grew faster than the FTC economists believed it would is no less consistent with vigorous competition—and Google providing a superior product—than with anticompetitive conduct on Google’s part. The same applies to the growth of mobile operating systems. Ditto the fact that no rival has managed to dislodge Google in its most important markets. 

In short, not only were the economist memos informed by the very prediction difficulties that critics are now pointing to, but critics have not shown that any of the staff’s (inevitably) faulty predictions warranted a different normative outcome.

Putting Erroneous Predictions in Context

So what were these faulty predictions, and how important were they? Politico asserts that “the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed,” tying this to the FTC’s failure to intervene against Google over “tactics that European regulators and the U.S. Justice Department would later label antitrust violations.” The clear message is that the current actions are presumptively valid, and that the FTC’s economists thwarted earlier intervention based on faulty analysis.

But it is far from clear that these faulty predictions would have justified taking a tougher stance against Google. One key question for antitrust authorities is whether they can be reasonably certain that more efficient competitors will be unable to dislodge an incumbent. This assessment is necessarily forward-looking. Framed this way, greater market uncertainty (for instance, because policymakers are dealing with dynamic markets) usually cuts against antitrust intervention.

This does not entirely absolve the FTC economists who made the faulty predictions. But it does suggest the right question is not whether the economists made mistakes, but whether virtually everyone did so. The latter would be evidence of uncertainty, and thus weigh against antitrust intervention.

In that respect, it is worth noting that the staff who recommended that the FTC intervene also misjudged the future of digital markets. For example, while Politico surmises that the FTC “underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic,” there is a case to be made that the FTC overestimated this power. If anything, Google’s continued growth has opened new niches in the online advertising space.

Pinterest provides a fitting example; despite relying heavily on Google for traffic, its ad-funded service has witnessed significant growth. The same is true of other vertical search engines like Airbnb, Booking.com, and Zillow. While we cannot know the counterfactual, the vertical search industry has certainly not been decimated by Google’s “monopoly”; quite the opposite. Unsurprisingly, this has coincided with a significant decrease in the cost of online advertising, and the growth of online advertising relative to other forms.

Politico asserts not only that the economists’ market share and market power calculations were wrong, but that the lawyers knew better:

The economists, relying on data from the market analytics firm Comscore, found that Google had only limited impact. They estimated that between 10 and 20 percent of traffic to those types of sites generally came from the search engine.

FTC attorneys, though, used numbers provided by Yelp and found that 92 percent of users visited local review sites from Google. For shopping sites like eBay and TheFind, the referral rate from Google was between 67 and 73 percent.

This compares apples and oranges, or maybe oranges and grapefruit. The economists’ data, from Comscore, applied to vertical search overall. They explicitly noted that shares for particular sites could be much higher or lower: for comparison shopping, for example, “ranging from 56% to less than 10%.” This, of course, highlights a problem with the data provided by Yelp, et al.: it concerns only the websites of companies complaining about Google, not the overall flow of traffic for vertical search.

But the more important point is that none of the data discussed in the memos represents the overall flow of traffic for vertical search. Take Yelp, for example. According to the lawyers’ memo, 92 percent of Yelp searches were referred from Google. Only, that’s not true. We know it’s not true because, as Yelp CEO Jeremy Stoppelman pointed out around this time in Yelp’s 2012 Q2 earnings call:

When you consider that 40% of our searches come from mobile apps, there is quite a bit of un-monetized mobile traffic that we expect to unlock in the near future.

The numbers being analyzed by the FTC staff were apparently limited to referrals to Yelp’s website from browsers. But is there any reason to think that is the relevant market, or the relevant measure of customer access? Certainly there is nothing in the staff memos to suggest they considered the full scope of the market very carefully here. Indeed, the footnote in the lawyers’ memo presenting the traffic data is offered in support of this claim:

Vertical websites, such as comparison shopping and local websites, are heavily dependent on Google’s web search results to reach users. Thus, Google is in the unique position of being able to “make or break any web-based business.”

It’s plausible that vertical search traffic is “heavily dependent” on Google Search, but the numbers offered in support of that simply ignore the (then) 40 percent of traffic that Yelp acquired through its own mobile app, with no Google involvement at all. In any case, it is also notable that, while there are still somewhat fewer app users than web users (although the number has consistently increased), Yelp’s app users view significantly more pages than its website users do — 10 times as many in 2015, for example.

Also noteworthy is that, for whatever speculative harm Google might be able to visit on the company, at the time of the FTC’s analysis Yelp’s local ad revenue was consistently increasing — by 89% in Q3 2012. And that was without any ad revenue coming from its app (display ads arrived on Yelp’s mobile app in Q1 2013, a few months after the staff memos were written and just after the FTC closed its Google Search investigation). 

In short, the search-engine industry is extremely dynamic and unpredictable. Contrary to what many have surmised from the FTC staff memo leaks, this cuts against antitrust intervention, not in favor of it.

The FTC Lawyers’ Weak Case for Prosecuting Google

At the same time, although not discussed by Politico, the lawyers’ memo also contains errors, suggesting that arguments for intervention were also (inevitably) subject to erroneous prediction.

Among other things, the FTC attorneys’ memo argued that large upfront investments were required to develop cutting-edge algorithms, and that these effectively shielded Google from competition. The memo cites the following as a barrier to entry:

A search engine requires algorithmic technology that enables it to search the Internet, retrieve and organize information, index billions of regularly changing web pages, and return relevant results instantaneously that satisfy the consumer’s inquiry. Developing such algorithms requires highly specialized personnel with high levels of training and knowledge in engineering, economics, mathematics, sciences, and statistical analysis.

If there are barriers to entry in the search-engine industry, algorithms do not seem to be the source. While their market shares may be smaller than Google’s, rival search engines like DuckDuckGo and Bing have been able to enter and gain traction; it is difficult to say that algorithmic technology has proven a barrier to entry. It may be hard to do well, but it certainly has not proved an impediment to new firms entering and developing workable and successful products. Indeed, some extremely successful companies have entered into similar advertising markets on the backs of complex algorithms, notably Instagram, Snapchat, and TikTok. All of these compete with Google for advertising dollars.

The FTC’s legal staff also failed to see that Google would face serious competition in the rapidly growing voice assistant market. In other words, even its search-engine “moat” is far less impregnable than it might at first appear.

Moreover, as Ben Thompson argues in his Stratechery newsletter: 

The Staff memo is completely wrong too, at least in terms of the potential for their proposed remedies to lead to any real change in today’s market. This gets back to why the fundamental premise of the Politico article, along with much of the antitrust chatter in Washington, misses the point: Google is dominant because consumers like it.

This difficulty was deftly highlighted by Heyer’s memo:

If the perceived problems here can be solved only through a draconian remedy of this sort, or perhaps through a remedy that eliminates Google’s legitimately obtained market power (and thus its ability to “do evil”), I believe the remedy would be disproportionate to the violation and that its costs would likely exceed its benefits. Conversely, if a remedy well short of this seems likely to prove ineffective, a remedy would be undesirable for that reason. In brief, I do not see a feasible remedy for the vertical conduct that would be both appropriate and effective, and which would not also be very costly to implement and to police. [EMPHASIS ADDED]

Of course, we now know that this turned out to be a huge issue with the EU’s competition cases against Google. The remedies in both the EU’s Google Shopping and Android decisions were severely criticized by rival firms and consumer-defense organizations (here and here), but were ultimately upheld, in part because even the European Commission likely saw more forceful alternatives as disproportionate.

And in the few places where the legal staff concluded that Google’s conduct may have caused harm, there is good reason to think that their analysis was flawed.

Google’s ‘revenue-sharing’ agreements

It should be noted that neither the lawyers nor the economists at the FTC were particularly bullish on bringing suit against Google. In most areas of the investigation, neither recommended that the commission pursue a case. But one of the most interesting revelations from the recent leaks is that FTC lawyers did advise the commission’s leadership to sue Google over revenue-sharing agreements that called for it to pay Apple and other carriers and manufacturers to pre-install its search bar on mobile devices:

FTC staff urged the agency’s five commissioners to sue Google for signing exclusive contracts with Apple and the major wireless carriers that made sure the company’s search engine came pre-installed on smartphones.

The lawyers’ stance is surprising, and, despite actions subsequently brought by the EU and DOJ on similar claims, a difficult one to countenance. 

To a first approximation, this behavior is precisely what antitrust law seeks to promote: we want companies to compete aggressively to attract consumers. This conclusion is in no way altered when competition is “for the market” (in this case, firms bidding for exclusive placement of their search engines) rather than “in the market” (i.e., equally placed search engines competing for eyeballs).

Competition for exclusive placement has several important benefits. For a start, revenue-sharing agreements effectively subsidize consumers’ mobile device purchases. As Brian Albrecht aptly puts it:

This payment from Google means that Apple can lower its price to better compete for consumers. This is standard; some of the payment from Google to Apple will be passed through to consumers in the form of lower prices.
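A back-of-the-envelope model illustrates the mechanism (a textbook sketch of our own, not Albrecht’s). Suppose a device maker faces linear demand $q = A - p$, has marginal cost $c$, and receives a per-unit payment $s$ from the search provider for default placement. Without the payment, the profit-maximizing price is $p^{*} = (A + c)/2$; with it, the device maker’s effective marginal cost falls to $c - s$, so

$$p^{*} = \frac{A + c - s}{2},$$

and the device price falls by $s/2$. With linear demand, half of the payment is passed through to consumers; the exact rate depends on the curvature of demand and the intensity of device competition, but it is generally positive.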

This finding is not new. For instance, Ronald Coase famously argued that the Federal Communications Commission (FCC) was wrong to ban the broadcasting industry’s equivalent of revenue-sharing agreements, so-called payola:

[I]f the playing of a record by a radio station increases the sales of that record, it is both natural and desirable that there should be a charge for this. If this is not done by the station and payola is not allowed, it is inevitable that more resources will be employed in the production and distribution of records, without any gain to consumers, with the result that the real income of the community will tend to decline. In addition, the prohibition of payola may result in worse record programs, will tend to lessen competition, and will involve additional expenditures for regulation. The gain which the ban is thought to bring is to make the purchasing decisions of record buyers more efficient by eliminating “deception.” It seems improbable to me that this problematical gain will offset the undoubted losses which flow from the ban on Payola.

Applying this logic to Google Search, it is clear that a ban on revenue-sharing agreements would merely lead both Google and its competitors to attract consumers via alternative means. For Google, this might involve “complete” vertical integration into the mobile phone market, rather than the open-licensing model that underpins the Android ecosystem. Valuable specialization may be lost in the process.

Moreover, from Apple’s standpoint, Google’s revenue-sharing agreements are profitable only to the extent that consumers actually like Google’s products. If it turns out they don’t, Google’s payments to Apple may be outweighed by lower iPhone sales. It is thus unlikely that these agreements significantly undermined users’ experience. To the contrary, Apple’s testimony before the European Commission suggests that “exclusive” placement of Google’s search engine was mostly driven by consumer preferences (as the FTC economists’ memo points out):

Apple would not offer simultaneous installation of competing search or mapping applications. Apple’s focus is offering its customers the best products out of the box while allowing them to make choices after purchase. In many countries, Google offers the best product or service … Apple believes that offering additional search boxes on its web browsing software would confuse users and detract from Safari’s aesthetic. Too many choices lead to consumer confusion and greatly affect the ‘out of the box’ experience of Apple products.

Similarly, Kevin Murphy and Benjamin Klein have shown that exclusive contracts intensify competition for distribution. In other words, absent theories of platform envelopment that are arguably inapplicable here, competition for exclusive placement would lead competing search engines to up their bids, ultimately lowering the price of mobile devices for consumers.

Indeed, this revenue-sharing model was likely essential to spur the development of Android in the first place. Without this prominent placement of Google Search on Android devices (notably thanks to revenue-sharing agreements with original equipment manufacturers), Google would likely have been unable to monetize the investment it made in the open source—and thus freely distributed—Android operating system. 

In short, Politico and the FTC legal staff do little to show that Google’s revenue-sharing payments excluded rivals that were, in fact, as efficient. In other words, Bing and Yahoo’s failure to gain traction may simply be the result of inferior products and cost structures. Critics thus fail to show that Google’s behavior harmed consumers, which is the touchstone of antitrust enforcement.

Self-preferencing

Another finding critics claim as important is that FTC leadership declined to bring suit against Google for preferencing its own vertical search services (this information had already been partially leaked by the Wall Street Journal in 2015). Politico’s framing implies this was a mistake:

When Google adopted one algorithm change in 2011, rival sites saw significant drops in traffic. Amazon told the FTC that it saw a 35 percent drop in traffic from the comparison-shopping sites that used to send it customers.

The focus on this claim is somewhat surprising. Even the leaked FTC legal staff memo found this theory of harm had little chance of standing up in court:

Staff has investigated whether Google has unlawfully preferenced its own content over that of rivals, while simultaneously demoting rival websites…. 

…Although it is a close call, we do not recommend that the Commission proceed on this cause of action because the case law is not favorable to our theory, which is premised on anticompetitive product design, and in any event, Google’s efficiency justifications are strong. Most importantly, Google can legitimately claim that at least part of the conduct at issue improves its product and benefits users. [EMPHASIS ADDED]

More importantly, as one of us has argued elsewhere, the underlying problem lies not with Google, but with a standard asset-specificity trap:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control…. 

…It was entirely predictable, and should have been expected, that Google’s algorithm would evolve. It was also entirely predictable that it would evolve in ways that could diminish or even tank Foundem’s traffic. As one online marketing/SEO expert puts it: On average, Google makes about 500 algorithm changes per year. 500!….

…In the absence of an explicit agreement, should Google be required to make decisions that protect a dependent company’s “asset-specific” investments, thus encouraging others to take the same, excessive risk? 

Even if consumers happily visited rival websites when they were higher-ranked and traffic subsequently plummeted when Google updated its algorithm, that drop in traffic does not amount to evidence of misconduct. To hold otherwise would be to grant these rivals a virtual entitlement to the state of affairs that exists at any given point in time. 

Indeed, there is good reason to believe Google’s decision to favor its own content over that of other sites is procompetitive. Beyond determining and ensuring relevance, Google surely has the prerogative to compete vigorously and decide how to design its products to keep up with a changing market. In this case, that means designing, developing, and offering its own content in ways that partially displace the original “ten blue links” design of its search results page and instead offer its own answers to users’ queries.

Competitor Harm Is Not an Indicator of the Need for Intervention

Some of the other information revealed by the leak is even more tangential, such as that the FTC ignored complaints from Google’s rivals:

Amazon and Facebook privately complained to the FTC about Google’s conduct, saying their business suffered because of the company’s search bias, scraping of content from rival sites and restrictions on advertisers’ use of competing search engines. 

Amazon said it was so concerned about the prospect of Google monopolizing the search advertising business that it willingly sacrificed revenue by making ad deals aimed at keeping Microsoft’s Bing and Yahoo’s search engine afloat.

But complaints from rivals are at least as likely to stem from vigorous competition as from anticompetitive exclusion. This goes to a core principle of antitrust enforcement: antitrust law seeks to protect competition and consumer welfare, not rivals. Competition will always lead to winners and losers. Antitrust law protects this process and (at least theoretically) ensures that rivals cannot manipulate enforcers to safeguard their economic rents. 

This explains why Frank Easterbrook—in his seminal work on “The Limits of Antitrust”—argued that enforcers should be highly suspicious of complaints lodged by rivals:

Antitrust litigation is attractive as a method of raising rivals’ costs because of the asymmetrical structure of incentives…. 

…One line worth drawing is between suits by rivals and suits by consumers. Business rivals have an interest in higher prices, while consumers seek lower prices. Business rivals seek to raise the costs of production, while consumers have the opposite interest…. 

…They [antitrust enforcers] therefore should treat suits by horizontal competitors with the utmost suspicion. They should dismiss outright some categories of litigation between rivals and subject all such suits to additional scrutiny.

Google’s competitors spent millions pressuring the FTC to bring a case against the company. But why should it be a failing for the FTC to resist such pressure? Indeed, as then-commissioner Tom Rosch admonished in an interview following the closing of the case:

They [Google’s competitors] can darn well bring [a case] as a private antitrust action if they think their ox is being gored instead of free-riding on the government to achieve the same result.

Not that they would likely win such a case. Google’s introduction of specialized shopping results (via the Google Shopping box) likely enabled several retailers to bypass the Amazon platform, thus increasing competition in the retail industry. Although this may have temporarily reduced Amazon’s traffic and revenue (Amazon’s sales have grown dramatically since then), it is exactly the outcome that antitrust laws are designed to protect.

Conclusion

When all is said and done, Politico’s revelations provide a rarely glimpsed look into the complex dynamics within the FTC, which many wrongly imagine to be a monolithic agency. Put simply, the FTC’s commissioners, lawyers, and economists often disagree vehemently about the appropriate course of conduct. This is a good thing. As in many other walks of life, having a market for ideas is a sure way to foster sound decision making.

But in the final analysis, what the revelations do not show is that the FTC’s market for ideas failed consumers a decade ago when it declined to bring an antitrust suit against Google. They thus do little to cement the case for antitrust intervention—whether a decade ago, or today.

The European Commission has unveiled draft legislation (the Digital Services Act, or “DSA”) that would overhaul the rules governing the online lives of its citizens. The draft rules are something of a mixed bag. While online markets present important challenges for law enforcement, the DSA would significantly increase the cost of doing business in Europe and harm the very freedoms European lawmakers seek to protect. The draft’s newly proposed “Know Your Business Customer” (KYBC) obligations, however, will enable smoother operation of the liability regimes that currently apply to online intermediaries. 

These reforms come amid a rash of headlines about election meddling, misinformation, terrorist propaganda, child pornography, and other illegal and abhorrent content spread on digital platforms. These developments have galvanized debate about online liability rules.

Existing rules, codified in the e-Commerce Directive, largely absolve “passive” intermediaries that “play a neutral, merely technical and passive role” from liability for content posted by their users so long as they remove it once notified. “Active” intermediaries have more legal exposure. This regime isn’t perfect, but it seems to have served the EU well in many ways.

With its draft regulation, the European Commission is effectively arguing that those rules fail to address the legal challenges posed by the emergence of digital platforms. As the EC’s press release puts it:

The landscape of digital services is significantly different today from 20 years ago, when the eCommerce Directive was adopted. […]  Online intermediaries […] can be used as a vehicle for disseminating illegal content, or selling illegal goods or services online. Some very large players have emerged as quasi-public spaces for information sharing and online trade. They have become systemic in nature and pose particular risks for users’ rights, information flows and public participation.

Online platforms initially hoped lawmakers would agree to some form of self-regulation, but those hopes were quickly dashed. Facebook released a white paper this spring proposing a more moderate path that would expand regulatory oversight to “ensure companies are making decisions about online speech in a way that minimizes harm but also respects the fundamental right to free expression.” The proposed regime would not impose additional liability for harmful content posted by users, a position that Facebook and other internet platforms reiterated during congressional hearings in the United States.

European lawmakers were not moved by these arguments. EU Commissioner for Internal Market and Services Thierry Breton, among other European officials, dismissed Facebook’s proposal within hours of its publication, saying:

It’s not enough. It’s too slow, it’s too low in terms of responsibility and regulation.

Against this backdrop, the draft DSA includes many far-reaching measures: transparency requirements for recommender systems, content moderation decisions, and online advertising; mandated sharing of data with authorities and researchers; and numerous compliance measures that include internal audits and regular communication with authorities. Moreover, the largest online platforms—so-called “gatekeepers”—will have to comply with a separate regulation that gives European authorities new tools to “protect competition” in digital markets (the Digital Markets Act, or “DMA”).

The upshot is that, if passed into law, the draft rules will place tremendous burdens upon online intermediaries. This would be self-defeating. 

Excessive regulation or liability would significantly increase their cost of doing business, leading to significantly smaller networks and significantly increased barriers to access for many users. Stronger liability rules would also encourage platforms to play it safe, such as by quickly de-platforming and refusing access to anyone who plausibly engaged in illegal activity. Such an outcome would harm the very freedoms European lawmakers seek to protect.

This could prove particularly troublesome for small businesses that find it harder to compete against large platforms due to rising compliance costs. In effect, the new rules will increase barriers to entry, as has already been seen with the GDPR.

In the commission’s defense, some of the proposed reforms are more appealing. This is notably the case with the KYBC requirements, as well as the decision to leave most enforcement to member states, where service providers have their main establishments. The latter is likely to preserve regulatory competition among EU members to attract large tech firms, potentially limiting regulatory overreach.

Indeed, while the existing regime does, to some extent, curb the spread of online crime, it does little for the victims of cybercrime, who ultimately pay the price. Removing illegal content doesn’t prevent it from reappearing in the future, sometimes on the same platform. Importantly, hosts have no obligation to provide the identity of violators to authorities, or even to know their identity in the first place. The result is an endless game of “whack-a-mole”: illegal content is taken down, but immediately reappears elsewhere. This status quo enables malicious users to upload illegal content, such as that which recently led card networks to cut all ties with Pornhub.

Victims arguably need additional tools. This is what the Commission seeks to achieve with the DSA’s “traceability of traders” requirement, a form of KYBC:

Where an online platform allows consumers to conclude distance contracts with traders, it shall ensure that traders can only use its services to promote messages on or to offer products or services to consumers located in the Union if, prior to the use of its services, the online platform has obtained the following information: […]

Instead of rewriting the underlying liability regime—with the harmful unintended consequences that would likely entail—the draft DSA creates parallel rules that require platforms to better protect victims.

Under the proposed rules, intermediaries would be required to obtain the true identity of commercial clients (as opposed to consumers) and to sever ties with businesses that refuse to comply (rather than just take down their content). Such obligations would be, in effect, a version of the “Know Your Customer” regulations that exist in other industries. Banks, for example, are required to conduct due diligence to ensure scofflaws can’t use legitimate financial services to further criminal enterprises. It seems reasonable to expect analogous due diligence from the Internet firms that power so much of today’s online economy.

Obligations requiring platforms to vet their commercial relationships may seem modest, but they’re likely to enable more effective law enforcement against the actual perpetrators of online harms without diminishing platforms’ innovation and the economic opportunity they provide (and that everyone agrees is worth preserving).

There is no silver bullet. Illegal activity will never disappear entirely from the online world, just as it has declined, but not vanished, from other walks of life. But small regulatory changes that offer marginal improvements can have a substantial effect. Modest informational requirements would weed out the most blatant crimes without overly burdening online intermediaries. In short, they would make the Internet a safer place for European citizens.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Geoffrey A. Manne (President, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics) and Dirk Auer (Senior Fellow of Law & Economics, ICLE).]

Back in 2012, Covidien, a large health care products company and medical device manufacturer, purchased Newport Medical Instruments, a small ventilator developer and manufacturer. (Covidien itself was subsequently purchased by Medtronic in 2015).

Eight years later, in the midst of the coronavirus pandemic, the New York Times has just published an article revisiting the Covidien/Newport transaction, and questioning whether it might have contributed to the current shortage of ventilators.

The article speculates that Covidien’s purchase of Newport, and the subsequent discontinuation of Newport’s “Aura” ventilator — which was then being developed by Newport under a government contract — delayed US government efforts to procure mechanical ventilators until the second half of 2020 — too late to treat the first wave of COVID-19 patients:

And then things suddenly veered off course. A multibillion-dollar maker of medical devices bought the small California company that had been hired to design the new machines. The project ultimately produced zero ventilators.

That failure delayed the development of an affordable ventilator by at least half a decade, depriving hospitals, states and the federal government of the ability to stock up.

* * *

Today, with the coronavirus ravaging America’s health care system, the nation’s emergency-response stockpile is still waiting on its first shipment.

The article has generated considerable interest not so much for what it suggests about government procurement policies or for its relevance to the ventilator shortages associated with the current pandemic, but rather for its purported relevance to ongoing antitrust debates and the arguments put forward by “antitrust populists” and others that merger enforcement in the US is dramatically insufficient. 

Only a single sentence in the article itself points to a possible antitrust story — and it does nothing more than report unsubstantiated speculation from unnamed “government officials” and rival companies: 

Government officials and executives at rival ventilator companies said they suspected that Covidien had acquired Newport to prevent it from building a cheaper product that would undermine Covidien’s profits from its existing ventilator business.

Nevertheless, and right on cue, various antitrust scholars quickly framed the deal as a so-called “killer acquisition” (see also here and here).

Unsurprisingly, politicians were also quick to jump on the bandwagon, including David Cicilline, the powerful chairman of the House Antitrust Subcommittee.

And FTC Commissioner Rebecca Kelly Slaughter quickly called for a retrospective review of the deal:

The public reporting on this acquisition raises important questions about the review of this deal. We should absolutely be looking back to figure out what happened.

These “hot takes” raise a crucial issue. The New York Times story opened the door to a welter of hasty conclusions offered to support the ongoing narrative that antitrust enforcement has failed us — in this case quite literally at the cost of human lives. But are any of these claims actually supportable?

Unfortunately, the competitive realities of the mechanical ventilator industry, as well as a more clear-eyed view of what was likely going on with the failed government contract at the heart of the story, simply do not support the “killer acquisition” story.

What is a “killer acquisition”…?

Let’s take a step back. Because monopoly profits are, by definition, higher than joint duopoly profits (all else equal), economists have long argued that incumbents may find it profitable to acquire smaller rivals in order to reduce competition and increase their profits. More specifically, incumbents may be tempted to acquire would-be entrants in order to prevent them from introducing innovations that might hurt the incumbent’s profits.

For this theory to have any purchase, however, a number of conditions must hold. Most importantly, as Colleen Cunningham, Florian Ederer, and Song Ma put it in an influential paper:

“killer acquisitions” can only occur when the entrepreneur’s project overlaps with the acquirer’s existing product…. [W]ithout any product market overlap, the acquirer never has a strictly positive incentive to acquire the entrepreneur… because, without overlap, acquiring the project does not give the acquirer any gains resulting from reduced competition, and the two bargaining entities have exactly the same value for the project.

Moreover, the authors add that:

Successfully developing a new product draws consumer demand and profits away equally from all existing products. An acquiring incumbent is hurt more by such cannibalization when he is a monopolist (i.e., the new product draws demand away only from his own existing product) than when he already faces many other existing competitors (i.e., cannibalization losses are spread over many firms). As a result, as the number of existing competitors increases, the replacement effect decreases and the acquirer’s development decisions become more similar to those of the entrepreneur.

Finally, the “killer acquisition” terminology is appropriate only when the incumbent chooses to discontinue its rival’s R&D project:

If incumbents face significant existing competition, acquired projects are not significantly more frequently discontinued than independent projects. Thus, more competition deters incumbents from acquiring and terminating the projects of potential future competitors, which leads to more competition in the future.

…And what isn’t a killer acquisition?

What is left out of this account of killer acquisitions is the age-old possibility that an acquirer purchases a rival precisely because it has superior know-how or a superior governance structure that enables it to realize greater returns and productivity than its target. In the case of a so-called killer acquisition, this means shutting down a negative-ROI project and redeploying resources to other projects or other uses — including those that may not have any direct relation to the discontinued project.

Such “synergistic” mergers are also — like allegedly “killer” mergers — likely to involve acquirers and targets in the same industry and with technological overlap between their R&D projects; it is in precisely these situations that the acquirer is likely to have better knowledge than the target’s shareholders that the target is undervalued because of poor governance rather than exogenous, environmental factors.  

In other words, whether an acquisition is harmful or not — as the epithet “killer” implies it is — depends on whether it is about reducing competition from a rival, on the one hand, or about increasing the acquirer’s competitiveness by putting resources to more productive use, on the other.

As argued below, it is highly unlikely that Covidien’s acquisition of Newport could be classified as a “killer acquisition.” There is thus nothing to suggest that the merger materially impaired competition in the mechanical ventilator market, or that it measurably affected the US’s efforts to fight COVID-19.

The realities of the ventilator market and their implications for the “killer acquisition” story

1. The mechanical ventilator market is highly competitive

As explained above, “killer acquisitions” are less likely to occur in competitive markets. Yet the mechanical ventilator industry is extremely competitive. 

A number of reports conclude that there is significant competition in the industry. One source cites at least seven large producers. Another report cites eleven large players. And, in the words of another report:

Medical ventilators market competition is intense. 

The conclusion that the mechanical ventilator industry is highly competitive is further supported by the fact that the five largest producers combined reportedly hold only 50% of the market. In other words, available evidence suggests that none of these firms has anything close to a monopoly position. 

This intense competition, along with the small market shares of the merging firms, likely explains why the FTC declined to open an in-depth investigation into Covidien’s acquisition of Newport.

Similarly, following preliminary investigations, neither the FTC nor the European Commission saw the need for an in-depth look at the ventilator market when they reviewed Medtronic’s subsequent acquisition of Covidien (which closed in 2015). Although Medtronic did not produce any mechanical ventilators before the acquisition, authorities (particularly the European Commission) could nevertheless have analyzed that market if Covidien’s presumptive market share was particularly high. The fact that they declined to do so tends to suggest that the ventilator market was relatively unconcentrated.

2. The value of the merger was too small

A second strong reason to believe that Covidien’s purchase of Newport wasn’t a killer acquisition is the acquisition’s value of $103 million.

Indeed, if it was clear that Newport was about to revolutionize the ventilator market, then Covidien would likely have been made to pay significantly more than $103 million to acquire it. 

As noted above, the crux of the “killer acquisition” theory is that incumbents can induce welfare-reducing acquisitions by offering to acquire their rivals for significantly more than the present value of their rivals’ expected profits. Because an incumbent undertaking a “killer” takeover expects to earn monopoly profits as a result of the transaction, it can offer a substantial premium and still profit from its investment. It is this basic asymmetry that drives the theory.
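To see that asymmetry in stylized terms, consider the following back-of-the-envelope sketch in Python. Every number in it is our own illustrative assumption — none comes from any real transaction:

```python
# Stylized numbers (assumed purely for illustration): monopoly profits
# exceed combined duopoly profits, so an incumbent can outbid the
# target's standalone value and still come out ahead.

PI_MONOPOLY = 100        # incumbent's profits if entry never happens
PI_DUO_INCUMBENT = 40    # incumbent's profits if the entrant succeeds
PI_DUO_ENTRANT = 30      # entrant's standalone (duopoly) profits

# The most the incumbent will pay is the value of the competition it avoids...
incumbent_wtp = PI_MONOPOLY - PI_DUO_INCUMBENT   # 60
# ...while the target's shareholders will accept anything above
# their standalone value.
target_reservation = PI_DUO_ENTRANT              # 30

premium = incumbent_wtp - target_reservation     # 30: room for a "killer" premium
print(f"Maximum feasible premium over standalone value: {premium}")
```

The gap between the incumbent’s willingness to pay and the target’s reservation value is what allows a “killer” buyer to offer a visible premium — which is why a conspicuously low purchase price cuts against the theory.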

Indeed, as a recent article by Kevin Bryan and Erik Hovenkamp notes, an acquisition value out of line with current revenues may be an indicator of the significance of a pending acquisition in which enforcers may not actually know the value of the target’s underlying technology: 

[Where] a court may lack the expertise to [assess the commercial significance of acquired technology]…, the transaction value… may provide a reasonable proxy. Intuitively, if the startup is a relatively small company with relatively few sales to its name, then a very high acquisition price may reasonably suggest that the startup technology has significant promise.

The strategy only works, however, if the target firm’s shareholders agree that share value properly reflects only “normal” expected profits, and not that the target is poised to revolutionize its market with a uniquely low-cost or high-quality product. Low acquisition prices relative to market size, therefore, tend to reflect low (or normal) expected profits, and a low perceived likelihood of radical innovations occurring.

We can apply this reasoning to Covidien’s acquisition of Newport: 

  • Precise and publicly available figures concerning the mechanical ventilator market are hard to come by. Nevertheless, one estimate finds that the global ventilator market was worth $2.715 billion in 2012. Another report suggests that the global market was worth $4.30 billion in 2018; still another that it was worth $4.58 billion in 2019.
  • As noted above, Covidien reported to the SEC that it paid $103 million to purchase Newport (a firm that produced only ventilators and apparently had no plans to branch out). 
  • For context, at the time of the acquisition Covidien had annual sales of $11.8 billion overall, and $743 million in sales of its existing “Airways and Ventilation Products.”

If the ventilator market was indeed worth billions of dollars per year, then the comparatively small $103 million paid by Covidien — small even relative to Covidien’s own share of the market — suggests that, at the time of the acquisition, it was unlikely that Newport was poised to revolutionize the market for mechanical ventilators (for instance, by successfully bringing its Aura ventilator to market). 

The New York Times article claimed that Newport’s ventilators would be sold (at least to the US government) for $3,000 — a substantial discount from the reportedly then-going rate of $10,000. If selling ventilators at this price seemed credible at the time, then Covidien — as well as Newport’s shareholders — knew that Newport was about to achieve tremendous cost savings, enabling it to offer ventilators not only to the US government, but to purchasers around the world, at an irresistibly attractive — and profitable — price. As the Times reported:

Ventilators at the time typically went for about $10,000 each, and getting the price down to $3,000 would be tough. But Newport’s executives bet they would be able to make up for any losses by selling the ventilators around the world.

“It would be very prestigious to be recognized as a supplier to the federal government,” said Richard Crawford, who was Newport’s head of research and development at the time. “We thought the international market would be strong, and there is where Newport would have a good profit on the product.”

If achievable, Newport thus stood to earn a substantial share of the profits in a multi-billion dollar industry. 

Of course, it is necessary to apply a probability to these numbers: Newport’s ventilator was not yet on the market, and had not yet received FDA approval. Nevertheless, if the Times’ numbers seemed credible at the time, then Covidien would surely have had to offer significantly more than $103 million in order to induce Newport’s shareholders to part with their shares.
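A rough back-of-the-envelope calculation illustrates the point. The sketch below is ours; every input other than the $103 million purchase price is an assumption chosen purely for illustration, not a figure from the record:

```python
# Back-of-the-envelope: what probability of success for the Aura is
# consistent with a $103M price? Only the price comes from the record;
# the other inputs are illustrative assumptions.

price = 103e6            # what Covidien actually paid
normal_value = 80e6      # assumed value of Newport's existing business
# Assumed payoff if the Aura really disrupted a multi-billion-dollar
# market: e.g., a 10% share of a $3B annual market at a 20% margin,
# capitalized at 10x earnings.
moonshot_value = 0.10 * 3e9 * 0.20 * 10   # $600M

# price ~= normal_value + p * (moonshot_value - normal_value)
p = (price - normal_value) / (moonshot_value - normal_value)
print(f"Implied probability of the moonshot paying off: {p:.1%}")  # ~4.4%
```

Under these (admittedly arbitrary) assumptions, the price implies that everyone involved assigned the Aura only a small chance of success.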

Given the low valuation, however, as well as the fact that Newport produced other ventilators (and continues to do so to this day), there is no escaping the fact that everyone involved seemed to view Newport’s Aura ventilator as nothing more than a moonshot with, at best, a low likelihood of success. 

Crucially, this same reasoning explains why it shouldn’t surprise anyone that the project was ultimately discontinued; recourse to a “killer acquisition” theory is hardly necessary.

3. Lessons from Covidien’s ventilator product decisions  

The killer acquisition claims are further weakened by at least four other important pieces of information: 

  1. Covidien initially continued to develop Newport’s Aura ventilator, and continued to develop and sell Newport’s other ventilators.
  2. There was little overlap between Covidien’s and Newport’s ventilators — or, at the very least, they were highly differentiated.
  3. Covidien appears to have discontinued production of its own portable ventilator in 2014.
  4. The Newport purchase was part of a billion-dollar series of acquisitions seemingly aimed at expanding Covidien’s in-hospital (i.e., non-portable) device portfolio.

Covidien initially continued to develop Newport’s Aura ventilator

For a start, while the Aura line was indeed discontinued by Covidien, the timeline is important. The acquisition of Newport by Covidien was announced in March 2012, approved by the FTC in April of the same year, and the deal was closed on May 1, 2012.

However, as the FDA’s 510(k) database makes clear, Newport submitted documents for FDA clearance of the Aura ventilator months after its acquisition by Covidien (June 29, 2012, to be precise). And the Aura received FDA 510(k) clearance on November 9, 2012 — many months after the merger.

It would have made little sense for Covidien to invest significant sums in order to obtain FDA clearance for a project that it planned to discontinue (the FDA routinely requires parties to actively cooperate with it, even after 510(k) applications are submitted). 

Moreover, if Covidien really did plan to discreetly kill off the Aura ventilator, bungling the FDA clearance procedure would have been the perfect cover under which to do so. Yet that is not what it did.

Covidien continued to develop and sell Newport’s other ventilators

Second, and just as importantly, Covidien (and subsequently Medtronic) continued to sell Newport’s other ventilators. The Newport e360 and HT70 are still sold today. Covidien also continued to improve these products: it appears to have introduced an improved version of the Newport HT70 Plus ventilator in 2013.

If eliminating its competitor’s superior ventilators was the only goal of the merger, then why didn’t Covidien also eliminate these two products from its lineup, rather than continue to improve and sell them? 

At least part of the answer, as will be seen below, is that there was almost no overlap between Covidien and Newport’s product lines.

There was little overlap between Covidien’s and Newport’s ventilators

Third — and perhaps the biggest flaw in the killer acquisition story — is that there appears to have been very little overlap between Covidien and Newport’s ventilators. 

This decreases the likelihood that the merger was a killer acquisition. When two products are highly differentiated (or not substitutes at all), sales of the first are less likely to cannibalize sales of the other. As Florian Ederer and his co-authors put it:

Importantly, without any product market overlap, the acquirer never has a strictly positive incentive to acquire the entrepreneur, neither to “Acquire to Kill” nor to “Acquire to Continue.” This is because without overlap, acquiring the project does not give the acquirer any gains resulting from reduced competition, and the two bargaining entities have exactly the same value for the project.

A quick search of the FDA’s 510(k) database reveals that Covidien has three approved lines of ventilators: the Puritan Bennett 980, 840, and 540 (apparently essentially the same as the PB560, the plans to which Medtronic recently made freely available in order to facilitate production during the current crisis). The same database shows that these ventilators differ markedly from Newport’s ventilators (particularly the Aura).

In particular, Covidien manufactured primarily traditional, invasive ICU ventilators (except for the PB540, which is potentially a substitute for the Newport HT70), while Newport made much more portable ventilators, suitable for home use (notably the Aura, HT50 and HT70 lines). 

Under normal circumstances, critical care and portable ventilators are not substitutes. As the WHO website explains, portable ventilators are:

[D]esigned to provide support to patients who do not require complex critical care ventilators.

A quick glance at Medtronic’s website neatly illustrates the stark differences between these two types of devices.

This is not to say that these devices do not have similar functionalities, or that they cannot become substitutes in the midst of a coronavirus pandemic. However, in normal times (as was the case when Covidien acquired Newport), hospitals likely did not view these devices as substitutes.

The conclusion that Covidien and Newport’s ventilators were not substitutes finds further support in documents and statements released at the time of the merger. For instance, Covidien’s CEO explained that:

This acquisition is consistent with Covidien’s strategy to expand into adjacencies and invest in product categories where it can develop a global competitive advantage.

And that:

Newport’s products and technology complement our current portfolio of respiratory solutions and will broaden our ventilation platform for patients around the world, particularly in emerging markets.

In short, the fact that almost all of Covidien and Newport’s products were not substitutes further undermines the killer acquisition story. It also tends to vindicate the FTC’s decision to rapidly terminate its investigation of the merger.

Covidien appears to have discontinued production of its own portable ventilator in 2014

Perhaps most tellingly: It appears that Covidien discontinued production of its own competing, portable ventilator, the Puritan Bennett 560, in 2014.

The product is listed in the company’s 2011, 2012 and 2013 annual reports:

Airway and Ventilation Products — airway, ventilator, breathing systems and inhalation therapy products. Key products include: the Puritan Bennett™ 840 line of ventilators; the Puritan Bennett™ 520 and 560 portable ventilator….

(The PB540 was launched in 2009; the updated PB560 in 2010. The PB520 was the EU version of the device, launched in 2011).

But in 2014, the PB560 was no longer listed among the company’s ventilator products:  

Airway & Ventilation, which primarily includes sales of airway, ventilator and inhalation therapy products and breathing systems.

Key airway & ventilation products include: the Puritan Bennett™ 840 and 980 ventilators, the Newport™ e360 and HT70 ventilators….

Nor — despite its March 31 and April 1 “open sourcing” of the specifications and software necessary to enable others to produce the PB560 — did Medtronic appear to have restarted production, and the company did not mention the device in its March 18 press release announcing its own, stepped-up ventilator production plans.

Surely if Covidien had intended to capture the portable ventilator market by killing off its competition it would have continued to actually sell its own, competing device. The fact that the only portable ventilators produced by Covidien by 2014 were those it acquired in the Newport deal strongly suggests that its objective in that deal was the acquisition and deployment of Newport’s viable and profitable technologies — not the abandonment of them. This, in turn, suggests that the Aura was not a viable and profitable technology.

(Admittedly we are unable to determine conclusively that either Covidien or Medtronic stopped producing the PB520/540/560 series of ventilators. But our research seems to indicate strongly that this is indeed the case).

Putting the Newport deal in context

Finally, although not dispositive, it seems important to put the Newport purchase into context. In the same year as it purchased Newport, Covidien paid more than a billion dollars to acquire five other companies, as well — all of them primarily producing in-hospital medical devices. 

That 2012 spending spree came on the heels of a series of previous medical device company acquisitions, apparently totaling some four billion dollars. Although not exclusively so, the acquisitions undertaken by Covidien seem to have been primarily targeted at operating room and in-hospital monitoring and treatment — making the putative focus on cornering the portable (home and emergency) ventilator market an extremely unlikely one. 

By the time Covidien was purchased by Medtronic, the deal easily cleared antitrust review because of the lack of overlap between the companies’ products, with Covidien focused predominantly on in-hospital, “diagnostic, surgical, and critical care” products and Medtronic on post-acute care.

Newport misjudged the costs associated with its Aura project; Covidien was left to pick up the pieces

So why was the Aura ventilator discontinued?

Although it is almost impossible to know what motivated Covidien’s executives, the Aura ventilator project clearly suffered from many problems. 

The Aura project was intended to meet the requirements of the US government’s BARDA program (under the auspices of the U.S. Department of Health and Human Services’ Biomedical Advanced Research and Development Authority). In short, the program sought to create a stockpile of next generation ventilators for emergency situations — including, notably, pandemics. The ventilator would thus have to be designed for events where

mass casualties may be expected, and when shortages of experienced health care providers with respiratory support training, and shortages of ventilators and accessory components may be expected.

The Aura ventilator would thus sit somewhere between Newport’s two other ventilators: the e360, which could be used in pediatric care (for newborns smaller than 5kg) but was not intended for home care use (or the extreme scenarios envisioned by the US government); and the more portable HT70, which could be used in home care environments, but not for newborns. 

Unfortunately, the Aura failed to achieve this goal. The FDA’s 510(k) clearance decision clearly states that the Aura was not intended for newborns:

The AURA family of ventilators is applicable for infant, pediatric and adult patients greater than or equal to 5 kg (11 lbs.).

A press release issued by Medtronic confirms that

the company was unable to secure FDA approval for use in neonatal populations — a contract requirement.

And the US Government RFP confirms that this was indeed an important requirement:

The device must be able to provide the same standard of performance as current FDA pre-market cleared portable ventilators and shall have the following additional characteristics or features: 

Flexibility to accommodate a wide patient population range from neonate to adult.

Newport also seems to have been unable to deliver the ventilator at the low price it had initially forecasted — a common problem for small companies and/or companies that undertake large R&D programs. It also struggled to complete the project within the agreed-upon deadlines. As the Medtronic press release explains:

Covidien learned that Newport’s work on the ventilator design for the Government had significant gaps between what it had promised the Government and what it could deliver, both in terms of being able to achieve the cost of production specified in the contract and product features and performance. Covidien management questioned whether Newport’s ability to complete the project as agreed to in the contract was realistic.

As Jason Crawford, an engineer and tech industry commentator, put it:

Projects fail all the time. “Supplier risk” should be a standard checkbox on anyone’s contingency planning efforts. This is even more so when you deliberately push the price down to 30% of the market rate. Newport did not even necessarily expect to be profitable on the contract.

The above is mostly Covidien’s “side” of the story, of course. But other pieces of evidence lend some credibility to these claims:

  • Newport agreed to deliver its Aura ventilator at a per-unit cost of less than $3,000. But, even today, this seems extremely ambitious. For instance, the WHO has estimated that portable ventilators cost between $3,300 and $13,500. If Newport could profitably sell the Aura at such a low price, then there was little reason to discontinue it (readers will recall the development of the ventilator was mostly complete when Covidien put a halt to the project).
  • Covidien/Newport is not the only firm to have struggled to offer suitable ventilators at such a low price. Philips (which took Newport’s place after the government contract fell through) also failed to achieve this low price. Rather than the $2,000 price sought in the initial RFP, Philips ultimately agreed to produce the ventilators for $3,280. But it has not yet been able to produce a single ventilator under the government contract at that price.
  • Covidien has repeatedly been forced to recall some of its other ventilators (here, here and here) — including the Newport HT70. And rival manufacturers have also faced these types of issues (for example, here and here). 

Accordingly, Covidien may well have preferred to cut its losses on the already problem-prone Aura project, before similar issues rendered it even more costly. 

In short, while it is impossible to prove that these development issues caused Covidien to pull the plug on the Aura project, it is certainly plausible that they did. This further supports the hypothesis that Covidien’s acquisition of Newport was not a killer acquisition. 

Ending the Aura project might have been an efficient outcome

As suggested above, moreover, it is entirely possible that Covidien was better able to recognize the poor prospects of Newport’s Aura project, and better organized to make the requisite decision to abandon it.

A small company like Newport faces greater difficulties abandoning entrepreneurial projects because doing so can impair a privately held firm’s ability to raise funds for subsequent projects.

Moreover, the relatively large share of revenue and reputation that Newport — worth $103 million in 2012, versus Covidien’s $11.8 billion in annual sales — would have realized from fulfilling a substantial US government project could well have induced it to overestimate the project’s viability and to undertake excessive risk in the (vain) hope of bringing the project to fruition.  

While there is a tendency among antitrust scholars, enforcers, and practitioners to look for (and find…) antitrust-related rationales for mergers and other corporate conduct, it remains the case that most corporate control transactions (such as mergers) are driven by the acquiring firm’s expectation that it can manage the target more efficiently. As Henry G. Manne put it in his seminal article, Mergers and the Market for Corporate Control (1965): 

Since, in a world of uncertainty, profitable transactions will be entered into more often by those whose information is relatively more reliable, it should not surprise us that mergers within the same industry have been a principal form of changing corporate control. Reliable information is often available to suppliers and customers as well. Thus many vertical mergers may be of the control takeover variety rather than of the “foreclosure of competitors” or scale-economies type.

Of course, the same information that renders an acquiring firm in the same line of business knowledgeable enough to operate a target more efficiently could also enable it to effect a “killer acquisition” strategy. But the important point is that a takeover by a firm with a competing product line, after which the purchased company’s product line is abandoned, is at least as consistent with a “market for corporate control” story as with a “killer acquisition” story.

Indeed, as Florian Ederer himself noted with respect to the Covidien/Newport merger, 

“Killer acquisitions” can have a nefarious image, but killing off a rival’s product was probably not the main purpose of the transaction, Ederer said. He raised the possibility that Covidien decided to kill Newport’s innovation upon realising that the development of the devices would be expensive and unlikely to result in profits.

Concluding remarks

In conclusion, Covidien’s acquisition of Newport offers a cautionary tale about reckless journalism, “blackboard economics,” and government failure.

Reckless journalism because the New York Times clearly failed to do the appropriate due diligence for its story. Its journalists notably missed (or deliberately failed to mention) a number of critical pieces of information — such as the hugely important fact that most of Covidien’s and Newport’s products did not overlap, or the fact that there were numerous competitors in the highly competitive mechanical ventilator industry. 

And yet, that did not stop the authors from publishing their extremely alarming story, effectively suggesting that a small medical device merger materially contributed to the loss of many American lives.

The story also falls prey to what Ronald Coase called “blackboard economics”:

What is studied is a system which lives in the minds of economists but not on earth. 

Numerous commentators rushed to fit the story to their preconceived narratives, failing to undertake even a rudimentary examination of the underlying market conditions before they voiced their recriminations. 

The only thing that Covidien and Newport’s merger ostensibly had in common with the killer acquisition theory was the fact that a large firm purchased a small rival, and that one of the small firm’s products was discontinued. But this does not even begin to meet the stringent conditions that must be fulfilled for the theory to hold water. Unfortunately, critics appear to have completely ignored all contradicting evidence. 

Finally, what the New York Times piece does offer is a chilling tale of government failure.

The inception of the US government’s BARDA program dates back to 2008 — twelve years before the COVID-19 pandemic hit the US. 

The collapse of the Aura project is no excuse for the fact that, more than six years after the Newport contract fell through, the US government still has not obtained the necessary ventilators. Questions should also be raised about the government’s decision to effectively put all of its eggs in the same basket — twice. If anything, it is thus government failure that was the real culprit. 

And yet the New York Times piece and the critics shouting “killer acquisition!” effectively give the US government’s abject failure here a free pass — all in the service of pursuing their preferred “killer story.”

Qualcomm is currently in the midst of a high-profile antitrust case against the FTC. At the heart of these proceedings lies Qualcomm’s so-called “No License, No Chips” (NLNC) policy, whereby it purportedly refuses to sell chips to OEMs that have not concluded a license agreement covering its underlying intellectual property. According to the FTC and Qualcomm’s opponents, this ultimately thwarts competition in the chipset market.

Against this backdrop, Mark Lemley, Douglas Melamed, and Steven Salop penned a high-profile amicus brief supporting the FTC’s stance. 

We responded to their brief in a Truth on the Market blog post, and this led to a series of blog exchanges between the amici and ourselves. 

This post summarizes these exchanges.

1. Amicus brief supporting the FTC’s stance, and ICLE brief in support of Qualcomm’s position

The starting point of this blog exchange was an amicus brief written by Mark Lemley, Douglas Melamed, and Steven Salop (“the amici”), and signed by 40 law and economics scholars. 

The amici made two key normative claims:

  • Qualcomm’s no license, no chips policy is unlawful under well-established antitrust principles: 
    “Qualcomm uses the NLNC policy to make it more expensive for OEMs to purchase competitors’ chipsets, and thereby disadvantages rivals and creates artificial barriers to entry and competition in the chipset markets.”
  • Qualcomm’s refusal to license chipset rivals reinforces the no license, no chips policy and violates the antitrust laws:
    “Qualcomm’s refusal to license chipmakers is also unlawful, in part because it bolsters the NLNC policy. In addition, Qualcomm’s refusal to license chipmakers increases the costs of using rival chipsets, excludes rivals, and raises barriers to entry even if NLNC is not itself illegal.”

It is important to note that ICLE also filed an amicus brief in these proceedings. Contrary to the amici, ICLE’s scholars concluded that Qualcomm’s behavior did not raise any antitrust concerns and was ultimately a matter of contract law.

2. ICLE response to the Lemley, Melamed, and Salop amicus brief

We responded to the amici in a first blog post.

The post argued that the amici failed to convincingly show that Qualcomm’s NLNC policy was exclusionary. We notably highlighted two important factors.

  • First, Qualcomm could not use its chipset position and NLNC policy to avert the threat of FRAND litigation and thereby extract supracompetitive royalties:
    “Qualcomm will be unable to charge a total price that is significantly above the price of rivals’ chips, plus the FRAND rate for its IP (and expected litigation costs).”
  • Second, Qualcomm’s behavior did not appear to fall within standard patterns of strategic behavior:
    “The amici attempt to overcome this weakness by implicitly framing their argument in terms of exclusivity, strategic entry deterrence, and tying […]. But none of these arguments totally overcomes the flaw in their reasoning.” 

3. Amici’s counterargument 

The amici wrote a thoughtful response to our post. Their piece rested on two main arguments:

  • The amici underlined that their theory of anticompetitive harm did not imply any form of profit sacrifice on Qualcomm’s part (in the chip segment):
    Manne and Auer seem to think that the concern with the no license/no chips policy is that it enables inflated patent royalties to subsidize a profit sacrifice in chip sales, as if the issue were predatory pricing in chips.  But there is no such sacrifice.
  • The deleterious effects of Qualcomm’s behavior were merely a function of its NLNC policy and strong chipset position. In conjunction, these two factors deterred OEMs from pursuing FRAND litigation:
    Qualcomm is able to charge more than $2 for the license only because it uses the power of its chip monopoly to coerce the OEMs to give up the option of negotiating in light of the otherwise applicable constraints on the royalties it can charge.

4. ICLE rebuttal

We then responded to the amici with the following points:

  • We agreed that it would be a problem if Qualcomm could prevent OEMs from negotiating license agreements in the shadow of FRAND litigation:
    “The critical question is whether there is a realistic threat of litigation to constrain the royalties commanded by Qualcomm (we believe that Lemley et al. agree with us on this point).”
  • However, Qualcomm’s behavior did not preclude OEMs from pursuing this type of strategy:
    We believe the following facts support our assertion:
    OEMs have pursued various litigation strategies in order to obtain lower rates on Qualcomm’s IP. […]
    For the most part, Qualcomm’s threats to cut off chip supplies were just that: threats. […]
    OEMs also wield powerful threats. […]
    Qualcomm’s chipsets might no longer be “must-buys” in the future.”

 5. Amici’s surrebuttal

The amici sent us a final response (reproduced here in full):

In their original post, Manne and Auer argued that the antitrust argument against Qualcomm’s no license/no chips policy was based on bad economics and bad law.  They now seem to have abandoned that argument and claim instead – contrary to the extensive factual findings of the district court – that, while Qualcomm threatened to cut off chips, it was a paper tiger that OEMs could, and knew they could, ignore.  The implication is that the Ninth Circuit should affirm the district court on the no license/ no chips issue unless it sets aside the court’s fact findings.  That seems like agreement with the position of our amicus brief.

We will not in this post review the huge factual record. We do note, however, that Manne and Auer cite in support of their factual argument only that 3 industry giants brought and then settled litigation against Qualcomm. But all 3 brought antitrust litigation; their doing so hardly proves that contract litigation or what Manne and Auer call “holdout” were viable options for anyone, much less for smaller OEMs. The fact that Qualcomm found it necessary to actually cut off only one OEM – and that it took the OEM only 7 days to capitulate – certainly does not prove that Qualcomm’s threats lacked credibility. Notably, Manne and Auer do not claim that any OEMs bought chips from competitors of Qualcomm (although Apple bought some chips from Intel for a short while). No license/no chips appears to have been a successful, coercive policy, not an easily ignored threat.

6. Concluding remarks

First and foremost, we would like to thank the amici for thoughtfully engaging with us. This is what the law & economics tradition is all about: moving the ball forward by taking part in vigorous, multidisciplinary debates.

With that said, we do feel compelled to leave readers with two short remarks. 

First, contrary to what the amici claim, we believe that our position has remained the same throughout these debates. 

Second, and more importantly, we think that everyone agrees that the critical question is whether OEMs were prevented from negotiating licenses in the shadow of FRAND litigation. 

We leave it up to Truth on the Market readers to judge which side of this debate is correct.

Last week, we posted a piece on TOTM, criticizing the amicus brief written by Mark Lemley, Douglas Melamed and Steven Salop in the ongoing Qualcomm litigation. The authors prepared a thoughtful response to our piece, which we published today on TOTM. 

In this post, we highlight the points where we agree with the amici (or at least we think so), as well as those where we differ.

Negotiating in the shadow of FRAND litigation

Let us imagine a hypothetical world where an OEM must source one chipset from Qualcomm (i.e. this segment of the market is non-contestable) and one chipset from either Qualcomm or its rivals (i.e. this segment is contestable). For both of these chipsets, the OEM must also reach a license agreement with Qualcomm.

We use the same numbers as the amici: 

  • The OEM has a reserve price of $20 for each chip/license combination. 
  • Rivals can produce chips at a cost of $11. 
  • The hypothetical FRAND benchmark is $2 per chip. 

With these numbers in mind, the critical question is whether there is a realistic threat of litigation to constrain the royalties commanded by Qualcomm (we believe that Lemley et al. agree with us on this point). The following table shows the prices that a hypothetical OEM would be willing to pay in both of these scenarios:

[Table omitted: the prices the OEM would pay in each scenario. The table’s blue cells marked the segments where Qualcomm can increase its profits if the threat of litigation is removed.]

When the threat of litigation is present, Qualcomm obtains a total of $20 for the combination of non-contestable chips and IP. Qualcomm can use its chipset position to evade FRAND and charge the combined monopoly price of $20. At a chipset cost of $11, it would thus make $9 worth of profits. However, it earns only $13 for contestable chips ($2 in profits). This is because competition brings the price of chips down to $11 and Qualcomm does not have a chipset advantage to earn more than the FRAND rate for its IP.

When the threat of litigation is taken off the table, all chipsets effectively become non-contestable. Qualcomm still earns $20 for its previously non-contestable chips. But it can now raise its IP rate above the FRAND benchmark in the previously contestable segment (for example, by charging $10 for the IP). This squeezes its chipset competitors.
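To make the comparison concrete, here is a minimal sketch of the two scenarios. The $20 reserve price, $11 chip cost, and $2 FRAND benchmark are the amici’s numbers; the assumption that Qualcomm’s chip cost equals its rivals’ is our simplification:

```python
# Qualcomm's per-unit profit in each segment, with and without a
# credible threat of FRAND litigation. Numbers come from the amici's
# hypothetical; equal chip costs are our simplifying assumption.

RESERVE = 20      # OEM's reserve price per chip/license combination
CHIP_COST = 11    # chip production cost (rivals' and, we assume, Qualcomm's)
FRAND = 2         # hypothetical FRAND royalty

def qualcomm_profit(contestable, litigation_threat):
    if not contestable:
        return RESERVE - CHIP_COST      # $9: full monopoly margin
    if litigation_threat:
        # Rivals price chips at cost; the royalty is capped at FRAND.
        return FRAND                    # $2
    # No litigation threat: the royalty can rise until rivals' chips cost
    # more all-in than Qualcomm's bundle, so this segment is effectively
    # monopolized too.
    return RESERVE - CHIP_COST          # $9

for contestable in (False, True):
    for threat in (True, False):
        label = "contestable" if contestable else "non-contestable"
        print(f"{label:15} threat={threat}: ${qualcomm_profit(contestable, threat)}")
```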

If our understanding of the amici’s response is correct, they argue that the combination of Qualcomm’s strong chipset position and its “No License, No Chips” policy (“NLNC”) effectively nullifies the threat of litigation:

Qualcomm is able to charge more than $2 for the license only because it uses the power of its chip monopoly to coerce the OEMs to give up the option of negotiating in light of the otherwise applicable constraints on the royalties it can charge. 

According to the amici, the market thus moves from a state of imperfect competition (where OEMs would pay $33 for two chips and QC’s license) to a world of monopoly (where they pay the full $40).

We beg to differ. 

Our points of disagreement

From an economic standpoint, the critical question is the extent to which Qualcomm’s chipset position and its NLNC policy deter OEMs from obtaining closer-to-FRAND rates.

While the case record is mixed and contains some ambiguities, we think it strongly suggests that Qualcomm’s chipset position and its NLNC policy do not preclude OEMs from using litigation to obtain rates that are close to the FRAND benchmark. There is thus no reason to believe that it can exclude its chipset rivals.

We believe the following facts support our assertion:

  • OEMs have pursued various litigation strategies in order to obtain lower rates on Qualcomm’s IP. As we mentioned in our previous post, this was notably the case for Apple, Samsung and LG. All three companies ultimately reached settlements with Qualcomm (and these settlements were concluded in the shadow of litigation proceedings — indeed, in Apple’s case, on the second day of trial). If anything, this suggests that court proceedings are an integral part of negotiations between Qualcomm and its OEMs.
  • For the most part, Qualcomm’s threats to cut off chip supplies were just that: threats. In any negotiation, parties will try to convince their counterpart that they have a strong outside option. Qualcomm may have done so by posturing that it would not sell chips to OEMs before they concluded a license agreement. 

    However, it seems that only once did Qualcomm apparently follow through with its threats to withhold chips (against Sony). And even then, the supply cutoff lasted only seven days.

    And while many OEMs did take Qualcomm to court in order to obtain more favorable license terms, this never resulted in Qualcomm cutting off their chipset supplies. Other OEMs thus had no reason to believe that litigation would entail disruptions to their chipset supplies.
  • OEMs also wield powerful threats. These include patent holdout, litigation, vertical integration, and purchasing chips from Qualcomm’s rivals. And, of course, they have aggressively pushed antitrust authorities around the world to bring this and other litigation — even quite possibly manipulating the record to bolster their cases. Here’s how one observer sums up Apple’s activity in this regard:

    “Although we really only managed to get a small glimpse of Qualcomm’s evidence demonstrating the extent of Apple’s coordinated strategy to manipulate the FRAND license rate, that glimpse was particularly enlightening. It demonstrated a decade-long coordinated effort within Apple to systematically engage in what can only fairly be described as manipulation (if not creation of evidence) and classic holdout.

    Qualcomm showed during opening arguments that, dating back to at least 2009, Apple had been laying the foundation for challenging its longstanding relationship with Qualcomm.” (Emphasis added)

    Moreover, the holdout and litigation paths have been strengthened by the eBay case, which significantly reduced the financial risks involved in pursuing a holdout and/or litigation strategy. Given all of this, it is far from obvious that it is Qualcomm who enjoys the stronger bargaining position here.
  • Qualcomm’s chipsets might no longer be “must-buys” in the future. Rivals have gained increasing traction over the past couple of years. And with 5G just around the corner, this momentum could conceivably accelerate. Whether or not one believes that this will ultimately be the case, the trend surely places additional constraints on Qualcomm’s conduct. Aggressive behavior today may spur rivals to enter the chipset market, or disgruntled OEMs to switch suppliers, tomorrow.

To summarize, as we understand their response, the delta between supracompetitive and competitive prices is entirely a function of Qualcomm’s ability to charge supra-FRAND prices for its licenses. On this we agree. But, unlike Lemley et al., we do not agree that Qualcomm is in a position to evade its FRAND pledges by using its strong position in the chipset market and its NLNC policy.

Finally, it must be said again: To the extent that that is the problem — the charging of supra-FRAND prices for licenses — the issue is manifestly a contract issue, not an antitrust one. All of the complexity of the case would fall away, and the litigation would be straightforward. But the opponents of Qualcomm’s practices do not really want to ensure that Qualcomm lowers its royalties by this delta; if they did, they would be bringing/supporting FRAND litigation. What the amici and Qualcomm’s contracting partners appear to want is to use antitrust litigation to force Qualcomm to license its technology at even lower rates — to force Qualcomm into a different business model in order to reset the baseline from which FRAND prices are determined (i.e., at the chip level, rather than at the device level). That may be an intelligible business strategy from the perspective of Qualcomm’s competitors, but it certainly isn’t sensible antitrust policy.

Qualcomm is currently in the midst of a high-profile antitrust case against the FTC. At the heart of these proceedings lies Qualcomm’s so-called “No License, No Chips” (NLNC) policy, whereby it purportedly refuses to sell chips to OEMs that have not concluded a license agreement covering its underlying intellectual property. According to the FTC and Qualcomm’s opponents, this ultimately thwarts competition in the chipset market.

But Qualcomm’s critics fail to convincingly explain how NLNC averts competition — a failing that is particularly evident in the short hypothetical put forward in the amicus brief penned by Mark Lemley, Douglas Melamed, and Steven Salop. This blog post responds to their brief. 

The amici’s hypothetical

In order to highlight the most salient features of the case against Qualcomm, the brief’s authors offer the following stylized example:

A hypothetical example can illustrate how Qualcomm’s strategy increases the royalties it is able to charge OEMs. Suppose that the reasonable royalty Qualcomm could charge OEMs if it licensed the patents separately from its chipsets is $2, and that the monopoly price of Qualcomm’s chips is $18 for an all-in monopoly cost to OEMs of $20. Suppose that a new chipmaker entrant is able to manufacture chipsets of comparable quality at a cost of $11 each. In that case, the rival chipmaker entrant could sell its chips to OEMs for slightly more than $11. An OEM’s all-in cost of buying from the new entrant would be slightly above $13 (i.e., the Qualcomm reasonable license royalty of $2 plus the entrant chipmaker’s price of slightly more than $11). This entry into the chipset market would induce price competition for chips. Qualcomm would still be entitled to its patent royalties of $2, but it would no longer be able to charge the monopoly all-in price of $20. The competition would force Qualcomm to reduce its chipset prices from $18 down to something closer to $11 and its all-in price from $20 down to something closer to $13.

Qualcomm’s NLNC policy prevents this competition. To illustrate, suppose instead that Qualcomm implements the NLNC policy, raising its patent royalty to $10 and cutting the chip price to $10. The all-in cost to an OEM that buys Qualcomm chips will be maintained at the monopoly level of $20. But the OEM’s cost of using the rival entrant’s chipsets now will increase to a level above $21 (i.e., the slightly higher than $11 price for the entrant’s chipset plus the $10 royalty that the OEM pays to Qualcomm). Because the cost of using the entrant’s chipsets will exceed Qualcomm’s all-in monopoly price, Qualcomm will face no competitive pressure to reduce its chipset or all-in prices.

A close inspection reveals that this hypothetical is deeply flawed.

There appear to be five steps in the amici’s reasoning:

  1. Chips and IP are complementary goods that are bought in fixed proportions. So buyers have a single reserve price for both; 
  2. Because of its FRAND pledges, Qualcomm is unable to directly charge a monopoly price for its IP;
  3. But, according to the amici, Qualcomm can obtain these monopoly profits by keeping competitors out of the chipset market [this would give Qualcomm a chipset monopoly and, theoretically at least, enable it to charge the combined (IP + chips) monopoly price for its chips alone, thus effectively evading its FRAND pledges]; 
  4. To keep rivals out of the chipset market, Qualcomm undercuts them on chip prices and recoups its losses by charging supracompetitive royalty rates on its IP.
  5. This is allegedly made possible by the “No License, No Chips” policy, which forces firms to obtain a license from Qualcomm, even when they purchase chips from rivals.

While points 1 and 3 of the amici’s reasoning are uncontroversial, points 2 and 4 are mutually exclusive. This flaw ultimately undermines their entire argument, notably point 5. 

The contradiction between points 2 and 4 is evident. The amici argue (using hypothetical but representative numbers) that its FRAND pledges should prevent Qualcomm from charging more than $2 in royalties per chip (“the reasonable royalty Qualcomm could charge OEMs if it licensed the patents separately from its chipsets is $2”), and that Qualcomm deters entry in the chip market by charging $10 in royalties per chip sold (“raising its patent royalty to $10 and cutting the chip price to $10”).

But these statements cannot both be true. Qualcomm either can or it cannot charge more than $2 in royalties per chip. 

There is, however, one important exception (discussed below): parties can mutually agree to depart from FRAND pricing. But let us momentarily ignore this limitation, and discuss two baseline scenarios: One where Qualcomm can evade its FRAND pledges and one where it cannot. Comparing these two settings reveals that Qualcomm cannot magically increase its profits by shifting revenue from chips to IP.

For a start, if Qualcomm cannot raise the price of its IP beyond the hypothetical FRAND benchmark ($2, in the amici’s hypo), then it cannot use its standard essential technology to compensate for foregone revenue in the chipset market. Any supracompetitive profits that it earns must thus result from its competitive position in the chipset market.

Conversely, if it can raise its IP revenue above the $2 benchmark, then it does not require a strong chipset position to earn supracompetitive profits. 

It is worth unpacking this second point. If Qualcomm can indeed evade its FRAND pledges and charge royalties of $10 per chip, then it need not exclude chipset rivals to obtain supracompetitive profits. 

Take the amici’s hypothetical numbers and assume further that Qualcomm has the same cost as its chipset rivals (i.e. $11), and that there are 100 potential buyers with a uniform reserve price of $20 (the reserve price assumed by the amici). 

As the amici point out, Qualcomm can earn the full monopoly profits by charging $10 for IP and $10 for chips. Qualcomm would thus pocket a total of $900 in profits ((10+10-11)*100). What the amici brief fails to acknowledge is that Qualcomm could also earn the exact same profits by staying out of the chipset market. Qualcomm could let its rivals charge $11 per chip (their cost), and demand $9 for its IP. It would thus earn the same $900 of profits (9*100). 

In this hypothetical, the only reason for Qualcomm to enter the chip market is if it is a more efficient chipset producer than its chipset rivals, or if it can out-compete them with better chipsets. For instance, if Qualcomm’s costs are only $10 per chip, Qualcomm could earn a total of $1000 in profits by driving out these rivals ((10+10-10)*100). Or, if it can produce better chips, though at higher cost and price (say, $12 per chip), it could earn the same $1000 in profits ((10+12-12)*100). Both of the situations would benefit purchasers, of course. Conversely, at a higher production cost of $12 per chip, but without any quality improvement, Qualcomm would earn only $800 in profits ((10+10-12)*100) and would thus do better to exit the chipset market.
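The arithmetic above is simple enough to check mechanically. Here is a minimal sketch using only the numbers already given (100 buyers, a $20 reserve price, and the stated cost assumptions):

```python
# Verifying the profit arithmetic above. All inputs come from the
# hypothetical in the text (100 buyers, $20 uniform reserve price).

BUYERS = 100

def total_profit(ip_price, chip_margin):
    """Qualcomm's profits: (royalty + per-chip margin) x number of buyers."""
    return (ip_price + chip_margin) * BUYERS

print(total_profit(10, 10 - 11))  # $900: NLNC "monopoly" ($10 IP, $10 chips, $11 cost)
print(total_profit(9, 0))         # $900: stay out of chips, charge $9 IP
print(total_profit(10, 10 - 10))  # $1000: enter with a cost advantage ($10 cost)
print(total_profit(10, 12 - 12))  # $1000: enter with better (pricier) chips
print(total_profit(10, 10 - 12))  # $800: enter at a cost disadvantage -- exit is better
```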

Let us recap:

  • If Qualcomm can easily evade its FRAND pledges, then it need not enter the chipset market to earn supracompetitive profits; 
  • If it cannot evade these FRAND obligations, then it will be hard-pressed to leverage its IP bottleneck so as to dominate chipsets. 

The upshot is that Qualcomm would need to benefit from exceptional circumstances in order to improperly leverage its FRAND-encumbered IP and impose anticompetitive harm by excluding its rivals in the chipset market.

The NLNC policy

According to the amici, that exceptional circumstance is the NLNC policy. In their own words:

The competitive harm is a result of the royalty being higher than it would be absent the NLNC policy.

This is best understood by adding an important caveat to our previous hypothetical: The $2 FRAND benchmark of the amici’s hypothetical is only a fallback option that can be obtained via litigation. Parties are thus free to agree upon a higher rate, for instance $10. This could, notably, be the case if Qualcomm offset the IP increase by reducing its chipset price, such that OEMs who purchase both chipsets and IP from Qualcomm were indifferent between contracts with either of the two royalty rates.

At first sight, this caveat may appear to significantly improve the FTC’s case against Qualcomm — it raises the specter of Qualcomm charging predatory prices on its chips and then recouping its losses on IP. But further examination suggests that this is an unlikely scenario.

Though firms may nominally be paying $10 for Qualcomm’s IP and $10 for its chips, there is no escaping the fact that buyers have an outside option in both the IP and chip segments (respectively, litigation to obtain FRAND rates, and buying chips from rivals). As a result, Qualcomm will be unable to charge a total price that is significantly above the price of rivals’ chips, plus the FRAND rate for its IP (and expected litigation costs).

This is where the amici’s hypothetical is most flawed. 

It is one thing to argue that Qualcomm can charge $10 per chipset and $10 per license to firms that purchase all of their chips and IP from it (or, as the amici point out, charge a single price of $20 for the bundle). It is another matter entirely to argue — as the amici do — that Qualcomm can charge $10 for its IP to firms that receive little or no offset in the chip market because they purchase few or no chips from Qualcomm, and who have the option of suing Qualcomm, thus obtaining a license at $2 per chip (if that is, indeed, the maximum FRAND rate). Firms would have to be foolish to ignore this possibility and to acquiesce to contracts at substantially higher rates. 

Indeed, two of the largest and most powerful OEMs — Apple and Samsung — have entered into such contracts with Qualcomm. Given their ability (and, indeed, willingness) to sue for FRAND violations and to produce their own chips or assist other manufacturers in doing so, it is difficult to conclude that they have assented to supracompetitive terms. (The fact that they would prefer even lower rates, and have supported this and other antitrust suits against Qualcomm, doesn’t change this conclusion; it just means they see antitrust as a tool to reduce their costs. And the fact that Apple settled its own FRAND and antitrust suit against Qualcomm (and paid Qualcomm $4.5 billion and entered into a global licensing agreement with it) after just one day of trial further supports this conclusion).

Double counting

The amici attempt to overcome this weakness by implicitly framing their argument in terms of exclusivity, strategic entry deterrence, and tying:

An OEM cannot respond to Qualcomm’s NLNC policy by purchasing chipsets only from a rival chipset manufacturer and obtaining a license at the reasonable royalty level (i.e., $2 in the example). As the district court found, OEMs needed to procure at least some 3G CDMA and 4G LTE chipsets from Qualcomm.

* * *

The surcharge burdens rivals, leads to anticompetitive effects in the chipset markets, deters entry, and impedes follow-on innovation. 

* * *

As an economic matter, Qualcomm’s NLNC policy is analogous to the use of a tying arrangement to maintain monopoly power in the market for the tying product (here, chipsets).

But none of these arguments overcomes the flaw in their reasoning. Indeed, as Aldous Huxley once pointed out, "several excuses are always less convincing than one."

For a start, the amici argue that Qualcomm uses its strong chipset position to force buyers into accepting its supracompetitive IP rates, even in those instances where they purchase chipsets from rivals. 

In making this point, the amici fall prey to the “double counting fallacy” that Robert Bork famously warned about in The Antitrust Paradox: Monopolists cannot simultaneously charge a monopoly price AND purchase exclusivity (or other contractual restrictions) from their buyers/suppliers.

The amici fail to recognize the important sacrifices that Qualcomm would have to make in order for the above strategy to be viable. In simple terms, Qualcomm would have to offset every dollar it charges above the FRAND benchmark in the IP segment with an equivalent price reduction in the chipset segment.

This has important ramifications for the FTC’s case.

Qualcomm would have to charge lower, not higher, IP fees to OEMs who purchased a large share of their chips from third-party chipmakers. Otherwise, there would be no carrot to offset its greater-than-FRAND license fees, and these OEMs would have significant incentives to sue (especially in a post-eBay world, where the threat of an injunction against them is reduced even if they happen to lose).
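A short sketch makes the point, again with illustrative numbers: because the offset is delivered through chip discounts, the net surcharge an OEM actually bears rises as its purchases shift toward rival chipmakers.

```python
# Sketch of why the offset logic implies *lower* IP fees for OEMs that buy
# mostly from rival chipmakers (all numbers illustrative, as before).

surcharge = 8.0      # supra-FRAND portion of a $10 royalty ($/unit)
chip_discount = 8.0  # per-chip discount Qualcomm offers on its own chips

def net_surcharge(qualcomm_chip_share: float) -> float:
    """Per-handset surcharge an OEM bears, net of any chip discount.

    The discount only applies to the share of chips bought from Qualcomm.
    """
    return surcharge - chip_discount * qualcomm_chip_share

print(net_surcharge(1.0))  # 0.0 -- an all-Qualcomm OEM is fully compensated
print(net_surcharge(0.2))  # 6.4 -- a mostly-rival OEM bears most of the surcharge

# To keep the mostly-rival OEM from litigating, Qualcomm would have to cut
# that OEM's royalty, the opposite of the discriminatory surcharge the FTC alleged.
```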

And yet, this is the exact opposite of what the FTC alleged:

Qualcomm sometimes expressly charged higher royalties on phones that used rivals’ chips. And even when it did not, its provision of incentive funds to offset its license fees when OEMs bought its chips effectively resulted in a discriminatory surcharge. (emphasis added)

The infeasibility of alternative explanations

One theoretical workaround would be for Qualcomm to purchase exclusivity from its OEMs, in an attempt to foreclose chipset rivals. 

Once again, Bork's double counting argument suggests that this would be particularly onerous. By accepting exclusivity-type requirements, OEMs would not only be reducing potential competition in the chipset market; they would also be contributing to an outcome where Qualcomm could evade its FRAND pledges in the IP segment of the market. This is particularly true for pivotal OEMs (such as Apple and Samsung), who may single-handedly affect the market's long-term trajectory.

The amici completely overlook this possibility, although the FTC argues that it may explain the rebates that Qualcomm gave to Apple.

But even if the rebates Qualcomm gave Apple amounted to de facto exclusivity, there are still important objections. Authorities would notably need to prove that Qualcomm could recoup its initial losses (i.e., that the rebates maximized Qualcomm's long-term profits). If this were not the case, then the rebates may simply be due to either efficiency considerations or Apple's significant bargaining power (Apple is routinely cited as a potential source of patent holdout; see, e.g., here and here).
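To illustrate the recoupment requirement, here is a stylized net-present-value check; every figure below is invented for the purpose of the example:

```python
# Sketch of the recoupment test mentioned above: rebates are evidence of an
# exclusionary strategy only if the discounted later gains exceed the upfront
# sacrifice (all figures invented for illustration).

upfront_rebate_loss = 1_000_000_000.0  # profits sacrificed via rebates ($)
annual_recoupment = 150_000_000.0      # extra annual profit once rivals exit ($)
discount_rate = 0.10
years = 10

npv_of_recoupment = sum(
    annual_recoupment / (1 + discount_rate) ** t for t in range(1, years + 1)
)

# If the NPV of later gains falls short of the sacrifice, the rebates are more
# plausibly explained by efficiencies or by Apple's bargaining power.
print(npv_of_recoupment < upfront_rebate_loss)  # True with these numbers
```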

Another alternative would be for Qualcomm to exclude its chipset rivals through strategic entry deterrence or limit pricing (see here and here, respectively). But while the economic literature suggests that incumbents may indeed forgo short-term profits in order to deter rivals from entering the market, these theories generally rest on assumptions of imperfect information and/or strategic commitments. Neither of these factors was alleged in the case at hand.

In particular, there is no indication that Qualcomm's purported decision to shift royalties from chips to IP somehow harms its short-term profits, or that it is merely a strategic device used to deter the entry of rivals. As the amici themselves seem to acknowledge, the pricing structure maximizes Qualcomm's short-term revenue (even ignoring potential efficiency considerations).

Note that this is not just a matter of economic policy. The case law relating to unilateral conduct infringements — be it Brooke Group, Alcoa, or Aspen Skiing — almost systematically requires some form of profit sacrifice on the part of the monopolist. (For a legal analysis of this issue in the Qualcomm case, see ICLE’s Amicus brief, and yesterday’s blog post on the topic).

The amici are thus left with the argument that Qualcomm could structure its prices differently, so as to maximize the profits of its rivals. Why it would choose to do so, or should indeed be forced to, is a whole other matter.

Finally, the amici refer to the strategic tying literature (here), typically associated with the Microsoft case and the so-called "platform threat." But this analogy is highly problematic.

Unlike Microsoft and its Internet Explorer browser, Qualcomm's IP is de facto (and necessarily) tied to the chips that practice its technology. This is not a bug; it is a feature of the patent system. Qualcomm is entitled to royalties whether it manufactures chips itself or leaves that task to rival manufacturers. In other words, there is no counterfactual world where OEMs could obtain Qualcomm-based chips without entering into some form of license agreement (whether directly or indirectly) with Qualcomm. The fact that OEMs must acquire a license that covers Qualcomm's IP, even when they purchase chips from rivals, is part and parcel of the IP system.

In any case, there is little reason to believe that Qualcomm’s decision to license its IP at the OEM level is somehow exclusionary. The gist of the strategic tying literature is that incumbents may use their market power in a primary market to thwart entry in the market for a complementary good (and ultimately prevent rivals from using their newfound position in the complementary market in order to overthrow the incumbent in the primary market; Carlton & Waldman, 2002). But this is not the case here.

Qualcomm does not appear to be using what little power it might have in the IP segment in order to dominate its rivals in the chip market. As explained above, doing so would imply some profit sacrifice in the IP segment in order to encourage OEMs to accept its IP/chipset bundle rather than rivals' offerings. This is the exact opposite of what the FTC and amici allege in the case at hand. The facts thus cut against a conjecture of strategic tying.

Conclusion

So where does this leave the amici and their brief? 

Absent further evidence, their conclusion that Qualcomm injured competition is untenable. There is no evidence that Qualcomm’s pricing structure — enacted through the NLNC policy — significantly harmed competition to the detriment of consumers. 

When all is said and done, the amici's brief ultimately amounts to an assertion that Qualcomm should be made to license its intellectual property at a rate that, in their estimation, is closer to the FRAND benchmark. That judgment is a matter of contract law, not antitrust.