
Earlier this year the UK government announced it was adopting the main recommendations of the Furman Report into competition in digital markets and setting up a “Digital Markets Taskforce” to oversee those recommendations being put into practice. The Competition and Markets Authority’s digital advertising market study largely came to similar conclusions (indeed, in places it reads as if the CMA worked backwards from those conclusions).

The Furman Report recommended that the UK should overhaul its competition regime with some quite significant changes to regulate the conduct of large digital platforms and make it harder for them to acquire other companies. But, while the Report’s panel is accomplished and its tone is sober and even-handed, the evidence on which it is based does not justify the recommendations it makes.

Most of the citations in the Report are of news reports or simple reporting of data with no analysis, and there is very little discussion of the relevant academic literature in each area, even to give a summary of it. In some cases, evidence and logic are misused to justify intuitions that are just not supported by the facts.

Killer acquisitions

One particularly bad example is the report’s discussion of mergers in digital markets. The Report provides a single citation to support its proposals on the question of so-called “killer acquisitions” — acquisitions where incumbent firms acquire innovative startups to kill their rival product and avoid competing on the merits. The concern is that these mergers slip under the radar of current merger control either because the transaction is too small, or because the purchased firm is not yet in competition with the incumbent. But the paper the Report cites, by Colleen Cunningham, Florian Ederer and Song Ma, looks only at the pharmaceutical industry. 

The Furman Report says that “in the absence of any detailed analysis of the digital sector, these results can be roughly informative”. But there are several important differences between the drug markets the paper considers and the digital markets the Furman Report is focused on. 

The scenario described in the Cunningham, et al. paper is of a patent holder buying a direct competitor that has come up with a drug that emulates the patent holder’s drug without infringing on the patent. As the Cunningham, et al. paper demonstrates, decreases in development rates are a feature of acquisitions where the acquiring company holds a patent for a similar product that is far from expiry. The closer a patent is to expiry, the less likely an associated “killer” acquisition is. 

But tech typically doesn’t have the clear and predictable IP protections that would make such strategies reliable. The long and uncertain development and approval process involved in bringing a drug to market may also be a factor.

There are many more differences between tech acquisitions and the “killer acquisitions” in pharma that the Cunningham, et al. paper describes. So-called “acqui-hires,” where a company is acquired in order to hire its workforce en masse, are common in tech and are explicitly ruled out as “killers” by the paper, for example: it is not harmful to innovation or output overall if a team is moved to a more productive project after an acquisition. And network effects, although sometimes troubling from a competition perspective, can also make mergers of platforms beneficial for users by growing the size of the platform (because, of course, one of the points of a network is its size).

The Cunningham, et al. paper estimates that 5.3% of pharma acquisitions are “killers”. While that may seem low, some might say it is still 5.3% too many. However, it is not obvious that a merger review authority could bring that number closer to zero without also rejecting more mergers that are good for consumers, making people worse off overall. Moreover, given the number of factors that are specific to pharma and do not apply to tech, it is dubious whether the findings of the paper carry over to the Furman Report’s subject at all. And given how few acquisitions are found to be “killers” in pharma even with all of these conditions present, it seems reasonable to assume that, even if the phenomenon does occur in some tech mergers, it is significantly rarer than the ~5.3% of mergers Cunningham, et al. find in pharma. As a result, the likelihood of erroneously condemning procompetitive mergers is correspondingly higher.
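To make the error-cost logic concrete, consider a back-of-the-envelope calculation (our illustration, not the Report’s or the paper’s; only the 5.3% base rate comes from Cunningham, et al., and the screening-accuracy figures are purely hypothetical):

```python
# Hypothetical illustration of the base-rate problem in merger screening.
# Only the 5.3% base rate comes from Cunningham, et al. (pharma); the
# accuracy figures below are assumptions for the sake of the example.
base_rate = 0.053            # share of acquisitions that are true "killers"
sensitivity = 0.80           # assumed: screen flags 80% of true killers
false_positive_rate = 0.20   # assumed: screen wrongly flags 20% of benign deals

flagged_killers = sensitivity * base_rate
flagged_benign = false_positive_rate * (1 - base_rate)
benign_share_of_blocked = flagged_benign / (flagged_benign + flagged_killers)

print(f"{benign_share_of_blocked:.0%} of blocked mergers would be procompetitive")
# -> roughly 82% under these assumed accuracy figures
```

The lower the true base rate (as we suspect it is in tech), the worse this ratio gets: even a fairly accurate screen would mostly condemn procompetitive deals.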

In any case, there’s a fundamental disconnect between the “killer acquisitions” in the Cunningham, et al. paper and the tech acquisitions described as “killers” in the popular media. Neither Facebook’s acquisition of Instagram nor Google’s acquisition of YouTube, which FTC Commissioner Rohit Chopra recently highlighted, would count, because in neither case was the acquired company “killed.” Nor were any of the other commonly derided tech acquisitions — e.g., Facebook/WhatsApp, Google/Waze, Microsoft/LinkedIn, or Amazon/Whole Foods — “killers,” either.

In all these high-profile cases the acquiring company expanded the acquired service and invested more in it. One may object that these services would have competed with their acquirers had they remained independent, but this is a totally different argument from the scenarios described in the Cunningham, et al. paper, in which development of a new drug is shut down by the acquirer, ostensibly to protect its existing product. It is thus extremely difficult to see how the Cunningham, et al. paper is even relevant to the digital platform context, let alone how it could justify a wholesale revision of the merger regime as applied to digital platforms.

A recent paper (published after the Furman Report) does attempt to survey acquisitions by Google, Amazon, Facebook, Microsoft, and Apple. Out of 175 acquisitions in the 2015-17 period the paper surveys, only one satisfies the Cunningham, et al. paper’s criteria for being a potentially “killer” acquisition — Facebook’s acquisition of a photo sharing app called Masquerade, which had raised just $1 million in funding before being acquired.

In lieu of any actual analysis of mergers in digital markets, the Report falls back on a puzzling logic:

To date, there have been no false positives in mergers involving the major digital platforms, for the simple reason that all of them have been permitted. Meanwhile, it is likely that some false negatives will have occurred during this time. This suggests that there has been underenforcement of digital mergers, both in the UK and globally. Remedying this underenforcement is not just a matter of greater focus by the enforcer, as it will also need to be assisted by legislative change.

This is very poor reasoning. It does not logically follow from the (presumed) existence of false negatives that there has been underenforcement, because overenforcement carries costs as well. Moreover, there are strong reasons to think that false positives in these markets are more costly than false negatives. By analogy, a well-run court system will still fail to convict some criminals, precisely because the cost of accidentally convicting an innocent person is so high; those false negatives are evidence of a sensible trade-off, not of underenforcement.

The UK’s competition authority did commission an ex post review of six historical mergers in digital markets, including Facebook/Instagram and Google/Waze, two of the most controversial in the UK. Although the review suggested that the assessment process could have been done differently, it also highlighted efficiencies that arose from each merger, and it did not conclude that any had led to consumer detriment.

Recommendations

The Report is vague about which mergers it considers to have been uncompetitive, and apart from the aforementioned text it does not really attempt to justify its recommendations around merger control. 

Despite this, the Report recommends a shift to a ‘balance of harms’ approach. Under the current regime, merger review focuses on the likelihood that a merger would reduce competition, which at least gives clarity about the factors to be considered. A ‘balance of harms’ approach would require the potential scale (size) of the merged company to be considered as well.

In principle, this could provide a basis for blocking almost any acquisition by an incumbent firm on ‘scale’ grounds. After all, if a photo-editing app with a sharing timeline can grow into the world’s second-largest social network, how could a competition authority say with any confidence that some other acquisition, however unlikely, might not prevent the emergence of a new platform on a similar scale? It would also make merger review an even more opaque and uncertain process than it currently is, potentially deterring efficiency-raising mergers or leading startups that would like to be acquired to set up and operate overseas instead (or not to be started in the first place).

The treatment of mergers is just one example of the shallowness of the Report. In many other cases — the discussions of concentration and barriers to entry in digital markets, for example — big changes are recommended on the basis of a handful of papers or fewer. Intuition repeatedly trumps evidence and academic research.

The Report’s subject is incredibly broad, of course, and one might argue that such a limited, casual approach is inevitable. In this sense the Report may function perfectly well as an opening brief, introducing the range of potential problems in the digital economy that a rational competition authority might consider addressing. But the complexity and uncertainty of the issues is no reason to eschew rigorous, detailed analysis before determining that a compelling case has been made. Adopting the Report’s assumptions of harm — and in many cases assumption is the most one can call them — and its remedial recommendations on the limited basis it offers is sure to lead to erroneous enforcement of competition law in a way that would reduce, rather than enhance, consumer welfare.

The Wall Street Journal reports that Amazon employees have been using data from individual sellers to identify products to compete against with its own ‘private label’ (or own-brand) lines, such as AmazonBasics, Presto!, and Pinzon.

It’s implausible that this is an antitrust problem, as some have suggested. It’s extremely common for retailers to sell their own private label products and to use data on how other products in their stores have sold to guide their development and marketing. Private label products account for about 14–17% of overall US retail sales, for an estimated 19% of Walmart’s and Kroger’s sales, and for 29% of Costco’s sales of consumer packaged goods.

Amazon, meanwhile, accounts for 39% of US e-commerce spending, but only about 6% of all US retail spending. Any antitrust-based argument against Amazon doing this should apply to Walmart, Kroger, and Costco as well. In other words, the case against Amazon proves too much. Alec Stapp has a good discussion of these and related facts here.

However, it is interesting to think about the underlying incentives facing Amazon here, and in particular why Amazon’s company policy is not to use individual seller data to develop products (rogue employees violating this policy notwithstanding). One possibility is that it is a way for Amazon to balance its competition with some third parties against protections for others that it sees as valuable to its platform overall.

Amazon does use aggregated seller data to develop and market its products. If two or more merchants are selling a product, Amazon’s employees can see how popular it is. This might seem like a trivial distinction, but it might exist for good reason. It could be because sellers of unique products actually do have the bargaining power to demand that Amazon not use their data to compete with them, or it could be for public relations reasons (although, if so, it’s not clear how successful it has been).

But another possibility is that it is a self-imposed restraint. Amazon sells its own private label products partially because doing so is profitable (even when undercutting rivals), partially to fill holes in product lines (like clothing, where 11% of listings were Amazon private label as of November 2018), and partially because users are more likely to shop on Amazon if they expect to find a reliable product from a brand they trust. According to the Journal, private label products account for less than 1% of Amazon’s product sales, in contrast to the 19% of revenues ($54 billion) Amazon makes from third-party seller services, which include Marketplace commissions. Any analysis that ignores the fact that Amazon has to balance those sources of revenue, and so has to tread carefully, is deficient.

With “commodity” products (like, say, batteries and USB cables), where multiple sellers are offering very similar or identical versions of the same thing, private label competition works well for both Amazon and consumers. By Amazon’s own rules it can enter this market using aggregated data, but this doesn’t give it a significant advantage, since that data is easily obtainable from multiple sources, including Amazon itself, which makes detailed aggregated sales data freely available to third-party retailers.
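As a toy illustration of how such an aggregation threshold might work (entirely hypothetical; nothing here reflects Amazon’s actual systems, and the product and seller names are made up), the rule amounts to exposing only aggregate totals for products with at least two distinct sellers:

```python
# Hypothetical sketch of a "two or more sellers" aggregation rule.
# Product-level sales totals become visible only when at least two
# distinct sellers offer the product; unique products stay hidden.
from collections import defaultdict

MIN_SELLERS = 2

def visible_products(sales_records):
    """sales_records: iterable of (product_id, seller_id, units_sold)."""
    sellers = defaultdict(set)
    units = defaultdict(int)
    for product_id, seller_id, units_sold in sales_records:
        sellers[product_id].add(seller_id)
        units[product_id] += units_sold
    # Expose only aggregated totals, and only above the seller threshold.
    return {p: units[p] for p in units if len(sellers[p]) >= MIN_SELLERS}

records = [("usb-cable", "sellerA", 500), ("usb-cable", "sellerB", 300),
           ("trunk-organiser", "sellerC", 400)]
print(visible_products(records))  # {'usb-cable': 800}; the single-seller product stays hidden
```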

But to the extent that Amazon competes against innovative third-party sellers (typically manufacturers doing direct sales, as opposed to pure retailers simply re-selling others’ products), the prospect of having to compete with Amazon may diminish their incentive to develop new products and sell them on Amazon’s platform.

This is the strongest argument made against private label offerings in general. Where an innovator has been collecting above-normal profits, and those profits are what spurred the innovation in the first place, a private label product that comes along and copies the innovative product effectively free rides on the innovation and captures some of its return. That may get us less innovation than society—or a platform trying to host as many innovative products as possible—would like.

While the Journal conflates these two kinds of products, Amazon’s own policies may be tailored specifically to take account of the distinction, and maximise the total value of its marketplace to consumers.

This is nominally the focus of the Journal story: a car trunk organiser company with an (apparently) innovative product says that Amazon’s move to compete with its own AmazonBasics version took away many of its sales. In this sort of situation, the free-rider problem described above might apply, discouraging future innovation. Why bother to invent things like this if you’re just going to have your invention ripped off?

Of course, many such innovations are protected by patents. But there may be valuable innovations that are not, and even patented innovations are not perfectly protected, given the costs of enforcement. A platform like Amazon, however, can adopt rules that fine-tune the protections offered by the legal system in an effort to increase the value of the platform for innovators and consumers alike.

And that may be why Amazon has its rule against using individual seller data to compete: to allow creators of new products to collect more rents from their inventions, with a promise that, unless and until their product is commodified by other means (as indicated by the product being available from multiple other sellers), Amazon won’t compete against such sellers using any special insights it might have from that seller using Amazon’s Marketplace. 

This doesn’t mean Amazon refuses to compete (or refuses to allow others to compete); it has other rules that sometimes determine that boundary, as when it enters into agreements with certain brands to permit sales of the brand on the platform only by sellers authorized by the brand owner. Rather, this rule is a more limited—but perhaps no less important—one that should entice innovators to use Amazon’s platform to sell their products without concern that doing so will create a special risk that Amazon can compete away their returns using information uniquely available to it. In effect, it’s a promise that innovators won’t lose more by choosing to sell on Amazon rather than through other retail channels.

Like other platforms, to maximise its profits Amazon needs to strike a balance between being an attractive place for third party merchants to sell their goods, and being attractive to consumers by offering as many inexpensive, innovative, and reliable products as possible. Striking that balance is challenging, but a rule that restrains the platform from using its unique position to expropriate value from innovative sellers helps to protect the income of genuinely innovative third parties, and induces them to sell products consumers want on Amazon, while still allowing Amazon (and third-party sellers) to compete with commodity products. 

The fact that Amazon has strong competition online and offline certainly acts as an important constraint here, too: if Amazon behaved too badly, third parties might not sell on it at all, and Amazon would have none of the seller data that is allegedly so valuable to it.

But even in a world where Amazon had a huge, sticky customer base that meant it was not an option to sell elsewhere—which the Journal article somewhat improbably implies—Amazon would still need third parties to innovate and sell things on its platform. 

What the Journal story really seems to demonstrate is the sort of genuine principal-agent problem that all large businesses face: the company as a whole needs to restrain its private label section in various respects, but its agents in the private label section want to break those rules to maximise their personal performance (in this case, by launching a successful new AmazonBasics product). It’s like a rogue trader at a bank who breaks the rules to make herself look good by, she hopes, getting good results.

This is just one of many rules that a platform like Amazon uses to preserve the value of its platform. It’s probably not the most important one. But understanding why it exists may help us to understand why simple stories of platform predation don’t add up, and help to demonstrate the mechanisms that companies like Amazon use to maximise the total value of their platform, not just one part of it.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Ramaz Samrout, (Principal, REIM Strategies; Lay Member, Competition Tribunal of Canada)]

At a time when nations are engaged in bidding wars in the worldwide market to alleviate the shortages of critical medical necessities for the Covid-19 crisis, it certainly raises the question: have free trade and competition policies, and the efficient, globally integrated market networks they produced, gone too far? Did economists and policy makers advocating for efficient competitive markets not foresee a failure of the supply chain in meeting a surge in demand during an inevitable global crisis such as this one?

The failures in securing medical supplies have escalated a global health crisis to geopolitical spats fuelled by strong nationalistic public sentiments. In the process of competing to acquire highly treasured medical equipment, governments are confiscating, outbidding, and diverting shipments at the risk of not adhering to the terms of established free trade agreements and international trading rules, all at the cost of the humanitarian needs of other nations.

Since the start of the Covid-19 crisis, all levels of government in Canada have been working on diversifying the supply chain for critical equipment both domestically and internationally. But, most importantly, these governments are bolstering domestic production and an integrated domestic supply network recognizing the increasing likelihood of tightening borders impacting the movement of critical products.

For the past 3 weeks in his daily briefings, Canada’s Prime Minister, Justin Trudeau, has repeatedly confirmed the Government’s support of domestic enterprises that are switching their manufacturing lines to produce critical medical supplies and of other “made in Canada” products.

As conditions worsen in the US and the White House hardens its position towards collaboration and sharing for the greater global humanitarian good—even in the presence of a recent bilateral agreement to keep the movement of essential goods fluid—Canada’s response has become more retaliatory, shifting to a message emphasizing that the need for “made in Canada” products is one of extreme urgency.

On April 3rd, President Trump ordered Minnesota-based 3M to stop exporting medical-grade masks to Canada and Latin America, a decision enabled by the triggering of the 1950 Defense Production Act. In response, Ontario Premier Doug Ford stated in his public address:

Never again in the history of Canada should we ever be beholden to companies around the world for the safety and wellbeing of the people of Canada. There is nothing we can’t build right here in Ontario. As we get these companies round up and we get through this, we can’t be going over to other sources because we’re going to save a nickel.

Premier Ford’s words ring true for many Canadians as they watch this crisis unfold and wonder where it would stop if the crisis worsens. Will our neighbour to the south block shipments of a Covid-19 vaccine when one is developed? Will the restrictions extend to other essential goods such as food or medicine?

There are reports that the decline in the number of foreign workers in farming caused by travel restrictions and quarantine rules in both Canada and the US will cause food production shortages, which makes the actions of the White House very unsettling for Canadians.  Canada’s exports to the US constitute 75% of total Canadian exports, while imports from the US constitute 46%. Canada’s imports of food and beverages from the US were valued at US $24 billion in 2018 including: prepared foods, fresh vegetables, fresh fruits, other snack foods, and non-alcoholic beverages.

The length and depth of the crisis will determine to what extent the US and Canadian markets will experience shortages in products. For Canada, the severity of the pandemic in the US could result in further restrictions on the border. And it is becoming progressively more likely that it will also result in a significant reduction in the volume of necessities crossing the border between the two nations.

Increasingly, the depth and pain experienced from shortages in necessities will shape public sentiment towards free trade and strengthen mainstream demands for more nationalistic and protectionist policies. This will result in more pressure on political and government establishments to take action.

The reliance on free trade and competition policies favouring highly integrated supply chain networks is showing cracks in meeting national interests in this time of crisis. This goes well beyond the usual economic points of contention between countries: domestic employment, job losses, and resource allocation. The needed correction, however, risks moving the pendulum too far towards protectionism.

Free trade setbacks and global integration disruptions would become the new economic reality, ensuring that domestic self-sufficiency comes first. A new trade trend has been set in motion, and there is no going back from some level of disintegration of globalised supply chains.

How would domestic self-sufficiency be achieved? 

Would international conglomerates build local plants and forgo their profit maximizing strategies of producing in growing economies that offer cheap wages and resources in order to avoid increased protectionism?

Will the Canada-United States-Mexico Agreement (CUSMA), known as the “new NAFTA,” which has yet to be put into effect, be renegotiated to allow for measures securing the domestic production of necessities in the form of higher tariffs, trade quotas, and state subsidies?

Are advanced capitalist economies willing to create state-owned industries to produce domestically what they deem necessities?

Many other trade policy variations and options focused on protectionism are possible which could lead to the creation of domestic monopolies. Furthermore, any return to protected national production networks will reduce consumer welfare and eventually impede technological advancements that result from competition. 

Divergence between free trade agreements and competition policy in a new era of protectionism

For the past 30 years, national competition laws and policies have increasingly become an integrated part of free trade agreements, albeit in the form of soft competition law language, making references to the parties’ respective competition laws, and the need for transparency, procedural fairness in enforcement, and cooperation.

Similarly, free trade objectives and frameworks have become part of the design and implementation of competition legislation and, subsequently, case law, both of which are intended to encourage competitive market systems and efficiency, an implied by-product of open markets.

In that regard, the competition legal framework in Canada, the Competition Act, seeks to maintain and strengthen competitive market forces by encouraging maximum efficiency in the use of economic resources. Provisions to determine the level of competitiveness in the market consider barriers to entry, among them tariff and non-tariff barriers to international trade. These provisions further direct adjudicators to examine free trade agreements currently in force and their role in facilitating the current or future possibility of an international competitor entering the market to preserve or increase competition. They also direct an assessment of the extent of any increase in the real value of exports, or of the substitution of domestic products for imported products.

It is evident in the design of free trade agreements and competition legislation that efficiency, competition in price, and diversification of products are to be achieved through access to imported goods and by encouraging the creation of globally competitive suppliers.

Therefore, the re-emergence of protectionist nationalistic measures in international trade will result in a divergence between competition laws and free trade agreements. Such setbacks would leave competition enforcers, administrators, and adjudicators grappling with the conflict between the economic principles set out in competition law and the policy objectives that could be stipulated in future trade agreements. 

The challenge ahead facing governments and industries is how to correct for the cracks in the current globalized competitive supply networks that have been revealed during this crisis without falling into a trap of nationalism and protectionism.

This is the fourth, and last, in a series of TOTM blog posts discussing the Commission’s recently published Google Android decision (the first post can be found here, the second here, and the third here). It draws on research from a soon-to-be-published ICLE white paper.

The previous parts of this series have mostly focused on the Commission’s factual and legal conclusions. However, as this blog post points out, the case’s economic underpinnings also suffer from important weaknesses.

Two problems are particularly salient: First, the economic models cited by the Commission (discussed in an official paper, but not directly in the decision) poorly match the underlying facts. Second, the Commission’s conclusions on innovation harms are out of touch with the abundant economic literature regarding the potential link between market structure and innovation.

The wrong economic models

The Commission’s Chief Economist team outlined its economic reasoning in an article released shortly after the Android decision was published. The article reveals that the Commission relied upon three economic papers to support its conclusion that Google’s tying harmed consumer welfare.

Each of these three papers attempts to address the same basic problem. Ever since the rise of the Chicago School, it has been widely accepted that a monopolist cannot automatically raise its profits by entering an adjacent market (i.e. leveraging its monopoly position), for instance through tying. This has sometimes been called the single-monopoly-profit theory. In more recent years, various scholars have refined this Chicago School intuition and identified instances where the theory fails.

While the single monopoly profit theory has been criticized in academic circles, it is important to note that the three papers cited by the Commission accept its basic premise. They thus attempt to show why the theory fails in the context of the Google Android case. 

Unfortunately, the assumptions upon which they rely to reach this conclusion markedly differ from the case’s fact pattern. These papers thus offer little support to the Commission’s economic conclusions.

For a start, the authors of the first paper cited by the Commission concede that their own model does not apply to the Google case:

Actual antitrust cases are fact-intensive and our model does not perfectly fit with the current Google case in one important aspect.

The authors thus rely on important modifications, lifted from a paper by Federico Etro and Cristina Caffarra (the second paper cited by the Commission), to support their conclusion that Google’s tying was anticompetitive.

That second paper, however, is equally problematic.

The authors’ underlying intuition is relatively straightforward: because Google bundles its suite of Google apps (including Search) with the Play Store, a rival search engine would have to pay a premium in order to be pre-installed and placed on the home screen, since OEMs would have to entirely forgo Google’s suite of applications. The key assumption here is that OEMs cannot obtain the Google Play app and still pre-install and favorably place a rival search app.

But this is simply not true of Google’s contractual terms. The best evidence is that rival search apps have indeed concluded deals with OEMs to be pre-installed, without those OEMs losing access to Google’s suite of proprietary apps. Google’s contractual terms simply do not force OEMs to choose between the Google Play app and the pre-installation of a rival search app. Etro and Caffarra’s model thus falls flat.

More fundamentally, even if Google’s contractual terms did prevent OEMs from pre-loading rival apps, the paper’s conclusions would still be deeply flawed. The authors essentially assume that the only way for consumers to obtain a rival app is through pre-installation. But this is a severe misreading of the prevailing market conditions. 

Users remain free to independently download rival search apps. If Google did indeed purchase exclusive pre-installation, users would not have to choose between a “full Android” device and one with a rival search app but none of Google’s apps. Instead, they could download the rival app and place it alongside Google’s applications. 

A more efficient rival could even provide side payments of some sort to encourage consumers to download its app. Exclusive pre-installation thus generates a much smaller advantage than Etro and Caffarra assume, and their model fails to reflect this.

Finally, the third paper, by Alexandre de Cornière and Greg Taylor, suffers from the exact same problem. The authors clearly acknowledge that their findings only hold if OEMs (and consumers) are effectively prevented from (pre-)installing applications that compete with Google’s apps. In their own words:

Upstream firms offer contracts to the downstream firm, who chooses which component(s) to use and then sells to consumers. For our theory to apply, the following three conditions need to hold: (i) substitutability between the two versions of B leads the downstream firm to install at most one version.

The upshot is that all three of the economic models cited by the Commission cease to be relevant in the specific context of the Google Android decision. The Commission is thus left with little to no economic evidence to support its finding of anticompetitive effects.

Critics might argue that direct downloads by consumers are but a theoretical possibility. Yet nothing could be further from the truth. Take the web browser market: the Samsung Internet Browser has more than 1 billion downloads on Google’s Play Store. The Opera, Opera Mini and Firefox browsers each have over 100 million downloads. The Brave browser has more than 10 million downloads and is growing rapidly.

In short, the economic papers on which the Commission relies are based on a world that does not exist. They thus fail to support the Commission’s economic findings.

An incorrect view of innovation

In its decision, the Commission repeatedly claimed that Google’s behavior stifled innovation because it prevented rivals from entering the market. However, the Commission offered no evidence to support its assumption that reduced market entry would lead to a decrease in innovation:

(858) For the reasons set out in this Section, the Commission concludes that the tying of the Play Store and the Google Search app helps Google to maintain and strengthen its dominant position in each national market for general search services, increases barriers to entry, deters innovation and tends to harm, directly or indirectly, consumers.

(859) First, Google’s conduct makes it harder for competing general search services to gain search queries and the respective revenues and data needed to improve their services.

(861) Second, Google’s conduct increases barriers to entry by shielding Google from competition from general search services that could challenge its dominant position in the national markets for general search services:

(862) Third, by making it harder for competing general search services to gain search queries including the respective revenues and data needed to improve their services, Google’s conduct reduces the incentives of competing general search services to invest in developing innovative features, such as innovation in algorithm and user experience design.

In a nutshell, the Commission’s findings rest on the assumption that barriers to entry and more concentrated market structures necessarily reduce innovation. But this assertion is not supported by the empirical economic literature on the topic.

For example, a 2006 paper published by Richard Gilbert surveys 24 empirical studies on the topic. These studies examine the link between market structure (or firm size) and innovation. Though earlier studies tended to identify a positive relationship between concentration, as well as firm size, and innovation, more recent empirical techniques found no significant relationship. Gilbert thus suggests that:

These econometric studies suggest that whatever relationship exists at a general economy-wide level between industry structure and R&D is masked by differences across industries in technological opportunities, demand, and the appropriability of inventions.

This intuition is confirmed by another high-profile empirical paper by Aghion, Bloom, Blundell, Griffith, and Howitt. The authors identify an inverted-U relationship between competition and innovation. Perhaps more importantly, they point out that this relationship is affected by a number of sector-specific factors.

Finally, reviewing fifty years of research on innovation and market structure, Wesley Cohen concludes that:

Even before one controls for industry effects, the variance in R&D intensity explained by market concentration is small. Moreover, whatever relationship that exists in cross sections becomes imperceptible with the inclusion of controls for industry characteristics, whether expressed as industry fixed effects or in the form of survey-based and other measures of industry characteristics such as technological opportunity, appropriability conditions, and demand. In parallel to a decades-long accumulation of mixed results, theorists have also spawned an almost equally voluminous and equivocal literature on the link between market structure and innovation.

The Commission’s stance is further weakened by the fact that investments in the Android operating system are likely affected by a weak appropriability regime. In other words, because of its open source nature, it is hard for Google to earn a return on investments in the Android OS (anyone can copy, modify and offer their own version of the OS). 

Loosely tying Google’s proprietary applications to the OS is arguably one way to solve this appropriability problem. Unfortunately, the Commission brushed these considerations aside. It argued that Google could earn some revenue from the Google Play app, as well as through other potential avenues. However, the Commission did not question whether these sources of income were even comparable to the sums invested by Google in the Android OS. It is thus possible that the Commission’s decision will prevent Google from earning a positive return on some future investments in the Android OS, ultimately causing it to cut back its investments and slowing innovation.

The upshot is that the Commission was simply wrong to assume that barriers to entry and more concentrated market structures would necessarily reduce innovation. This is especially true, given that Google may struggle to earn a return on its investments, absent the contractual provisions challenged by the Commission.

Conclusion

In short, the Commission’s economic analysis was severely lacking. It relied on economic models that had little to say about the market in which Google and its rivals operated. Its decision thus reveals the inherent risk of basing antitrust decisions upon overfitted economic models.

As if that were not enough, the Android decision also misrepresents the economic literature concerning the link (or absence thereof) between market structure and innovation. As a result, there is no reason to believe that Google’s behavior reduced innovation.

The Department of Justice began its antitrust case against IBM on January 17, 1969. The DOJ sued under the Sherman Antitrust Act, claiming IBM tried to monopolize the market for “general-purpose digital computers.” The case lasted almost thirteen years, ending on January 8, 1982 when Assistant Attorney General William Baxter declared the case to be “without merit” and dropped the charges. 

The case lasted so long, and expanded in scope so much, that by the time the trial began, “more than half of the practices the government raised as antitrust violations were related to products that did not exist in 1969.” Baltimore law professor Robert Lande said it was “the largest legal case of any kind ever filed.” Yale law professor Robert Bork called it “the antitrust division’s Vietnam.”

As the case dragged on, IBM was faced with increasingly perverse incentives. As NYU law professor Richard Epstein pointed out (emphasis added), 

Oddly enough, IBM was able to strengthen its antitrust-related legal position by reducing its market share, which it achieved through raising prices. When the suit was discontinued that share had fallen dramatically since 1969 from about 50 percent of the market to 37 percent in 1982. Only after the government suit ended did IBM lower its prices in order to increase market share.

Source: Levy & Welzer

In an interview with Vox, Tim Wu claimed that without the IBM case, Apple wouldn’t exist and we might still be using mainframe computers (emphasis added):

Vox: You said that Apple wouldn’t exist without the IBM case.

Wu: Yeah, I did say that. The case against IBM took 13 years and we didn’t get a verdict but in that time, there was the “policeman at the elbow” effect. IBM was once an all-powerful company. It’s not clear that we would have had an independent software industry, or that it would have developed that quickly, the idea of software as a product, [without this case]. That was one of the immediate benefits of that excavation.

And then the other big one is that it gave a lot of room for the personal computer to get started, and the software that surrounds the personal computer — two companies came in, Apple and Microsoft. They were sort of born in the wake of the IBM lawsuit. You know they were smart guys, but people did need the pressure off their backs.

Nobody is going to start in the shadow of Facebook and get anywhere. Snap’s been the best, but how are they doing? They’ve been halted. I think it’s a lot harder to imagine this revolutionary stuff that happened in the ’80s. If IBM had been completely unwatched by regulators, by enforcement, doing whatever they wanted, I think IBM would have held on and maybe we’d still be using mainframes, or something — a very different situation.

Steven Sinofsky, a former Microsoft executive and current Andreessen Horowitz board partner, had a different take on the matter, attributing IBM’s (belated) success in PCs to its utter failure in minicomputers (emphasis added):

IBM chose to prevent third parties from interoperating with mainframes sometimes at crazy levels (punch card formats). And then chose to defend until the end their business model of leasing … The minicomputer was a direct threat not because of technology but because of those attributes. I’ve heard people say IBM went into PCs because the antitrust loss caused them to look for growth or something. Ha. PCs were spun up because IBM was losing Minis. But everything about the PC was almost a fluke organizationally and strategically. The story of IBM regulation is told as though PCs exist because of the case.

The more likely story is that IBM got swamped by the paradigm shift from mainframes to PCs. IBM was dominant in mainframe computers which were sold to the government and large enterprises. Microsoft, Intel, and other leaders in the PC market sold to small businesses and consumers, which required an entirely different business model than IBM was structured to implement.

ABB – Always Be Bundling (Or Unbundling)

“There’s only two ways I know of to make money: bundling and unbundling.” – Jim Barksdale

In 1969, IBM unbundled its software and services from hardware sales. As many industry observers note, this action precipitated the rise of the independent software development industry. But would this have happened regardless of whether there was an ongoing antitrust case? Given that bundling and unbundling is ubiquitous in the history of the computer industry, the answer is likely yes.

As the following charts show, IBM first created an integrated solution in the mainframe market, controlling everything from raw materials and equipment to distribution and service. When PCs disrupted mainframes, the entire value chain was unbundled. Later, Microsoft bundled its operating system with applications software. 

Source: Clayton Christensen

The first smartphone to disrupt the PC market was the Apple iPhone — an integrated solution. And once the technology became “good enough” to meet the average consumer’s needs, Google modularized everything except the operating system (Android) and the app store (Google Play).

Source: SlashData
Source: Jake Nielson

Another key prong in Tim Wu’s argument that the government served as an effective “policeman at the elbow” in the IBM case is that the company adopted an open model when it entered the PC market and did not require an exclusive license from Microsoft to use its operating system. But exclusivity is only one term in a contract negotiation. In an interview with Playboy magazine in 1994, Bill Gates explained how he was able to secure favorable terms from IBM (emphasis added):

Our restricting IBM’s ability to compete with us in licensing MS-DOS to other computer makers was the key point of the negotiation. We wanted to make sure only we could license it. We did the deal with them at a fairly low price, hoping that would help popularize it. Then we could make our move because we insisted that all other business stay with us. We knew that good IBM products are usually cloned, so it didn’t take a rocket scientist to figure out that eventually we could license DOS to others. We knew that if we were ever going to make a lot of money on DOS it was going to come from the compatible guys, not from IBM. They paid us a fixed fee for DOS. We didn’t get a royalty, even though we did make some money on the deal. Other people paid a royalty. So it was always advantageous to us, the market grew and other hardware guys were able to sell units.

In this version of the story, IBM refrained from demanding an exclusive license from Microsoft not because it was fearful of antitrust enforcers but because Microsoft made significant concessions on price and capped its upside by agreeing to a fixed fee rather than a royalty. These economic and technical explanations for why IBM wasn’t able to leverage its dominant position in mainframes into the PC market are more consistent with the evidence than Wu’s “policeman at the elbow” theory.

In my next post, I will discuss the other major antitrust case that came to an end in 1982: AT&T.

Big Tech continues to be mired in “a very antitrust situation,” as President Trump put it in 2018. Antitrust advocates have zeroed in on Facebook, Google, Apple, and Amazon as their primary targets. These advocates justify their proposals by pointing to the trio of antitrust cases against IBM, AT&T, and Microsoft. Elizabeth Warren, in announcing her plan to break up the tech giants, highlighted the case against Microsoft:

The government’s antitrust case against Microsoft helped clear a path for Internet companies like Google and Facebook to emerge. The story demonstrates why promoting competition is so important: it allows new, groundbreaking companies to grow and thrive — which pushes everyone in the marketplace to offer better products and services.

Tim Wu, a law professor at Columbia University, summarized the overarching narrative recently (emphasis added):

If there is one thing I’d like the tech world to understand better, it is that the trilogy of antitrust suits against IBM, AT&T, and Microsoft played a major role in making the United States the world’s preeminent tech economy.

The IBM-AT&T-Microsoft trilogy of antitrust cases each helped prevent major monopolists from killing small firms and asserting control of the future (of the 80s, 90s, and 00s, respectively).

A list of products and firms that owe at least something to the IBM-AT&T-Microsoft trilogy.

(1) IBM: software as product, Apple, Microsoft, Intel, Seagate, Sun, Dell, Compaq

(2) AT&T: Modems, ISPs, AOL, the Internet and Web industries

(3) Microsoft: Google, Facebook, Amazon

Wu argues that by breaking up the current crop of dominant tech companies, we can sow the seeds for the next one. But this reasoning depends on an incorrect — albeit increasingly popular — reading of the history of the tech industry. Entrepreneurs take purposeful action to produce innovative products for an underserved segment of the market. They also respond to broader technological change by integrating or modularizing different products in their market. This bundling and unbundling is a never-ending process.

Whether the government distracts a dominant incumbent with a failed lawsuit (e.g., IBM), imposes an ineffective conduct remedy (e.g., Microsoft), or breaks up a government-granted national monopoly into regional monopolies (e.g., AT&T), the dynamic nature of competition between tech companies will far outweigh the effects of antitrust enforcers tilting at windmills.

In a series of posts for Truth on the Market, I will review the cases against IBM, AT&T, and Microsoft and discuss what we can learn from them. In this introductory article, I will explain the relevant concepts necessary for understanding the history of market competition in the tech industry.

Competition for the Market

In industries like tech that tend toward “winner takes most,” it’s important to distinguish between competition during the market maturation phase — when no clear winner has emerged and the technology has yet to be widely adopted — and competition after the technology has been diffused in the economy. Benedict Evans recently explained how this cycle works (emphasis added):

When a market is being created, people compete at doing the same thing better. Windows versus Mac. Office versus Lotus. MySpace versus Facebook. Eventually, someone wins, and no-one else can get in. The market opportunity has closed. Be, NeXT/Path were too late. Monopoly!

But then the winner is overtaken by something completely different that makes it irrelevant. PCs overtook mainframes. HTML/LAMP overtook Win32. iOS & Android overtook Windows. Google overtook Microsoft.

Tech antitrust too often wants to insert a competitor to the winning monopolist, when it’s too late. Meanwhile, the monopolist is made irrelevant by something that comes from totally outside the entire conversation and owes nothing to any antitrust interventions.

In antitrust parlance, this is known as competing for the market. By contrast, in more static industries where the playing field doesn’t shift so radically and the market doesn’t tip toward “winner take most,” firms compete within the market. What Benedict Evans refers to as “something completely different” is often a disruptive product.

Disruptive Innovation

As Clay Christensen explains in the Innovator’s Dilemma, a disruptive product is one that is low-quality (but fast-improving), low-margin, and targeted at an underserved segment of the market. Initially, it is rational for the incumbent firms to ignore the disruptive technology and focus on improving their legacy technology to serve high-margin customers. But once the disruptive technology improves to the point it can serve the whole market, it’s too late for the incumbent to switch technologies and catch up. This process looks like overlapping s-curves:

Source: Max Mayblum

We see these S-curves in the technology industry all the time:

Source: Benedict Evans
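For intuition, the overlapping curves are easy to sketch as logistic functions. The following is our own toy illustration with made-up parameters (it reproduces neither chart above): the disruptor starts out behind but has the higher performance ceiling, so the curves eventually cross:

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 20, 400)

def s_curve(t, midpoint, ceiling):
    """Logistic curve: slow start, rapid middle, saturation at `ceiling`."""
    return ceiling / (1 + np.exp(-(t - midpoint)))

incumbent = s_curve(t, midpoint=5, ceiling=1.0)   # legacy technology
disruptor = s_curve(t, midpoint=12, ceiling=2.5)  # starts behind, overtakes

plt.plot(t, incumbent, label="Incumbent technology")
plt.plot(t, disruptor, label="Disruptive technology")
plt.xlabel("Time")
plt.ylabel("Performance / adoption")
plt.legend()
plt.show()
```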

As Christensen explains in the Innovator’s Solution, consumer needs can be thought of as “jobs-to-be-done.” Early on, when a product is just good enough to get a job done, firms compete on product quality and pursue an integrated strategy — designing, manufacturing, and distributing the product in-house. As the underlying technology improves and the product overshoots the needs of the jobs-to-be-done, products become modular and the primary dimension of competition moves to cost and convenience. As this cycle repeats itself, companies are either bundling different modules together to create more integrated products or unbundling integrated products to create more modular products.

Moore’s Law

Source: Our World in Data

Moore’s Law is the gasoline that gets poured on the fire of technology cycles. Though this “law” is nothing more than the observation that “the number of transistors in a dense integrated circuit doubles about every two years,” the implications for dynamic competition are difficult to overstate. As Bill Gates explained in a 1994 interview with Playboy magazine, Moore’s Law means that computer power is essentially “free” from an engineering perspective:

When you have the microprocessor doubling in power every two years, in a sense you can think of computer power as almost free. So you ask, Why be in the business of making something that’s almost free? What is the scarce resource? What is it that limits being able to get value out of that infinite computing power? Software.

Exponentially smaller integrated circuits can be combined with new user interfaces and networks to create new computer classes, which themselves represent the opportunity for disruption.
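The arithmetic of repeated doubling is worth spelling out (a quick back-of-the-envelope sketch of our own):

```python
# Moore's observation as compound growth: transistor counts double
# roughly every two years, so the growth factor after `years` is:
def moores_law_factor(years, doubling_period=2):
    return 2 ** (years / doubling_period)

print(moores_law_factor(10))  # one decade: 32x
print(moores_law_factor(20))  # two decades: 1024x
```

A thousand-fold gain every two decades is what opens the door to each new computer class described below.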

Bell’s Law of Computer Classes

Source: Brad Campbell

A corollary to Moore’s Law, Bell’s law of computer classes predicts that “roughly every decade a new, lower priced computer class forms based on a new programming platform, network, and interface resulting in new usage and the establishment of a new industry.” Originally formulated in 1972, the law has been borne out in the birth of mainframes, minicomputers, workstations, personal computers, laptops, smartphones, and the Internet of Things.

Understanding these concepts — competition for the market, disruptive innovation, Moore’s Law, and Bell’s Law of Computer Classes — will be crucial for understanding the true effects (or lack thereof) of the antitrust cases against IBM, AT&T, and Microsoft. In my next post, I will look at the DOJ’s (ultimately unsuccessful) 13-year antitrust battle with IBM.

Qualcomm is currently in the midst of a high-profile antitrust case brought by the FTC. At the heart of these proceedings lies Qualcomm’s so-called “No License, No Chips” (NLNC) policy, whereby it purportedly refuses to sell chips to OEMs that have not concluded a license agreement covering its underlying intellectual property. According to the FTC and Qualcomm’s opponents, this ultimately thwarts competition in the chipset market.

Against this backdrop, Mark Lemley, Douglas Melamed, and Steven Salop penned a high-profile amicus brief supporting the FTC’s stance. 

We responded to their brief in a Truth on the Market blog post, and this led to a series of blog exchanges between the amici and ourselves. 

This post summarizes these exchanges.

1. Amicus brief supporting the FTC’s stance, and ICLE brief in support of Qualcomm’s position

The starting point of this blog exchange was an amicus brief written by Mark Lemley, Douglas Melamed, and Steven Salop (“the amici”), and signed by 40 law and economics scholars.

The amici made two key normative claims:

  • Qualcomm’s no license, no chips policy is unlawful under well-established antitrust principles: 
    “Qualcomm uses the NLNC policy to make it more expensive for OEMs to purchase competitors’ chipsets, and thereby disadvantages rivals and creates artificial barriers to entry and competition in the chipset markets.”
  • Qualcomm’s refusal to license chip-set rivals reinforces the no license, no chips policy and violates the antitrust laws:
    “Qualcomm’s refusal to license chipmakers is also unlawful, in part because it bolsters the NLNC policy. In addition, Qualcomm’s refusal to license chipmakers increases the costs of using rival chipsets, excludes rivals, and raises barriers to entry even if NLNC is not itself illegal.”

It is important to note that ICLE also filed an amicus brief in these proceedings. Contrary to the amici, ICLE’s scholars concluded that Qualcomm’s behavior did not raise any antitrust concerns and was ultimately a matter of contract law.

2. ICLE response to the Lemley, Melamed, and Salop amicus brief

We responded to the amici in a first blog post.

The post argued that the amici failed to convincingly show that Qualcomm’s NLNC policy was exclusionary. We notably highlighted two important factors.

  • First, Qualcomm could not use its chipset position and NLNC policy to avert the threat of FRAND litigation and thereby extract supracompetitive royalties:
    “Qualcomm will be unable to charge a total price that is significantly above the price of rivals’ chips, plus the FRAND rate for its IP (and expected litigation costs).”
  • Second, Qualcomm’s behavior did not appear to fall within standard patterns of strategic behavior:
    “The amici attempt to overcome this weakness by implicitly framing their argument in terms of exclusivity, strategic entry deterrence, and tying […]. But none of these arguments totally overcomes the flaw in their reasoning.”

3. Amici’s counterargument 

The amici wrote a thoughtful response to our post. Their piece rested on two main arguments:

  • The amici underlined that their theory of anticompetitive harm did not imply any form of profit sacrifice on Qualcomm’s part (in the chip segment):
    “Manne and Auer seem to think that the concern with the no license/no chips policy is that it enables inflated patent royalties to subsidize a profit sacrifice in chip sales, as if the issue were predatory pricing in chips. But there is no such sacrifice.”
  • The deleterious effects of Qualcomm’s behavior were merely a function of its NLNC policy and strong chipset position. In conjunction, these two factors deterred OEMs from pursuing FRAND litigation:
    “Qualcomm is able to charge more than $2 for the license only because it uses the power of its chip monopoly to coerce the OEMs to give up the option of negotiating in light of the otherwise applicable constraints on the royalties it can charge.”

4. ICLE rebuttal

We then responded to the amici with the following points:

  • We agreed that it would be a problem if Qualcomm could prevent OEMs from negotiating license agreements in the shadow of FRAND litigation:
    “The critical question is whether there is a realistic threat of litigation to constrain the royalties commanded by Qualcomm (we believe that Lemley et al. agree with us on this point).”
  • However, Qualcomm’s behavior did not preclude OEMs from pursuing this type of strategy:
    “We believe the following facts support our assertion:
    OEMs have pursued various litigation strategies in order to obtain lower rates on Qualcomm’s IP. […]
    For the most part, Qualcomm’s threats to cut off chip supplies were just that: threats. […]
    OEMs also wield powerful threats. […]
    Qualcomm’s chipsets might no longer be “must-buys” in the future.”

5. Amici’s surrebuttal

The amici sent us a final response (reproduced here in full):

In their original post, Manne and Auer argued that the antitrust argument against Qualcomm’s no license/no chips policy was based on bad economics and bad law. They now seem to have abandoned that argument and claim instead – contrary to the extensive factual findings of the district court – that, while Qualcomm threatened to cut off chips, it was a paper tiger that OEMs could, and knew they could, ignore. The implication is that the Ninth Circuit should affirm the district court on the no license/no chips issue unless it sets aside the court’s fact findings. That seems like agreement with the position of our amicus brief.

We will not in this post review the huge factual record. We do note, however, that Manne and Auer cite in support of their factual argument only that 3 industry giants brought and then settled litigation against Qualcomm. But all 3 brought antitrust litigation; their doing so hardly proves that contract litigation or what Manne and Auer call “holdout” were viable options for anyone, much less for smaller OEMs. The fact that Qualcomm found it necessary to actually cut off only one OEM – and that it took the OEM only 7 days to capitulate – certainly does not prove that Qualcomm’s threats lacked credibility. Notably, Manne and Auer do not claim that any OEMs bought chips from competitors of Qualcomm (although Apple bought some chips from Intel for a short while). No license/no chips appears to have been a successful, coercive policy, not an easily ignored threat.

6. Concluding remarks

First and foremost, we would like to thank the amici for thoughtfully engaging with us. This is what the law & economics tradition is all about: moving the ball forward by taking part in vigorous, multidisciplinary debates.

With that said, we do feel compelled to leave readers with two short remarks. 

First, contrary to what the amici claim, we believe that our position has remained the same throughout these debates. 

Second, and more importantly, we think that everyone agrees that the critical question is whether OEMs were prevented from negotiating licenses in the shadow of FRAND litigation. 

We leave it up to Truth on the Market readers to judge which side of this debate is correct.

[This guest post is authored by Mark A. Lemley, Professor of Law and Director of the Program in Law, Science & Technology at Stanford Law School; A. Douglas Melamed, Professor of the Practice of Law at Stanford Law School and former Senior Vice President and General Counsel of Intel from 2009 to 2014; and Steven Salop, Professor of Economics and Law at Georgetown Law School. It is part of an ongoing debate between the authors, on one side, and Geoffrey Manne and Dirk Auer, on the other, and has been integrated into our ongoing series on the FTC v. Qualcomm case, where all of the posts in this exchange are collected.]

In their original post, Manne and Auer argued that the antitrust argument against Qualcomm’s no license/no chips policy was based on bad economics and bad law. They now seem to have abandoned that argument and claim instead – contrary to the extensive factual findings of the district court – that, while Qualcomm threatened to cut off chips, it was a paper tiger that OEMs could, and knew they could, ignore. The implication is that the Ninth Circuit should affirm the district court on the no license/no chips issue unless it sets aside the court’s fact findings. That seems like agreement with the position of our amicus brief.

We will not in this post review the huge factual record. We do note, however, that Manne and Auer cite in support of their factual argument only that 3 industry giants brought and then settled litigation against Qualcomm. But all 3 brought antitrust litigation; their doing so hardly proves that contract litigation or what Manne and Auer call “holdout” were viable options for anyone, much less for smaller OEMs. The fact that Qualcomm found it necessary to actually cut off only one OEM – and that it took the OEM only 7 days to capitulate – certainly does not prove that Qualcomm’s threats lacked credibility. Notably, Manne and Auer do not claim that any OEMs bought chips from competitors of Qualcomm (although Apple bought some chips from Intel for a short while). No license/no chips appears to have been a successful, coercive policy, not an easily ignored threat.

Last week, we posted a piece on TOTM, criticizing the amicus brief written by Mark Lemley, Douglas Melamed and Steven Salop in the ongoing Qualcomm litigation. The authors prepared a thoughtful response to our piece, which we published today on TOTM. 

In this post, we highlight the points where we agree with the amici (or at least we think so), as well as those where we differ.

Negotiating in the shadow of FRAND litigation

Let us imagine a hypothetical world, where an OEM must source one chipset from Qualcomm (i.e. this segment of the market is non-contestable) and one chipset from either Qualcomm or its rivals (i.e. this segment is contestable). For both of these chipsets, the OEM must also reach a license agreement with Qualcomm.

We use the same numbers as the amici: 

  • The OEM has a reserve price of $20 for each chip/license combination. 
  • Rivals can produce chips at a cost of $11. 
  • The hypothetical FRAND benchmark is $2 per chip. 

With these numbers in mind, the critical question is whether there is a realistic threat of litigation to constrain the royalties commanded by Qualcomm (we believe that Lemley et al. agree with us on this point). The following table shows the prices that a hypothetical OEM would be willing to pay in both of these scenarios:

Segment                        With litigation threat         Without litigation threat
Non-contestable chip + IP      $20                            $20
Contestable chip + IP          $13 ($11 chip + $2 IP)         $20

The bottom-right cell (contestable chips, threat removed) is the segment where QC can increase its profits if the threat of litigation is removed.

When the threat of litigation is present, Qualcomm obtains a total of $20 for the combination of non-contestable chips and IP. Qualcomm can use its chipset position to evade FRAND and charge the combined monopoly price of $20. At a chipset cost of $11, it would thus make $9 worth of profits. However, it earns only $13 for contestable chips ($2 in profits). This is because competition brings the price of chips down to $11 and Qualcomm does not have a chipset advantage to earn more than the FRAND rate for its IP.

When the threat of litigation is taken off the table, all chipsets effectively become non-contestable. Qualcomm still earns $20 for its previously non-contestable chips. But it can now raise its IP rate above the FRAND benchmark in the previously contestable segment (for example, by charging $10 for the IP). This squeezes its chipset competitors.
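To make this arithmetic concrete, the following minimal Python sketch reproduces the two scenarios using the stylized numbers above (the function and variable names are illustrative, not drawn from the case record):

```python
# Stylized numbers from the example above (all amounts in dollars).
RESERVE = 20      # OEM reserve price for each chip + license combination
RIVAL_COST = 11   # rivals' chip production cost (Qualcomm's chipset cost is the same $11)
FRAND = 2         # hypothetical FRAND benchmark royalty

def qualcomm_take(contestable: bool, litigation_threat: bool):
    """Total price and Qualcomm profit for one chip/license combination."""
    if contestable and litigation_threat:
        # Chip competition drives the chip price down to cost, and the
        # credible threat of FRAND litigation caps the royalty at $2.
        total = RIVAL_COST + FRAND   # $13
    else:
        # Non-contestable segment, or no litigation threat: Qualcomm can
        # extract the OEM's full reserve price for the bundle.
        total = RESERVE              # $20
    return total, total - RIVAL_COST

for contestable in (False, True):
    for threat in (True, False):
        total, profit = qualcomm_take(contestable, threat)
        print(f"contestable={contestable}, litigation threat={threat}: "
              f"total=${total}, Qualcomm profit=${profit}")

# With the litigation threat, the OEM pays $20 + $13 = $33 for the two
# combinations; with the threat removed, the full monopoly price of $40.
```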

If our understanding of the amici’s response is correct, they argue that the combination of Qualcomm’s strong chipset position and its “No License, No Chips” policy (“NLNC”) effectively nullifies the threat of litigation:

Qualcomm is able to charge more than $2 for the license only because it uses the power of its chip monopoly to coerce the OEMs to give up the option of negotiating in light of the otherwise applicable constraints on the royalties it can charge. 

According to the amici, the market thus moves from a state of imperfect competition (where OEMs would pay $33 for two chips and QC’s license) to a world of monopoly (where they pay the full $40).

We beg to differ. 

Our points of disagreement

From an economic standpoint, the critical question is the extent to which Qualcomm’s chipset position and its NLNC policy deter OEMs from obtaining closer-to-FRAND rates.

While the case record is mixed and contains some ambiguities, we think it strongly suggests that Qualcomm’s chipset position and its NLNC policy do not preclude OEMs from using litigation to obtain rates that are close to the FRAND benchmark. There is thus no reason to believe that it can exclude its chipset rivals.

We believe the following facts support our assertion:

  • OEMs have pursued various litigation strategies in order to obtain lower rates on Qualcomm’s IP. As we mentioned in our previous post, this was notably the case for Apple, Samsung and LG. All three companies ultimately reached settlements with Qualcomm (and these settlements were concluded in the shadow of litigation proceedings — indeed, in Apple’s case, on the second day of trial). If anything, this suggests that court proceedings are an integral part of negotiations between Qualcomm and its OEMs.
  • For the most part, Qualcomm’s threats to cut off chip supplies were just that: threats. In any negotiation, parties will try to convince their counterpart that they have a strong outside option. Qualcomm may have done so by posturing that it would not sell chips to OEMs before they concluded a license agreement. 

    However, it seems that Qualcomm followed through on its threat to withhold chips only once (against Sony). And even then, the supply cutoff lasted only seven days.

    And while many OEMs did take Qualcomm to court in order to obtain more favorable license terms, this never resulted in Qualcomm cutting off their chipset supplies. Other OEMs thus had no reason to believe that litigation would entail disruptions to their chipset supplies.
  • OEMs also wield powerful threats. These include patent holdout, litigation, vertical integration, and purchasing chips from Qualcomm’s rivals. And, of course, they have aggressively encouraged antitrust authorities around the world to bring this and other litigation — even quite possibly manipulating the record to bolster their cases. Here’s how one observer sums up Apple’s activity in this regard:

    “Although we really only managed to get a small glimpse of Qualcomm’s evidence demonstrating the extent of Apple’s coordinated strategy to manipulate the FRAND license rate, that glimpse was particularly enlightening. It demonstrated a decade-long coordinated effort within Apple to systematically engage in what can only fairly be described as manipulation (if not creation of evidence) and classic holdout.

    Qualcomm showed during opening arguments that, dating back to at least 2009, Apple had been laying the foundation for challenging its longstanding relationship with Qualcomm.” (Emphasis added)

    Moreover, the holdout and litigation paths have been strengthened by the eBay case, which significantly reduced the financial risks involved in pursuing a holdout and/or litigation strategy. Given all of this, it is far from obvious that it is Qualcomm who enjoys the stronger bargaining position here.
  • Qualcomm’s chipsets might no longer be “must-buys” in the future. Rivals have gained increasing traction over the past couple of years. And with 5G just around the corner, this momentum could conceivably accelerate. Whether or not one believes that this will ultimately be the case, the trend surely places additional constraints on Qualcomm’s conduct. Aggressive behavior today may spur rivals to enter the chipset market, or disgruntled OEMs to switch suppliers, tomorrow.

To summarize, as we understand their response, the delta between supracompetitive and competitive prices is entirely a function of Qualcomm’s ability to charge supra-FRAND prices for its licenses. On this we agree. But, unlike Lemley et al., we do not agree that Qualcomm is in a position to evade its FRAND pledges by using its strong position in the chipset market and its NLNC policy.

Finally, it must be said again: To the extent that that is the problem — the charging of supra-FRAND prices for licenses — the issue is manifestly a contract issue, not an antitrust one. All of the complexity of the case would fall away, and the litigation would be straightforward. But the opponents of Qualcomm’s practices do not really want to ensure that Qualcomm lowers its royalties by this delta; if they did, they would be bringing/supporting FRAND litigation. What the amici and Qualcomm’s contracting partners appear to want is to use antitrust litigation to force Qualcomm to license its technology at even lower rates — to force Qualcomm into a different business model in order to reset the baseline from which FRAND prices are determined (i.e., at the chip level, rather than at the device level). That may be an intelligible business strategy from the perspective of Qualcomm’s competitors, but it certainly isn’t sensible antitrust policy.

[This guest post is authored by Mark A. Lemley, Professor of Law and Director of the Program in Law, Science & Technology at Stanford Law School; A. Douglas Melamed, Professor of the Practice of Law at Stanford Law School and former Senior Vice President and General Counsel of Intel from 2009 to 2014; and Steven Salop, Professor of Economics and Law at Georgetown Law School. It is a response to the post, “Exclusionary Pricing Without the Exclusion: Unpacking Qualcomm’s No License, No Chips Policy,” by Geoffrey Manne and Dirk Auer, which is itself a response to Lemley, Melamed, and Salop’s amicus brief in FTC v. Qualcomm.]

Geoffrey Manne and Dirk Auer’s defense of Qualcomm’s no license/no chips policy is based on a fundamental misunderstanding of how that policy harms competition. The harm is straightforward in light of facts proven at trial. In a nutshell, OEMs must buy some chips from Qualcomm or else exit the handset business, even if they would also like to buy additional chips from other suppliers. OEMs must also buy a license to Qualcomm’s standard essential patents, whether they use Qualcomm’s chips or other chips implementing the same industry standards. There is a monopoly price for the package of Qualcomm’s chips plus patent license. Assume that the monopoly price is $20. Assume further that, if Qualcomm’s patents were licensed in a standalone transaction, as they would be if they were owned by a firm that did not also make chips, the market price for the patent license would be $2. In that event, the monopoly price for the chip would be $18, and a chip competitor could undersell Qualcomm whenever it could profitably sell chips for less than that. If the competitor’s cost of producing and selling chips was $11, for example, it could easily undersell Qualcomm and force Qualcomm to lower its chip prices below $18, thereby reducing the price for the package to a level below $20.

However, the no license/no chips policy enables Qualcomm to allocate the package price of $20 any way it wishes. Because the OEMs must buy some chips from Qualcomm, Qualcomm is able to coerce the OEMs to accept any such allocation by threatening not to sell them chips if they do not agree to a license at the specified terms. The prices could thus be $18 and $2; or, for example, they could be $10 for the chips and $10 for the license. If Qualcomm sets the license price at $10 and a chip price of $10, it would continue to realize the monopoly package price of $20. But in that case, a competitor could profitably undersell Qualcomm only if its chip cost were less than $10. A competitor with a cost of $11 would then not be able to successfully enter the market, and Qualcomm would not need to lower its chip prices. That is how the no license/no chips policy blocks entry of chip competitors and maintains Qualcomm’s chip monopoly. 
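This allocation logic can be illustrated with a short sketch (in Python; the names and structure are illustrative, not drawn from the brief):

```python
# The amici's example: a fixed $20 monopoly price for the chip + license
# package, which Qualcomm can allocate between the two components at will.
# A rival sells only chips, so an OEM using a rival chip still pays
# Qualcomm's license price on top of the rival's chip price.
PACKAGE_PRICE = 20

def rival_can_undersell(license_price: float, rival_chip_cost: float) -> bool:
    # To beat the package, the rival's chip must come in below what
    # Qualcomm implicitly charges for its own chip: $20 minus the license.
    implied_chip_price = PACKAGE_PRICE - license_price
    return rival_chip_cost < implied_chip_price

for license_price in (2, 10):
    feasible = rival_can_undersell(license_price, rival_chip_cost=11)
    print(f"license at ${license_price}: implied chip price "
          f"${PACKAGE_PRICE - license_price}, $11-cost rival can "
          f"undersell -> {feasible}")

# license at $2:  implied chip price $18 -> the $11-cost rival can enter
# license at $10: implied chip price $10 -> the same rival is foreclosed
```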

Manne and Auer’s defense of the no license/no chips policy is deeply flawed. In the first place, Manne and Auer mischaracterize the problem as one in which “Qualcomm undercuts [chipset rivals] on chip prices and recoups its losses by charging supracompetitive royalty rates on its IP.” On the basis of this description of the issue, they argue that, if Qualcomm cannot charge more than $2 for the license, it cannot use license revenues to offset the chip price reduction. And if Qualcomm can charge more than $2 for the license, it does not need a chip monopoly in order to make supracompetitive licensing profits. This argument is wrong both factually and conceptually.  

As a factual matter, there are constraints on Qualcomm’s ability to charge more than $2 for the license if the license is sold by itself. If sold by itself, the license would be negotiated in the shadow of infringement litigation and the royalty would be constrained by the value of the technology claimed by the patent, the risk that the patent would be found to be invalid or not infringed, the “reasonable royalty” contemplated by the patent laws, and the contractual commitment to license on FRAND terms. But Qualcomm is able to circumvent those constraints by coercing OEMs to pay a higher price or else lose access to essential Qualcomm chips. In other words, Qualcomm’s ability to charge more than $2 for the license is not exogenous. Qualcomm is able to charge more than $2 for the license only because it uses the power of its chip monopoly to coerce the OEMs to give up the option of negotiating in light of the otherwise applicable constraints on the royalties it can charge. It is a simple story of bundling with simultaneous recoupment.  

As a conceptual matter, Manne and Auer seem to think that the concern with the no license/no chips policy is that it enables inflated patent royalties to subsidize a profit sacrifice in chip sales, as if the issue were predatory pricing in chips.  But there is no such sacrifice. Money is fungible, and Manne and Auer have it backwards. The problem is that the no license/no chips policy enables Qualcomm to make purely nominal changes by allocating some of its monopoly chip price to the license price. Qualcomm offsets that nominal license price increase when the OEM buys chips from it by lowering the chip price by that amount in order to maintain the package price at the monopoly price.  There is no profit sacrifice for Qualcomm because the lower chip price simply offsets the higher license price. Qualcomm offers no such offset when the OEM buys chips from other suppliers. To the contrary, by using its chip monopoly to increase the license price, it increases the cost to OEMs of using competitors’ chips and is thus able to perpetuate its chip monopoly and maintain its monopoly chip prices and profits. Absent this policy, OEMs would buy more chips from third parties; Qualcomm’s prices and profits would fall; and consumers would benefit.

At the end of the day, Manne and Auer rely on the old “single monopoly profit” or “double counting” idea that a monopolist cannot both charge a monopoly price and extract additional consideration as well. But, again, they have it backwards. Manne and Auer describe the issue as whether Qualcomm can leverage its patent position in the technology markets to increase its market power in chips. But that is not the issue. Qualcomm is not trying to increase profits by leveraging monopoly power from one market into a different market in order to gain additional monopoly profits in the second market. Instead, it is using its existing monopoly power in chips to maintain that monopoly power in the first place. Assuming Qualcomm has a chip monopoly, it is true that it earns the same revenue from OEMs regardless of how it allocates the all-in price of $20 to its chips versus its patents. But by allocating more of the all-in price to the patents (i.e., in our example, $10 instead of $2), Qualcomm is able to maintain its monopoly by preventing rival chipmakers from undercutting the $20 monopoly price of the package. That is how competition and consumers are harmed.

This is the second in a series of TOTM blog posts discussing the European Commission’s recently published Google Android decision (the first post can be found here). It draws on research from a soon-to-be published ICLE white paper.

(Image: left, the Android 10 website; right, the iOS 13 website)

In a previous post, I argued that the Commission failed to adequately define the relevant market in its recently published Google Android decision.

This improper market definition might not be so problematic if the Commission had then proceeded to undertake a detailed (and balanced) assessment of the competitive conditions that existed in the markets where Google operates (including the competitive constraints imposed by Apple). 

Unfortunately, this was not the case. The following paragraphs respond to some of the Commission’s most problematic arguments regarding the existence of barriers to entry, and the absence of competitive constraints on Google’s behavior.

The overarching theme is that the Commission failed to quantify its findings and repeatedly drew conclusions that did not follow from the facts cited. As a result, it was wrong to conclude that Google faced little competitive pressure from Apple and other rivals.

1. Significant investments and network effects ≠ barriers to entry

In its decision, the Commission notably argued that significant investments (millions of euros) are required to set up a mobile OS and app store. It also argued that the market for licensable mobile operating systems gave rise to network effects. 

But contrary to the Commission’s claims, neither of these two factors is, in and of itself, sufficient to establish the existence of barriers to entry (even under EU competition law’s loose definition of the term, rather than Stigler’s more technical definition).

Take the argument that significant investments are required to enter the mobile OS market.

The main problem is that virtually every market requires significant investments on the part of firms that seek to enter. Not all of these costs can be seen as barriers to entry, or the concept would lose all practical relevance. 

For example, purchasing a Boeing 737 Max airplane reportedly costs at least $74 million. Does this mean that incumbents in the airline industry are necessarily shielded from competition? Of course not. 

Instead, the relevant question is whether an entrant with a superior business model could access the capital required to purchase an airplane and challenge the industry’s incumbents.

Returning to the market for mobile OSs, the Commission should thus have questioned whether as-efficient rivals could find the funds required to produce a mobile OS. If the answer was yes, then the investments highlighted by the Commission were largely immaterial. As it happens, several firms have indeed produced competing OSs, including CyanogenMod, LineageOS and Tizen.

The same is true of the Commission’s conclusion that network effects shielded Google from competitors. While network effects almost certainly play some role in the mobile OS and app store markets, it does not follow that they act as barriers to entry in competition law terms. 

As Paul Belleflamme recently argued, it is a myth that network effects can never be overcome. And as I have written elsewhere, the most important question is whether users could effectively coordinate their behavior and switch towards a superior platform, if one arose (See also Dan Spulber’s excellent article on this point).

The Commission completely ignored this critical question in its discussion of network effects.

2. The failure of competitors is not proof of barriers to entry

Just as problematically, the Commission wrongly concluded that the failure of previous attempts to enter the market was proof of barriers to entry. 

This is the epitome of the Black Swan fallacy (i.e. inferring that all swans are white because you have never seen a relatively rare, but not irrelevant, black swan).

The failure of rivals is equally consistent with any number of propositions: 

  • There were indeed barriers to entry; 
  • Google’s products were extremely good (in ways that rivals and the Commission failed to grasp); 
  • Google responded to intense competitive pressure by continuously improving its product (and rivals thus chose to stay out of the market); 
  • Previous rivals were persistently inept (to take the words of Oliver Williamson); etc. 

The Commission did not demonstrate that its own inference was the right one, nor did it even demonstrate any awareness that other explanations were at least equally plausible.

3. First mover advantage?

Much the same can be said about the Commission’s observation that Google enjoyed a first mover advantage.

The elephant in the room is that Google was not the first mover in the smartphone market (and even less so in the mobile phone industry). The Commission attempted to sidestep this uncomfortable truth by arguing that Google was the first mover in the Android app store market. It then concluded that Google had an advantage because users were familiar with Android’s app store.

To call this reasoning “naive” would be too kind. Maybe consumers are familiar with Google’s products today, but they certainly weren’t when Google entered the market. 

Why would something that did not hinder Google (i.e. users’ lack of familiarity with its products, as opposed to those of incumbents such as Nokia or Blackberry) have the opposite effect on its future rivals? 

Moreover, even if rivals had to replicate Android’s user experience (and that of its app store) to prove successful, the Commission did not show that there was anything that prevented them from doing so — a particularly glaring omission given the open-source nature of the Android OS.

The result is that, at best, the Commission identified a correlation, not causation. Google may arguably have been the first, and users might have been more familiar with its offerings, but this still does not prove that Android flourished (and rivals failed) because of this.

4. It does not matter that users “do not take the OS into account” when they purchase a device

The Commission also concluded that alternatives to Android (notably Apple’s iOS and App Store) exercised insufficient competitive constraints on Google. Among other things, it argued that this was because users do not take the OS into account when they purchase a smartphone (so Google could allegedly degrade Android without fear of losing users to Apple).

In doing so, the Commission failed to grasp that buyers might base their purchases on a device’s OS without knowing it.

Some consumers will simply follow the advice of a friend, family member or buyer’s guide. Acutely aware of their own shortcomings, they thus rely on someone else who does take the phone’s OS into account. 

But even when they are acting independently, unsavvy consumers may still be driven by technical considerations. They might rely on a brand’s reputation for providing cutting edge devices (which, per the Commission, is the most important driver of purchase decisions), or on a device’s “feel” when they try it in a showroom. In both cases, consumers’ choices could indirectly be influenced by a phone’s OS.

In more technical terms, a phone’s hardware and software are complementary goods. In these settings, it is extremely difficult to attribute overall improvements to just one of the two complements. For instance, a powerful OS and chipset are both equally necessary to deliver a responsive phone. The fact that consumers may misattribute a device’s performance to one of these two complements says nothing about their underlying contribution to a strong end-product (which, in turn, drives purchase decisions). Likewise, battery life is reportedly one of the most important features for users, yet few realize that a phone’s OS has a large impact on it.

Finally, if consumers were really indifferent to the phone’s operating system, then the Commission should have dropped at least part of its case against Google. The Commission’s claim that Google’s anti-fragmentation agreements harmed consumers (by reducing OS competition) has no purchase if Android is provided free of charge and consumers are indifferent to non-price parameters, such as the quality of a phone’s OS. 

5. Google’s users were not “captured”

Finally, the Commission claimed that consumers are loyal to their smartphone brand and that competition for first time buyers was insufficient to constrain Google’s behavior against its “captured” installed base.

It notably found that 82% of Android users stick with Android when they change phones (compared to 78% for Apple), and that 75% of new smartphones are sold to existing users. 

The Commission asserted, without further evidence, that these numbers proved there was little competition between Android and iOS.

But is this really so? In almost all markets, consumers likely exhibit at least some loyalty to their preferred brand. At what point does this become an obstacle to interbrand competition? The Commission offered no benchmark against which to assess its claims.

And although inter-industry comparisons of churn rates should be taken with a pinch of salt, it is worth noting that the Commission’s implied 18% churn rate for Android is nothing out of the ordinary (see, e.g., here, here, and here), including for industries that could not remotely be called anticompetitive.

To make matters worse, the Commission’s own figures suggest that a large share of sales remained contestable (roughly 39%).

Imagine that, every year, 100 devices are sold in Europe (75 to existing users and 25 to new users, according to the Commission’s figures). Imagine further that the installed base of users is split 76–24 in favor of Android. Under the figures cited by the Commission, it follows that at least 39% of these sales are contestable.

According to the Commission’s figures, there would be 57 existing Android users (76% of 75) and 18 Apple users (24% of 75), of which roughly 10 (18%) and 4 (22%), respectively, switch brands in any given year. There would also be 25 new users who, even according to the Commission, do not display brand loyalty. The result is that out of 100 purchasers, 25 show no brand loyalty and 14 switch brands. And even this completely ignores the number of consumers who consider switching but choose not to after assessing the competitive options.
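This back-of-the-envelope computation is easy to check. The short Python sketch below reproduces it using only the figures cited above (the variable names are illustrative):

```python
# Figures cited by the Commission (as reported above).
TOTAL_SALES = 100        # stylized annual device sales
NEW_SHARE = 0.25         # 25 of 100 devices go to first-time buyers
ANDROID_BASE = 0.76      # installed-base split: 76-24 in favor of Android
ANDROID_LOYALTY = 0.82   # Android users who stay with Android
APPLE_LOYALTY = 0.78     # iPhone users who stay with Apple

repeat_buyers = TOTAL_SALES * (1 - NEW_SHARE)            # 75 repeat buyers
android_repeat = repeat_buyers * ANDROID_BASE            # 57 Android users
apple_repeat = repeat_buyers * (1 - ANDROID_BASE)        # 18 Apple users

switchers = (android_repeat * (1 - ANDROID_LOYALTY)      # ~10 leave Android
             + apple_repeat * (1 - APPLE_LOYALTY))       # ~4 leave Apple
new_buyers = TOTAL_SALES * NEW_SHARE                     # 25, no loyalty yet

contestable = switchers + new_buyers
print(f"contestable sales: {contestable:.0f} of {TOTAL_SALES}"
      f" ({contestable / TOTAL_SALES:.0%})")             # roughly 39%
```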

Conclusion

In short, the preceding paragraphs argue that the Commission did not meet the requisite burden of proof to establish Google’s dominance. Of course, it is one thing to show that the Commission’s reasoning was unsound (it is) and another to establish that its overall conclusion was wrong.

At the very least, I hope these paragraphs will convey a sense that the Commission loaded the dice, so to speak. Throughout the first half of its lengthy decision, it interpreted every piece of evidence against Google, drew significant inferences from benign pieces of information, and often resorted to circular reasoning.

The following post in this blog series argues that these errors also permeate the Commission’s analysis of Google’s allegedly anticompetitive behavior.

Wall Street Journal commentator Greg Ip reviews Thomas Philippon’s forthcoming book, The Great Reversal: How America Gave Up On Free Markets. Ip describes a “growing mountain” of research on industry concentration in the U.S. and reports that Philippon concludes competition has declined over time, harming U.S. consumers.

In one example, Philippon points to air travel. He notes that concentration in the U.S. has increased rapidly—spiking since the Great Recession—while concentration in the EU has increased modestly. At the same time, Ip reports “U.S. airlines are now far more profitable than their European counterparts.” (Although it’s debatable whether a five percentage point difference in net profit margin is “far more profitable”). 

On first impression, the figures fit nicely with the populist antitrust narrative: As concentration in the U.S. grew, so did profit margins. Closer inspection raises some questions, however. 

For example, the U.S. airline industry had a negative net profit margin in each of the years prior to the spike in concentration. While negative profits may be good for consumers, it would be a stretch to argue that long-run losses are good for competition as a whole. At some point one or more of the money losing firms is going to pull the ripcord. Which raises the issue of causation.

Just looking at the figures from the WSJ article, one could argue that rather than concentration driving profit margins, instead profit margins are driving concentration. Indeed, textbook IO economics would indicate that in the face of losses, firms will exit until economic profit equals zero. Paraphrasing Alfred Marshall, “Which blade of the scissors is doing the cutting?”

While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to Philippon’s conclusion. For example, airline prices, as measured by price indexes, show that U.S. and EU airline prices tracked each other fairly closely until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration.

Regressing the U.S. airfare price index against Philippon’s concentration data in the figure above (and controlling for general inflation) finds that if U.S. concentration in 2015 had been the same as in 1995, U.S. airfares would be about 2.8% lower. That a 1,250-point increase in HHI is associated with only a 2.8% increase in prices suggests that the rising concentration among U.S. airlines has produced no economically significant increase in consumer prices.
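For readers who want to replicate this kind of exercise, the sketch below shows the general form of such a regression using statsmodels. The series here are hypothetical placeholders, not the actual WSJ or Philippon data, so the output is illustrative only:

```python
# A sketch of the regression described above: airfare price index on
# industry concentration (HHI), controlling for general inflation.
import numpy as np
import statsmodels.api as sm

years = np.arange(1995, 2016)
hhi = np.linspace(1000, 2250, len(years))       # placeholder HHI path
cpi = 100 * 1.02 ** (years - 1995)              # placeholder inflation index
fare = 100 + 0.002 * (hhi - 1000) + 0.9 * (cpi - 100)  # placeholder fares

# Regress the fare index on HHI and CPI (with a constant).
X = sm.add_constant(np.column_stack([hhi, cpi]))
fit = sm.OLS(fare, X).fit()

# The HHI coefficient times the observed change in HHI gives the implied
# effect of rising concentration on the fare index.
delta_hhi = hhi[-1] - hhi[0]                    # ~1,250 points, as in the post
print(f"implied fare-index change: {fit.params[1] * delta_hhi:.1f} points")
```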

Also, if consumers are truly worse off, one would expect to see a drop off or slow down in the use of air travel. An eyeballing of passenger data does not fit the populist narrative. Instead, we see airlines are carrying more passengers and consumers are paying lower prices on average.

While it’s true that low-cost airlines have shaken up air travel in the EU, the differences are not solely explained by differences in market concentration. For example, U.S. regulations prohibit foreign airlines from operating domestic flights while EU carriers compete against operators from other parts of Europe. While the WSJ’s figures tell an interesting story of concentration, prices, and profits, they do not provide a compelling case of anticompetitive conduct.