Archives for Vertical Integration

Responding to a new draft policy statement from the U.S. Patent & Trademark Office (USPTO), the National Institute of Standards and Technology (NIST), and the U.S. Department of Justice, Antitrust Division (DOJ) regarding remedies for infringement of standard-essential patents (SEPs), a group of 19 distinguished law, economics, and business scholars convened by the International Center for Law & Economics (ICLE) submitted comments arguing that the guidance would improperly tilt the balance of power between implementers and inventors, and could undermine incentives for innovation.

As explained in the scholars’ comments, the draft policy statement misunderstands many aspects of patent and antitrust policy. The draft notably underestimates the value of injunctions and the circumstances in which they are a necessary remedy. It also overlooks important features of the standardization process that make opportunistic behavior much less likely than policymakers typically recognize. These points are discussed in even more detail in previous work by ICLE scholars, including here and here.

These first-order considerations are only the tip of the iceberg, however. Patent policy has a huge range of second-order effects that the draft policy statement and policymakers more generally tend to overlook. Indeed, reducing patent protection has more detrimental effects on economic welfare than the conventional wisdom typically assumes. 

The comments highlight three important areas affected by SEP policy that would be undermined by the draft statement. 

  1. SEPs are established through an industry-wide, collaborative process that develops and protects innovations considered essential to an industry’s core functioning. This process enables firms to specialize in various functions throughout an industry, rather than vertically integrate to ensure compatibility. 
  2. Strong patent protection, especially of SEPs, boosts startup creation via a broader set of mechanisms than is typically recognized. 
  3. Strong SEP protection is essential to safeguard U.S. technology leadership and sovereignty. 

As explained in the scholars’ comments, the draft policy statement would be detrimental on all three of these dimensions. 

To be clear, the comments do not argue that addressing these secondary effects should be a central focus of patent and antitrust policy. Instead, the point is that policymakers must deal with a far more complex set of issues than is commonly recognized; the effects of SEP policy aren’t limited to the allocation of rents among inventors and implementers (as they are sometimes framed in policy debates). Accordingly, policymakers should proceed with caution and resist the temptation to alter by fiat terms that have emerged through careful negotiation among inventors and implementers, and which have been governed for centuries by the common law of contract. 

Collaborative Standard-Setting and Specialization as Substitutes for Proprietary Standards and Vertical Integration

Intellectual property in general—and patents, more specifically—is often described as a means to increase the monetary returns from the creation and distribution of innovations. While this is undeniably the case, this framing overlooks the essential role that IP also plays in promoting specialization throughout the economy.

As Ronald Coase famously showed in his Nobel-winning work, firms must constantly decide whether to perform functions in-house (by vertically integrating) or contract them out to third parties (via the market mechanism). Coase concluded that these decisions hinge on whether the transaction costs associated with the market mechanism outweigh the cost of organizing production internally. Decades later, Oliver Williamson refined this insight, finding that among the most important transaction costs firms encounter are those stemming from incomplete contracts and the scope for opportunistic behavior they entail.

This leads to a simple rule of thumb: as the scope for opportunistic behavior increases, firms are less likely to use the market mechanism and will instead perform tasks in-house, leading to increased vertical integration.
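A minimal formalization of this rule of thumb (my notation, not Coase’s or Williamson’s): a firm buys an input on the market rather than integrating only when

\[
C_{\text{internal}} \;>\; C_{\text{market}},
\qquad
C_{\text{market}} \;=\; C_{\text{search}} + C_{\text{contracting}} + C_{\text{opportunism}} .
\]

Anything that inflates the opportunism term (such as fear that a trading partner will appropriate an invention) tips the inequality toward integration; enforceable patents shrink that term and tip it back toward the market.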

IP plays a key role in this process. Patents drastically reduce the transaction costs associated with the transfer of knowledge. This gives firms the opportunity to develop innovations collaboratively and without fear that trading partners might opportunistically appropriate their inventions. In turn, this leads to increased specialization. As Robert Merges observes:

Patents facilitate arms-length trade of a technology-intensive input, leading to entry and specialization.

More specifically, it is worth noting that the development and commercialization of inventions can lead to two important sources of opportunistic behavior: patent holdup and patent holdout. As the assembled scholars explain in their comments, while patent holdup has drawn the lion’s share of policymaker attention, empirical and anecdotal evidence suggest that holdout is the more salient problem.

Policies that reduce these costs—especially patent holdout—in a cost-effective manner are worthwhile, with the immediate result that technologies are more widely distributed than would otherwise be the case. Inventors also see more intense and extensive incentives to produce those technologies in the first place.

The Importance of Intellectual Property Rights for Startup Activity

Strong patent rights are essential to monetize innovation and thus to enable firms to gain a foothold in the marketplace. As the scholars’ comments explain, this is especially true for startup companies. There are three main reasons for this: 

  1. Patent rights protected by injunctions prevent established companies from simply copying innovative startups in the expectation that they will be able to afford court-set royalties; 
  2. Patent rights can be the basis for securitization, facilitating access to startup funding; and
  3. Patent rights drive venture capital (VC) investment.

While point (1) is widely acknowledged, many fail to recognize that it is particularly important for startup companies. There is abundant literature on firms’ appropriability mechanisms (essentially, the strategies firms employ to prevent rivals from copying their inventions). The literature tells us that patent protection is far from the only strategy firms use to protect their inventions (see, e.g., here, here, and here). 

The alternative appropriability mechanisms identified by these studies tend to be easier to implement for well-established firms. For instance, many firms earn returns on their inventions by incorporating them into physical products that cannot be reverse engineered. This is much easier for firms that already have a large industry presence and advanced manufacturing capabilities. In contrast, startup companies—almost by definition—must outsource production.

Second, intellectual-property rights can drive startup activity through the collateralization of IP. By offering security interests in patents, trademarks, and copyrights, startups with few or no tangible assets can obtain funding without surrendering significant equity. As Gaétan de Rassenfosse puts it:

SMEs can leverage their IP to facilitate R&D financing…. [P]atents materialize the value of knowledge stock: they codify the knowledge and make it tradable, such that they can be used as collaterals. Recent theoretical evidence by Amable et al. (2010) suggests that a systematic use of patents as collateral would allow a high growth rate of innovations despite financial constraints.

Finally, there is reason to believe intellectual-property protection is an important driver of venture capital activity. Beyond simply enabling firms to earn returns on their investments, patents might signal to potential investors that a company is successful and/or valuable. Empirical research by Hsu and Ziedonis, for instance, supports this hypothesis:

[W]e find a statistically significant and economically large effect of patent filings on investor estimates of start-up value…. A doubling in the patent application stock of a new venture [in] this sector is associated with a 28 percent increase in valuation, representing an upward funding-round adjustment of approximately $16.8 million for the average start-up in our sample.
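A back-of-the-envelope implication of those figures (my arithmetic, not the authors’): if a 28 percent upward funding-round adjustment corresponds to roughly $16.8 million, the implied average baseline valuation in the sample is on the order of

\[
\frac{\$16.8\ \text{million}}{0.28} \;=\; \$60\ \text{million}.
\]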

In short, intellectual property can stimulate startup activity through various mechanisms. It thus stands to reason that, at the margin, weakening patent protection will make it harder for entrepreneurs to embark on new business ventures.

The Role of Strong SEP Rights in Guarding Against China’s ‘Cyber Great Power’ Ambitions 

The United States, due in large measure to its strong intellectual-property protections, is a nation of innovators, and its production of IP is one of its most important comparative advantages. 

IP and its legal protections become even more important, however, when dealing with international jurisdictions, like China, that don’t offer similar levels of legal protection. When patent holders find it harder to obtain injunctions, licensees and implementers gain the advantage in the short term, because they are able to use patented technology without having to engage in negotiations to pay the full market price. 

In the case of many SEPs—particularly those in the telecommunications sector—a great many patent holders are U.S.-based, while the lion’s share of implementers are Chinese. The anti-injunction policy espoused in the draft policy statement thus amounts to a subsidy to Chinese infringers of U.S. technology.

At the same time, China routinely undermines U.S. intellectual-property protections through its industrial policy. The government’s stated goal is to promote “fair and reasonable” international rules, but it is clear that China extends its power over intellectual property around the world by granting “anti-suit injunctions” on behalf of Chinese smartphone makers, designed to curtail enforcement of foreign companies’ patent rights.

This is part of the Chinese government’s larger approach to industrial policy, which seeks to expand Chinese power in international trade negotiations and in global standards bodies. As one Chinese Communist Party official put it:

Standards are the commanding heights, the right to speak, and the right to control. Therefore, the one who obtains the standards gains the world.

Insufficient protections for intellectual property will hasten China’s objective of dominating collaborative standard development in the medium to long term. Simultaneously, this will engender a switch to greater reliance on proprietary, closed standards rather than collaborative, open standards. These harmful consequences are magnified in the context of the global technology landscape, and in light of China’s strategic effort to shape international technology standards. Chinese companies, directed by their government authorities, will gain significant control of the technologies that will underpin tomorrow’s digital goods and services.

The scholars convened by ICLE were not alone in voicing these fears. David Teece (also a signatory to the ICLE-convened comments), for example, argues in his own comments that: 

The US government, in reviewing competition policy issues that might impact standards, therefore needs to be aware that the issues at hand have tremendous geopolitical consequences and cannot be looked at in isolation…. Success in this regard will promote competition and is our best chance to maintain technological leadership—and, along with it, long-term economic growth and consumer welfare and national security.

Similarly, comments from the Center for Strategic and International Studies (signed by, among others, former USPTO Director Andrei Iancu, former NIST Director Walter Copan, and former Deputy Secretary of Defense John Hamre) argue that the draft policy statement would benefit Chinese firms at U.S. firms’ expense:

What is more, the largest short-term and long-term beneficiaries of the 2021 Draft Policy Statement are firms based in China. Currently, China is the world’s largest consumer of SEP-based technology, so weakening protection of American owned patents directly benefits Chinese manufacturers. The unintended effect of the 2021 Draft Policy Statement will be to support Chinese efforts to dominate critical technology standards and other advanced technologies, such as 5G. Put simply, devaluing U.S. patents is akin to a subsidized tech transfer to China.

With Chinese authorities joining standardization bodies and increasingly claiming jurisdiction over F/RAND disputes, there should be careful reevaluation of the ways the draft policy statement would further weaken the United States’ comparative advantage in IP-dependent technological innovation. 

Conclusion

In short, weakening patent protection could have detrimental ramifications that are routinely overlooked by policymakers. These include increasing inventors’ incentives to vertically integrate rather than develop innovations collaboratively; reducing startup activity (especially when combined with antitrust enforcers’ newfound proclivity to challenge startup acquisitions); and eroding America’s global technology leadership, particularly with respect to China.

For these reasons (and others), the text of the draft policy statement should be reconsidered and either revised substantially to better reflect these concerns or withdrawn entirely. 

The signatories to the comments are:

Alden F. Abbott, Senior Research Fellow, Mercatus Center, George Mason University; Former General Counsel, U.S. Federal Trade Commission
Jonathan Barnett, Torrey H. Webb Professor of Law, University of Southern California
Ronald A. Cass, Dean Emeritus, School of Law, Boston University; Former Commissioner and Vice-Chairman, U.S. International Trade Commission
Giuseppe Colangelo, Jean Monnet Chair in European Innovation Policy and Associate Professor of Competition Law & Economics, University of Basilicata and LUISS (Italy)
Richard A. Epstein, Laurence A. Tisch Professor of Law, New York University
Bowman Heiden, Executive Director, Tusher Initiative at the Haas School of Business, University of California, Berkeley
Justin (Gus) Hurwitz, Professor of Law, University of Nebraska
Thomas A. Lambert, Wall Chair in Corporate Law and Governance, University of Missouri
Stan J. Liebowitz, Ashbel Smith Professor of Economics, University of Texas at Dallas
John E. Lopatka, A. Robert Noll Distinguished Professor of Law, Penn State University
Keith Mallinson, Founder and Managing Partner, WiseHarbor
Geoffrey A. Manne, President and Founder, International Center for Law & Economics
Adam Mossoff, Professor of Law, George Mason University
Kristen Osenga, Austin E. Owen Research Scholar and Professor of Law, University of Richmond
Vernon L. Smith, George L. Argyros Endowed Chair in Finance and Economics, Chapman University; Nobel Laureate in Economics (2002)
Daniel F. Spulber, Elinor Hobbs Distinguished Professor of International Business, Northwestern University
David J. Teece, Thomas W. Tusher Professor in Global Business, University of California, Berkeley
Joshua D. Wright, University Professor of Law, George Mason University; Former Commissioner, U.S. Federal Trade Commission
John M. Yun, Associate Professor of Law, George Mason University; Former Acting Deputy Assistant Director, Bureau of Economics, U.S. Federal Trade Commission

There has been a wave of legislative proposals on both sides of the Atlantic that purport to improve consumer choice and the competitiveness of digital markets. In a new working paper published by the Stanford-Vienna Transatlantic Technology Law Forum, I analyzed five such bills: the EU Digital Services Act, the EU Digital Markets Act, and U.S. bills sponsored by Rep. David Cicilline (D-R.I.), Rep. Mary Gay Scanlon (D-Pa.), Sen. Amy Klobuchar (D-Minn.), and Sen. Richard Blumenthal (D-Conn.). I concluded that all those bills would have negative and unaddressed consequences in terms of information privacy and security.

In this post, I present the main points from the working paper regarding two regulatory solutions: (1) mandating interoperability and (2) mandating device neutrality (which opens up the possibility of sideloading applications, a special case of interoperability). The full working paper also covers the risks of compulsory data access (by vetted researchers or by authorities).

Interoperability

Interoperability is increasingly presented as a potential solution to some of the alleged problems associated with digital services and with large online platforms, in particular (see, e.g., here and here). For example, interoperability might allow third-party developers to offer different “flavors” of social-media newsfeeds, with varying approaches to content ranking and moderation. This way, it might matter less than it does now what content moderation decisions Facebook or other platforms make. Facebook users could choose alternative content moderators, delivering the kind of news feed that those users expect.

The concept of interoperability is popular not only among thought leaders, but also among legislators. The DMA, as well as the U.S. bills by Rep. Scanlon, Rep. Cicilline, and Sen. Klobuchar, all include interoperability mandates.

At the most basic level, interoperability means a capacity to exchange information between computer systems. Email is an example of an interoperable standard that most of us use today. It is telling that supporters of interoperability mandates use services like email as their model examples. Email (more precisely, the SMTP protocol) was originally designed in a notoriously insecure way. It is a perfect example of the opposite of privacy by design. A good analogy for the levels of privacy and security provided by email, as originally conceived, is that of a postcard message sent without an envelope that passes through many hands before reaching the addressee. Even today, email continues to be a source of security concerns, due to its prioritization of interoperability (see, e.g., here).
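A minimal sketch of the “postcard” point, using Python’s standard smtplib (the host name is a placeholder, not a real relay): absent a STARTTLS upgrade, every SMTP command and the message body cross the wire as readable text, visible to each relay along the path.

```python
import smtplib

# Placeholder host; substitute a real relay to actually run this.
# set_debuglevel(1) echoes the raw SMTP dialogue, which -- without
# STARTTLS -- travels as plain, readable text: a postcard, no envelope.
with smtplib.SMTP("mail.example.invalid", 25) as conn:
    conn.set_debuglevel(1)
    conn.sendmail(
        "alice@example.com",
        ["bob@example.com"],
        "Subject: hello\r\n\r\nAnyone along the route can read this.",
    )
```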

To provide alternative interfaces or moderation services for social-media platforms using currently available technology, third-party developers would need to be able to access much of the platform content that is potentially available to a user. This would include not just content produced by users who explicitly agree to share their data with third parties, but also content—e.g., posts, comments, likes—created by others who may have strong objections to such sharing. It does not require much imagination to see how, without adequate safeguards, mandating this kind of information exchange would inevitably result in something akin to the 2018 Cambridge Analytica data scandal.
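A hypothetical toy model of that exposure problem (all names invented; no real platform API is referenced): exporting one consenting user’s “view” of a platform necessarily exports content authored by non-consenting users.

```python
from dataclasses import dataclass

# Toy sketch: one consent fans out to the whole social graph.
@dataclass(frozen=True)
class Item:
    author: str
    kind: str  # "post", "comment", or "like"

# Everything visible to the consenting user, including friends' content.
alices_view = [
    Item("alice", "post"),
    Item("bob", "comment"),   # bob never agreed to third-party sharing
    Item("carol", "like"),    # neither did carol
]

def export_view(view):
    """What a third-party interface or moderator receives under a mandate."""
    return list(view)

exposed = {i.author for i in export_view(alices_view)} - {"alice"}
print(sorted(exposed))  # ['bob', 'carol'] -- data on non-consenting users
```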

Several constraints must be in place for an interoperability framework to safeguard privacy and security effectively.

First, solutions should be targeted toward real users of digital services, without assuming away some common but inconvenient characteristics. In particular, solutions should not assume unrealistic levels of user interest and technical acumen.

Second, solutions must address the issue of effective enforcement. Even the best information privacy and security laws do not, in and of themselves, solve any problems. Such rules must be followed, which requires addressing the problems of procedure and enforcement. In both the EU and the United States, the current framework and practice of privacy law enforcement offers little confidence that misuses of broadly construed interoperability would be detected and prosecuted, much less that they would be prevented. This is especially true for smaller and “judgment-proof” rulebreakers, including those from foreign jurisdictions.

If the service providers are placed under a broad interoperability mandate with non-discrimination provisions (preventing effective vetting of third parties, unilateral denials of access, and so on), then the burden placed on law enforcement will be mammoth. Just one bad actor, perhaps working from Russia or North Korea, could cause immense damage by taking advantage of interoperability mandates to exfiltrate user data or to execute a hacking (e.g., phishing) campaign. Of course, such foreign bad actors would be in violation of the EU GDPR, but that is unlikely to have any practical significance.

It would not be sufficient to allow (or require) service providers to enforce merely technical filters, such as a requirement to check whether an interoperating third party’s IP address comes from a jurisdiction with sufficient privacy protections. Working around such technical limitations does not pose a significant difficulty to motivated bad actors.
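A naive sketch of that kind of purely technical vetting (ranges and addresses are RFC 5737 documentation placeholders), showing why it offers little protection:

```python
import ipaddress

# Approve requests only from IP ranges attributed to an "adequate"
# jurisdiction (placeholder range).
APPROVED_RANGES = [ipaddress.ip_network("203.0.113.0/24")]

def passes_geo_check(addr: str) -> bool:
    """Admit an interoperating party only if its IP is in an approved range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in APPROVED_RANGES)

print(passes_geo_check("198.51.100.7"))   # False: direct request refused
# ...but the same actor, routed through a VPN or a rented server inside
# an approved range, sails through:
print(passes_geo_check("203.0.113.42"))   # True
```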

Article 6(1) of the original DMA proposal included some general interoperability provisions applicable to “gatekeepers”—i.e., the largest online platforms. Those interoperability mandates were somewhat limited, applying only to “ancillary services” (e.g., payment or identification services) or requiring only one-way data portability. However, even here, there may be some risks. For example, users may choose poorly secured identification services and thus become victims of attacks. Therefore, it is important that gatekeepers not be prevented from protecting their users adequately.

The drafts of the DMA adopted by the European Council and by the European Parliament attempt to address that, but they only allow gatekeepers to do what is “strictly necessary” (Council) or “indispensable” (Parliament). This standard may be too high and could push gatekeepers to offer lower security to avoid liability for adopting measures that would be judged by EU institutions and the courts as going beyond what is strictly necessary or indispensable.

The more recent DMA proposal from the European Parliament goes significantly beyond the original proposal, mandating full interoperability of “number-independent interpersonal communication services” and of social-networking services. The Parliament’s proposals are good examples of overly broad and irresponsible interoperability mandates. They would cover “any providers” wanting to interconnect with gatekeepers, without adequate vetting. The safeguard proviso mentioning “high level of security and personal data protection” does not come close to addressing the seriousness of the risks created by the mandate. Instead of facing up to the risks and ensuring that the mandate itself be limited in ways that minimize them, the proposal seems just to expect that the gatekeepers can solve the problems if they only “nerd harder.”

All U.S. bills considered here introduce some interoperability mandates and none of them do so in a way that would effectively safeguard information privacy and security. For example, Rep. Cicilline’s American Choice and Innovation Online Act (ACIOA) would make it unlawful (in Section 2(b)(1)) to:

restrict or impede the capacity of a business user to access or interoperate with the same platform, operating system, hardware and software features that are available to the covered platform operator’s own products, services, or lines of business.

The language of the prohibition in Sen. Klobuchar’s American Innovation and Choice Online Act (AICOA) is similar (also in Section 2(b)(1)). Both ACIOA and AICOA allow for affirmative defenses that a service provider could use if sued under the statute. While those defenses mention privacy and security, they are narrow (“narrowly tailored, could not be achieved through a less discriminatory means, was nonpretextual, and was necessary”) and would not prevent service providers from incurring significant litigation costs. Hence, just like the provisions of the DMA, they would heavily incentivize covered service providers not to adopt the most effective protections of privacy and security.

Device Neutrality (Sideloading)

Article 6(1)(c) of the DMA contains specific provisions about “sideloading”—i.e., allowing installation of third-party software through app stores other than the one provided by the device manufacturer (e.g., Apple’s App Store for iOS devices). A similar express provision for sideloading is included in Sen. Blumenthal’s Open App Markets Act (Section 3(d)(2)). Moreover, the broad interoperability provisions in the other U.S. bills discussed above may also be interpreted to require permitting sideloading.

A sideloading mandate aims to give users more choice. It can only achieve this, however, by taking away the option of choosing a device with a “walled garden” approach to privacy and security (such as is taken by Apple with iOS). Doing so will effectively force users to rely on whatever alternative app stores particular app developers prefer. App developers would have a strong incentive to set up their own app stores or to move their apps to the stores with the least friction (for developers, not users), which would also mean the least privacy and security scrutiny.

This is not to say that Apple’s app scrutiny is perfect, but it is reasonable for an ordinary user to prefer Apple’s approach because it provides greater security (see, e.g., here and here). Thus, a legislative choice to override the revealed preference of millions of users for a “walled garden” approach should not be made lightly. 

Privacy and security safeguards in the DMA’s sideloading provisions, as amended by the European Council and by the European Parliament, as well as in Sen. Blumenthal’s Open App Markets Act, share the same problem of narrowness as the safeguards discussed above.

There is a more general privacy and security issue here, however, that those safeguards cannot address. The proposed sideloading mandate would outright prohibit a privacy- and security-protection model that many users rationally choose today. Even with broader exemptions, this loss would be genuine. It is unclear whether taking away this choice from users is justified.

Conclusion

All the U.S. and EU legislative proposals considered here betray a policy preference for privileging uncertain and speculative competition gains at the expense of introducing a new and clear danger to information privacy and security. The proponents of these (or even stronger) legislative interventions seem much more concerned, for example, that privacy safeguards are “not abused by Apple and Google to protect their respective app store monopoly in the guise of user security” (source).

Given the problems with ensuring effective enforcement of privacy protections (especially with respect to actors coming from outside the EU, the United States, and other broadly privacy-respecting jurisdictions), the lip service paid by the legislative proposals to privacy and security is not much more than that. Policymakers should be expected to offer a much more detailed vision of concrete safeguards and mechanisms of enforcement when proposing rules that come with significant and entirely predictable privacy and security risks. Such vision is lacking on both sides of the Atlantic.

I do not want to suggest that interoperability is undesirable. The paper’s argument is focused on legally mandated interoperability. Firms experiment with interoperability all the time—the prevalence of open APIs on the Internet is testament to this. My aim, however, is to highlight that interoperability is complex and exposes firms and their users to potentially large-scale cyber vulnerabilities.

Generalized obligations on firms to open their data, or to create service interoperability, can short-circuit the private ordering processes that seek out those forms of interoperability and sharing that pass a cost-benefit test. The result will likely be both overinclusive and underinclusive. It would be overinclusive to require all firms in the regulated class to broadly open their services and data to all interested parties, even where doing so wouldn’t make sense for privacy, security, or other efficiency reasons. It would be underinclusive in that the broad mandate will necessarily sap regulated firms’ resources and deter them from looking for new innovative uses that might make sense but fall outside the mandate. Thus, the likely result is less security and privacy, more expense, and less innovation.

The Senate Judiciary Committee is set to debate S. 2992, the American Innovation and Choice Online Act (or AICOA) during a markup session Thursday. If passed into law, the bill would force online platforms to treat rivals’ services as they would their own, while ensuring their platforms interoperate seamlessly.

The bill marks the culmination of misguided efforts to bring Big Tech to heel, regardless of the negative costs imposed upon consumers in the process. ICLE scholars have written about these developments in detail since the bill was introduced in October.

Below are 10 significant misconceptions that underpin the legislation.

1. There Is No Evidence that Self-Preferencing Is Generally Harmful

Self-preferencing is a normal part of how platforms operate, both to improve the value of their core products and to earn returns so that they have reason to continue investing in their development.

Platforms’ incentives are to maximize the value of their entire product ecosystem, which includes both the core platform and the services attached to it. Platforms that preference their own products frequently end up increasing the total market’s value by growing the share of users of a particular product. Those that preference inferior products end up hurting their attractiveness to users of their “core” product, exposing themselves to competition from rivals.

As Geoff Manne concludes, the notion that it is harmful (notably to innovation) when platforms enter into competition with edge providers is entirely speculative. Indeed, a range of studies show that the opposite is likely true. Platform competition is more complicated than simple theories of vertical discrimination would have it, and there is certainly no basis for a presumption of harm.

Consider a few examples from the empirical literature:

  1. Li and Agarwal (2017) find that Facebook’s integration of Instagram led to a significant increase in user demand both for Instagram itself and for the entire category of photography apps. Instagram’s integration with Facebook increased consumer awareness of photography apps, which benefited independent developers, as well as Facebook.
  2. Foerderer, et al. (2018) find that Google’s 2015 entry into the market for photography apps on Android created additional user attention and demand for such apps generally.
  3. Cennamo, et al. (2018) find that video games offered by console firms often become blockbusters and expand the consoles’ installed base. As a result, these games increase the potential for all independent game developers to profit from their games, even in the face of competition from first-party games.
  4. Finally, while Zhu and Liu (2018) is often held up as demonstrating harm from Amazon’s competition with third-party sellers on its platform, its findings are actually far from clear-cut. As co-author Feng Zhu noted in the Journal of Economics & Management Strategy: “[I]f Amazon’s entries attract more consumers, the expanded customer base could incentivize more third-party sellers to join the platform. As a result, the long-term effects for consumers of Amazon’s entry are not clear.”

2. Interoperability Is Not Costless

There are many things that could be interoperable, but aren’t. The reason not everything is interoperable is because interoperability comes with costs, as well as benefits. It may be worth letting different earbuds have different designs because, while it means we sacrifice easy interoperability, we gain the ability for better designs to be brought to market and for consumers to have choice among different kinds.

As Sam Bowman has observed, the costs that often make interoperability not worth the tradeoff include the following:

  1. It might be too costly to implement and/or maintain.
  2. It might prescribe a certain product design and prevent experimentation and innovation.
  3. It might add too much complexity and/or confusion for users, who may prefer not to have certain choices.
  4. It might increase the risk of something not working, or of security breaches.
  5. It might prevent certain pricing models that increase output.
  6. It might compromise some element of the product or service that benefits specifically from not being interoperable.

In a market that is functioning reasonably well, we should be able to assume that competition and consumer choice will discover the desirable degree of interoperability among different products. If there are benefits to making your product interoperable that outweigh the costs of doing so, that should give you an advantage over competitors and allow you to compete them away. If the costs outweigh the benefits, the opposite will happen: consumers will choose products that are not interoperable.

In short, we cannot infer from the mere absence of interoperability that something is wrong, since we frequently observe that the costs of interoperability outweigh the benefits.

3. Consumers Often Prefer Closed Ecosystems

Digital markets could have taken a vast number of shapes. So why have they gravitated toward the very characteristics that authorities condemn? For instance, if market tipping and consumer lock-in are so problematic, why is it that new corners of the digital economy continue to emerge via closed platforms, as opposed to collaborative ones?

Indeed, if recent commentary is to be believed, it is the latter that should succeed, because they purportedly produce greater gains from trade. And if consumers and platforms cannot realize these gains by themselves, then we should see intermediaries step into that breach. But this does not seem to be happening in the digital economy.

The naïve answer is to say that the absence of “open” systems is precisely the problem. What’s harder is to try to actually understand why. As I have written, there are many reasons that consumers might prefer “closed” systems, even when they have to pay a premium for them.

Take the example of app stores. Maintaining some control over the apps that can access the store notably enables platforms to easily weed out bad players. Similarly, controlling the hardware resources that each app can use may greatly improve device performance. In other words, centralized platforms can eliminate negative externalities that “bad” apps impose on rival apps and on consumers. This is especially true when consumers struggle to attribute dips in performance to an individual app, rather than the overall platform.

It is also conceivable that consumers prefer to make many of their decisions at the inter-platform level, rather than within each platform. In simple terms, users arguably make their most important decision when they choose between an Apple or Android smartphone (or a Mac and a PC, etc.). In doing so, they can select their preferred app suite with one simple decision.

They might thus purchase an iPhone because they like the secure App Store, or an Android smartphone because they like the Chrome Browser and Google Search. Forcing too many “within-platform” choices upon users may undermine a product’s attractiveness. Indeed, it is difficult to create a high-quality reputation if each user’s experience is fundamentally different. In short, contrary to what antitrust authorities seem to believe, closed platforms might be giving most users exactly what they desire.

Too often, it is simply assumed that consumers benefit from more openness, and that shared/open platforms are the natural order of things. What some refer to as “market failures” may in fact be features that explain the rapid emergence of the digital economy. Ronald Coase said it best when he quipped that economists always find a monopoly explanation for things that they simply fail to understand.

4. Data Portability Can Undermine Security and Privacy

As explained above, platforms that are more tightly controlled can be regulated by the platform owner to avoid some of the risks present in more open platforms. Apple’s App Store, for example, is a relatively closed and curated platform, which gives users assurance that apps will meet a certain standard of security and trustworthiness.

Along similar lines, there are privacy issues that arise from data portability. Even a relatively simple requirement to make photos available for download can implicate third-party interests. Making a user’s photos more broadly available may tread upon the privacy interests of friends whose faces appear in those photos. Importing those photos to a new service potentially subjects those individuals to increased and un-bargained-for security risks.

As Sam Bowman and Geoff Manne observe, this is exactly what happened with Facebook and its Social Graph API v1.0, ultimately culminating in the Cambridge Analytica scandal. Because v1.0 of Facebook’s Social Graph API permitted developers to access information about a user’s friends without consent, it enabled third-party access to data about exponentially more users. It appears that some 270,000 users granted data access to Cambridge Analytica, from which the company was able to obtain information on 50 million Facebook users.
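A rough consistency check on those figures (my arithmetic): one consent fanned out, on average, to data on roughly

\[
\frac{50{,}000{,}000\ \text{affected users}}{270{,}000\ \text{consenting users}} \;\approx\; 185\ \text{people per consent},
\]

which is about the size of a typical friend list, and exactly the fan-out mechanism the toy sketch above illustrates.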

In short, there is often no simple solution to implement interoperability and data portability. Any such program—whether legally mandated or voluntarily adopted—will need to grapple with these and other tradeoffs.

5. Network Effects Are Rarely Insurmountable

Several scholars in recent years have called for more muscular antitrust intervention in networked industries on grounds that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in and raise entry barriers for potential rivals (see here, here, and here). But there are countless counterexamples where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.

Zoom is one of the most salient instances. As I wrote in April 2019 (a year before the COVID-19 pandemic):

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.

Geoff Manne and Alec Stapp have put forward a multitude of other examples, including: the demise of Yahoo; the disruption of early instant-messaging applications and websites; and MySpace’s rapid decline. In all of these cases, outcomes did not match the predictions of theoretical models.

More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and powerful algorithm are the most likely explanations for its success.

While these developments certainly do not disprove network-effects theory, they eviscerate the belief, common in antitrust circles, that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. The question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet, this question is systematically omitted from most policy discussions.

6. Profits Facilitate New and Exciting Platforms

As I wrote in August 2020, the relatively closed model employed by several successful platforms (notably Apple’s App Store, Google’s Play Store, and the Amazon Retail Platform) allows previously unknown developers/retailers to expand rapidly because (i) users do not have to fear that their apps contain some form of malware, and (ii) the platforms greatly reduce payment frictions, most notably security-related ones.

While these are, indeed, tremendous benefits, another important upside seems to have gone relatively unnoticed. The “closed” business model also gives firms significant incentives to develop new distribution mediums (smart TVs spring to mind) and to improve existing ones. In turn, this greatly expands the audience that software developers can reach. In short, developers get a smaller share of a much larger pie.

The economics of two-sided markets are enlightening here. For example, Apple and Google’s app stores are what Armstrong and Wright (here and here) refer to as “competitive bottlenecks.” That is, they compete aggressively (among themselves, and with other gaming platforms) to attract exclusive users. They can then charge developers a premium to access those users.
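A stylized way to see the logic (my notation; a drastic simplification of Armstrong and Wright’s models): if competition for single-homing users drives per-user platform profit toward zero, developer-side rents get passed back to users as subsidies,

\[
\pi \;=\; \underbrace{(p_u - c_u)}_{\text{user-side margin}} \;+\; \underbrace{r_d}_{\text{developer rent per user}} \;\longrightarrow\; 0
\quad\Longrightarrow\quad
p_u \;\approx\; c_u - r_d .
\]

This is why the premium charged to developers coexists with aggressive courting of users: the larger the developer-side rent, the harder platforms compete on the user side.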

This dynamic gives firms significant incentive to continue to attract and retain new users. For instance, if Steve Jobs is to be believed, giving consumers better access to media such as eBooks, video, and games was one of the driving forces behind the launch of the iPad.

This model of innovation would be seriously undermined if developers and consumers could easily bypass platforms, as would likely be the case under the American Innovation and Choice Online Act.

7. Large Market Share Does Not Mean Anticompetitive Outcomes

Scholars routinely cite the putatively strong concentration of digital markets to argue that Big Tech firms do not face strong competition. But this is a non sequitur. Indeed, as economists like Joseph Bertrand and William Baumol have shown, what matters is not whether markets are concentrated, but whether they are contestable. If a superior rival could rapidly gain user traction, that alone will discipline incumbents’ behavior.

Markets where incumbents do not face significant entry from competitors are just as consistent with vigorous competition as they are with barriers to entry. Rivals could decline to enter either because incumbents have aggressively improved their product offerings or because they are shielded by barriers to entry (as critics suppose). The former is consistent with competition, the latter with monopoly slack.

Similarly, it would be wrong to presume, as many do, that concentration in online markets is necessarily driven by network effects and other scale-related economies. As ICLE scholars have argued elsewhere (here, here and here), these forces are not nearly as decisive as critics assume (and it is debatable that they constitute barriers to entry).

Finally, and perhaps most importantly, many factors could explain the relatively concentrated market structures that we see in digital industries. The absence of switching costs and of capacity constraints are two such examples. These explanations, overlooked by many observers, suggest digital markets are more contestable than is commonly perceived.

Unfortunately, critics’ failure to meaningfully grapple with these issues serves to shape the “conventional wisdom” in tech-policy debates.

8. Vertical Integration Generally Benefits Consumers

Vertical behavior of digital firms—whether through mergers or through contract and unilateral action—frequently arouses the ire of critics of the current antitrust regime. Many such critics point to a few recent studies that cast doubt on the ubiquity of benefits from vertical integration. But the findings of those few studies are regularly overstated and, even taken at face value, represent just a minuscule fraction of the collected evidence, which overwhelmingly supports vertical integration.

There is strong and longstanding empirical evidence that vertical integration is competitively benign. This includes widely acclaimed work by economists Francine Lafontaine (former director of the Federal Trade Commission’s Bureau of Economics under President Barack Obama) and Margaret Slade, whose meta-analysis led them to conclude:

[U]nder most circumstances, profit-maximizing vertical integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view. Although there are isolated studies that contradict this claim, the vast majority support it. Moreover, even in industries that are highly concentrated so that horizontal considerations assume substantial importance, the net effect of vertical integration appears to be positive in many instances. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked.

In short, there is a substantial body of both empirical and theoretical research showing that vertical integration (and the potential vertical discrimination and exclusion to which it might give rise) is generally beneficial to consumers. While it is possible that vertical mergers or discrimination could sometimes cause harm, the onus is on the critics to demonstrate empirically where this occurs. No legitimate interpretation of the available literature would offer a basis for imposing a presumption against such behavior.

9. There Is No Such Thing as Data Network Effects

Although data does not have the self-reinforcing characteristics of network effects, there is a sense that acquiring a certain amount of data and expertise is necessary to compete in data-heavy industries. It is (or should be) equally apparent, however, that this “learning by doing” advantage rapidly reaches a point of diminishing returns.
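Stated formally (my notation), “diminishing returns” here means only that the value V(n) of a dataset of size n is increasing but concave, with marginal value that eventually flattens out:

\[
V'(n) > 0, \qquad V''(n) < 0, \qquad \lim_{n \to \infty} V'(n) = 0 .
\]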

This is supported by significant empirical evidence. As shown by the survey of the empirical literature that Geoff Manne and I performed (published in the George Mason Law Review), data generally entails diminishing marginal returns:

Critics who argue that firms such as Amazon, Google, and Facebook are successful because of their superior access to data might, in fact, have the causality in reverse. Arguably, it is because these firms have come up with successful industry-defining paradigms that they have amassed so much data, and not the other way around. Indeed, Facebook managed to build a highly successful platform despite a large data disadvantage when compared to rivals like MySpace.

Companies need to innovate to attract consumer data or else consumers will switch to competitors, including both new entrants and established incumbents. As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results. The continued explosion of new products, services, and apps is evidence that data is not a bottleneck to competition, but a spur to drive it.

10. Antitrust Enforcement Has Not Been Lax

The popular narrative has it that lax antitrust enforcement has led to substantially increased concentration, strangling the economy, harming workers, and expanding dominant firms’ profit margins at the expense of consumers. Much of the contemporary dissatisfaction with antitrust arises from a suspicion that overly lax enforcement of existing laws has led to record levels of concentration and a concomitant decline in competition. But both beliefs—lax enforcement and increased anticompetitive concentration—wither under more than cursory scrutiny.

As Geoff Manne observed in his April 2020 testimony to the House Judiciary Committee:

The number of Sherman Act cases brought by the federal antitrust agencies, meanwhile, has been relatively stable in recent years, but several recent blockbuster cases have been brought by the agencies and private litigants, and there has been no shortage of federal and state investigations. The vast majority of Section 2 cases dismissed on the basis of the plaintiff’s failure to show anticompetitive effect were brought by private plaintiffs pursuing treble damages; given the incentives to bring weak cases, it cannot be inferred from such outcomes that antitrust law is ineffective. But, in any case, it is highly misleading to count the number of antitrust cases and, using that number alone, to make conclusions about how effective antitrust law is. Firms act in the shadow of the law, and deploy significant legal resources to make sure they avoid activity that would lead to enforcement actions. Thus, any given number of cases brought could be just as consistent with a well-functioning enforcement regime as with an ill-functioning one.

The upshot is that naïvely counting antitrust cases (or the purported lack thereof), with little regard for the behavior that is deterred or the merits of the cases that are dismissed, does not tell us whether antitrust enforcement levels are optimal.


Intermediaries may not be the consumer welfare hero we want, but more often than not, they are one that we need.

In policy discussions about the digital economy, a background assumption that frequently underlies the discourse is that intermediaries and centralization are always and only a cost to consumers, and to society more generally. Thus, one commonly sees arguments that consumers would be better off if they could freely combine products from different trading partners. According to this logic, bundled goods, walled gardens, and other intermediaries are always to be regarded with suspicion, while interoperability, open source, and decentralization are laudable features of any market.

However, as with all economic goods, intermediation offers both costs and benefits. The challenge for market players is to assess these tradeoffs and, ultimately, to produce the optimal level of intermediation.

As one example, some observers assume that purchasing food directly from a producer benefits consumers because intermediaries no longer take a cut of the final purchase price. But this overlooks the tremendous efficiencies supermarkets can achieve in terms of cost savings, reduced carbon emissions (because consumers make fewer store trips), and other benefits that often outweigh the costs of intermediation.

The same anti-intermediary fallacy is plain to see in countless other markets. For instance, critics readily assume that insurance, mortgage, and travel brokers are just costly middlemen.

This unduly negative perception is perhaps even more salient in the digital world. Policymakers are quick to conclude that consumers are always better off when provided with “more choice.” Draft regulations of digital platforms have been introduced on both sides of the Atlantic that repeat this faulty argument ad nauseam, as do some antitrust decisions.

Even the venerable Tyler Cowen recently appeared to sing the praises of decentralization, when discussing the future of Web 3.0:

One person may think “I like the DeFi options at Uniswap,” while another may say, “I am going to use the prediction markets over at Hedgehog.” In this scenario there is relatively little intermediation and heavy competition for consumer attention. Thus most of the gains from competition accrue to the users. …

… I don’t know if people are up to all this work (or is it fun?). But in my view this is the best-case scenario — and the most technologically ambitious. Interestingly, crypto’s radical ability to disintermediate, if extended to its logical conclusion, could bring about a radical equalization of power that would lower the prices and values of the currently well-established crypto assets, companies and platforms.

While disintermediation certainly has its benefits, critics often gloss over its costs. For example, scams are practically nonexistent on Apple’s “centralized” App Store but are far more prevalent with Web3 services. Apple’s “power” to weed out nefarious actors certainly contributes to this difference. Similarly, there is a reason that “middlemen” like supermarkets and travel agents exist in the first place. They notably perform several complex tasks (e.g., searching for products, negotiating prices, and controlling quality) that leave consumers with a manageable selection of goods.

Returning to the crypto example, besides being a renowned scholar, Tyler Cowen is also an extremely savvy investor. What he sees as fun investment choices may be nightmarish (and potentially dangerous) decisions for less sophisticated consumers. The upshot is that intermediaries are far more valuable than they are usually given credit for.

Bringing People Together

The reason intermediaries (including online platforms) exist is to reduce transaction costs that suppliers and customers would face if they tried to do business directly. As Daniel F. Spulber argues convincingly:

Markets have two main modes of organization: decentralized and centralized. In a decentralized market, buyers and sellers match with each other and determine transaction prices. In a centralized market, firms act as intermediaries between buyers and sellers.

[W]hen there are many buyers and sellers, there can be substantial transaction costs associated with communication, search, bargaining, and contracting. Such transaction costs can make it more difficult to achieve cross-market coordination through direct communication. Intermediary firms have various means of reducing transaction costs of decentralized coordination when there are many buyers and sellers.

This echoes the findings of Nobel laureate Ronald Coase, who observed that firms emerge when they offer a cheaper alternative to multiple bilateral transactions:

The main reason why it is profitable to establish a firm would seem to be that there is a cost of using the price mechanism. The most obvious cost of “organising” production through the price mechanism is that of discovering what the relevant prices are. […] The costs of negotiating and concluding a separate contract for each exchange transaction which takes place on a market must also be taken into account.

Economists generally agree that online platforms also serve this cost-reduction function. For instance, David Evans and Richard Schmalensee observe that:

Multi-sided platforms create value by bringing two or more different types of economic agents together and facilitating interactions between them that make all agents better off.

It’s easy to see the implications for today’s competition-policy debates, and for the online intermediaries that many critics would like to see decentralized. Particularly salient examples include app store platforms (such as the Apple App Store and the Google Play Store); online retail platforms (such as Amazon Marketplace); and online travel agents (like Booking.com and Expedia). Competition policymakers have embarked on countless ventures to “open up” these platforms to competition, essentially moving them further toward disintermediation. In most of these cases, however, policymakers appear to be fighting these businesses’ very raison d’être.

For example, the purpose of an app store is to curate the software that users can install and to offer payment solutions; in exchange, the store receives a cut of the proceeds. If performing these tasks created no value, then, to a first approximation, these services would not exist. Users would simply download apps via their web browsers, and the most successful smartphones would be those that allowed users to directly install apps (“sideloading,” to use the more technical term). Forcing these platforms to “open up” and become neutral is antithetical to the value proposition they offer.

Calls for retail and travel platforms to stop offering house brands or displaying certain products more favorably are equally paradoxical. Consumers turn to these platforms because they want a selection of goods. If that were not the case, users could simply bypass the platforms and purchase directly from independent retailers or hotels. Critics sometimes retort that some commercial arrangements, such as “most favored nation” clauses, discourage consumers from doing exactly this. But that claim only reinforces the point that online platforms must create significant value, or they would not be able to obtain such arrangements in the first place.

All of this explains why characterizing these firms as imposing a “tax” on their respective ecosystems is so deeply misleading. The implication is that platforms are merely passive rent extractors that create no value. Yet, barring the existence of market failures, both their existence and their success are proof to the contrary. To argue otherwise places no faith in the ability of firms and consumers to act in their own self-interest.

A Little Evolution

This last point is even more salient when seen from an evolutionary standpoint. Today’s most successful intermediaries—be they online platforms or more traditional brick-and-mortar firms like supermarkets—mostly had to outcompete the alternative represented by disintermediated bilateral contracts.

Critics of intermediaries rarely contemplate why the app-store model outpaced the more heavily disintermediated software distribution of the desktop era. Or why hotel-booking sites exist, despite consumers’ ability to use search engines, hotel websites, and other product-search methods that offer unadulterated product selections. Or why mortgage brokers are so common when borrowers can call local banks directly. The list is endless.

Indeed, as I have argued previously:

Digital markets could have taken a vast number of shapes, so why have they systematically gravitated towards those very characteristics that authorities condemn? For instance, if market tipping and consumer lock-in are so problematic, why is it that new corners of the digital economy continue to emerge via closed platforms, as opposed to collaborative ones? Indeed, if recent commentary is to be believed, it is the latter that should succeed because they purportedly produce greater gains from trade. And if consumers and platforms cannot realize these gains by themselves, then we should see [other] intermediaries step into the breach – i.e. arbitrage. This does not seem to be happening in the digital economy. The naïve answer is to say that this is precisely the problem, the harder one is to actually understand why.

Fiat Versus Emergent Disintermediation

All of this is not to say that intermediaries are perfect, or that centralization always beats decentralization. Instead, the critical point is about the competitive process. There are vast differences between centralization that stems from government fiat and that which emerges organically.

(Dis)intermediation is an economic good. Markets thus play a critical role in deciding how much or how little of it is provided. Intermediaries must charge fees that cover their costs, while bilateral contracts entail transaction costs. In typically Hayekian fashion, suppliers and buyers will weigh the costs and benefits of these options.

Intermediaries are most likely to emerge in markets prone to excessive transaction costs, and competitive processes ensure that only valuable intermediaries survive. Accordingly, there is no guarantee that government-mandated disintermediation would generate net benefits in any given case.
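To make this tradeoff concrete, here is a minimal sketch of the decision problem (the notation is mine, offered purely as an illustration, and is not drawn from any of the works cited):

\[
\text{use the intermediary} \iff f < t_b + t_s
\]

where \(f\) is the intermediary’s fee and \(t_b\) and \(t_s\) are the buyer’s and seller’s respective transaction costs of contracting bilaterally. Competition among intermediaries pushes \(f\) toward the cost of providing the service, so intermediaries should persist only in markets where \(t_b + t_s\) is large, which is precisely the prediction that the evolutionary argument below relies on.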

Of course, the market does not always work perfectly. Sometimes, market failures give rise to excessive (or insufficient) centralization. And policymakers should certainly be attentive to these potential problems and address them on a case-by-case basis. But there is little reason to believe that today’s most successful intermediaries are the result of market failures, and it is thus critical that policymakers do not undermine the valuable role they perform.

For example, few believe that supermarkets exist merely because government failures (such as excessive regulation) or market failures (such as monopolization) prevent the emergence of smaller rivals. Likewise, the app-store model is widely perceived as an improvement over previous software platforms; few consumers appear favorably disposed toward its replacement with sideloading of apps (for example, few Android users choose to sideload apps rather than purchase them via the Google Play Store). In fact, markets appear to be moving in the opposite direction: even traditional software platforms such as Windows OS increasingly rely on closed stores to distribute software on their platforms.

More broadly, this same reasoning can be (and has been) applied to other social institutions, such as the modern family. For example, the late Steven Horwitz observed that family structures have evolved in order to adapt to changing economic circumstances. Crucially, this process is driven by the same cost-benefit tradeoff that we see in markets. In both cases, agents effectively decide which functions are better performed within a given social structure, and which ones are more efficiently completed outside of it.

Returning to Tyler Cowen’s point about the future of Web3, the case can be made that whatever level of centralization ultimately emerges is most likely the best-case scenario. Sure, there may be some market failures and suboptimal outcomes along the way, but they ultimately pale in comparison to the most pervasive force: namely, economic agents’ ability to act in what they perceive to be their best interest. To put it differently, if Web3 spontaneously becomes as centralized as Web 2.0 has been, that would be a testament to the tremendous role that intermediaries play throughout the economy.

Apple’s legal team will be relieved that “you reap what you sow” is just a proverb. After a long-running antitrust battle against Qualcomm unsurprisingly ended in failure, Apple now faces antitrust accusations of its own (most notably from Epic Games). Somewhat paradoxically, this turn of events might cause Apple to see its previous defeat in a new light. Indeed, the well-established antitrust principles that scuppered Apple’s challenge against Qualcomm will now be the rock upon which it builds its legal defense.

But while Apple’s reversal of fortunes might seem anecdotal, it neatly illustrates a fundamental – and often overlooked – principle of antitrust policy: Antitrust law is about maximizing consumer welfare. Accordingly, the allocation of surplus between two companies is only incidentally relevant to antitrust proceedings, and it certainly is not a goal in and of itself. In other words, antitrust law is not about protecting David from Goliath.

Jockeying over the distribution of surplus

Or at least that is the theory. In practice, however, most antitrust cases are but small parts of much wider battles in which corporations use courts and regulators to jockey for market position and/or tilt the distribution of surplus in their favor. The Microsoft competition suits brought by the DOJ (in the United States) and the European Commission (in the EU) partly originated from complaints, and lobbying, by Sun Microsystems, Novell, and Netscape. Likewise, the European Commission’s case against Google was prompted by accusations from Microsoft and Oracle, among others. The European Intel case was initiated following a complaint by AMD. The list goes on.

The last couple of years have witnessed a proliferation of antitrust suits that are emblematic of this type of power tussle. For instance, Apple has been notoriously industrious in using the court system to lower the royalties that it pays to Qualcomm for LTE chips. One of the focal points of Apple’s discontent was Qualcomm’s policy of basing royalties on the end-price of devices (Qualcomm charged iPhone manufacturers a 5% royalty rate on their handset sales – and Apple received further rebates):

“The whole idea of a percentage of the cost of the phone didn’t make sense to us,” [Apple COO Jeff Williams] said. “It struck at our very core of fairness. At the time we were making something really really different.”

This pricing dispute not only gave rise to high-profile court cases, it also led Apple to lobby standards development organizations (“SDOs”) in a partly successful attempt to make them amend their patent policies, so as to prevent this type of pricing.

However, in a highly ironic turn of events, Apple now finds itself on the receiving end of strikingly similar allegations. At issue is the 30% commission that Apple charges on in-app purchases on the iPhone and iPad. These “high” commissions led several companies to lodge complaints with competition authorities (Spotify and Facebook, in the EU) and to file antitrust suits against Apple (Epic Games, in the US).

Of course, these complaints are couched in more sophisticated, and antitrust-relevant, reasoning. But that doesn’t alter the fact that these disputes are ultimately driven by firms trying to tilt the allocation of surplus in their favor (for a more detailed explanation, see Apple and Qualcomm).

Pushback from courts: The Qualcomm case

Against this backdrop, a string of recent cases sends a clear message to would-be plaintiffs: antitrust courts will not be drawn into rent allocation disputes that have no bearing on consumer welfare. 

The best example of this judicial trend is Qualcomm’s victory before the U.S. Court of Appeals for the 9th Circuit. The case centered on the royalties that Qualcomm charged to OEMs for its standard-essential patents (SEPs). Both the district court and the FTC found that Qualcomm had deployed a series of tactics (rebates, refusals to deal, etc.) that enabled it to circumvent its FRAND pledges.

However, the appeals court was not convinced. It found neither consumer harm nor any cognizable antitrust infringement. Instead, it held that the dispute at hand was essentially a matter of contract law:

To the extent Qualcomm has breached any of its FRAND commitments, a conclusion we need not and do not reach, the remedy for such a breach lies in contract and patent law. 

This is not surprising. From the outset, numerous critics pointed out that the case lay well beyond the narrow confines of antitrust law. The scathing dissenting statement written by Commissioner Maureen Ohlhausen is revealing:

[I]n the Commission’s 2-1 decision to sue Qualcomm, I face an extraordinary situation: an enforcement action based on a flawed legal theory (including a standalone Section 5 count) that lacks economic and evidentiary support, that was brought on the eve of a new presidential administration, and that, by its mere issuance, will undermine U.S. intellectual property rights in Asia and worldwide. These extreme circumstances compel me to voice my objections. 

In reaching its conclusion, the Court notably rejected the notion that SEP royalties should be systematically based upon the “Smallest Saleable Patent Practicing Unit” (or SSPPU):

Even if we accept that the modem chip in a cellphone is the cellphone’s SSPPU, the district court’s analysis is still fundamentally flawed. No court has held that the SSPPU concept is a per se rule for “reasonable royalty” calculations; instead, the concept is used as a tool in jury cases to minimize potential jury confusion when the jury is weighing complex expert testimony about patent damages.
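To see why the royalty base, taken alone, is economically arbitrary, consider a stylized example (the numbers are hypothetical and are not drawn from the litigation):

\[
\underbrace{5\% \times \$800}_{\text{rate on the handset}} = \$40 = \underbrace{200\% \times \$20}_{\text{rate on the chip (SSPPU)}}
\]

Any royalty quoted on the end-device price can be restated as an equivalent (much higher) rate on the component price, and vice versa. What matters economically is the resulting dollar payment, not the base on which it is computed, which is consistent with the court’s refusal to treat the SSPPU as a per se rule.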

Similarly, it saw no objection to Qualcomm licensing its technology at the OEM level (rather than the component level):

Qualcomm’s rationale for “switching” to OEM-level licensing was not “to sacrifice short-term benefits in order to obtain higher profits in the long run from the exclusion of competition,” the second element of the Aspen Skiing exception. Aerotec Int’l, 836 F.3d at 1184 (internal quotation marks and citation omitted). Instead, Qualcomm responded to the change in patent-exhaustion law by choosing the path that was “far more lucrative,” both in the short term and the long term, regardless of any impacts on competition. 

Finally, the Court concluded that a firm breaching its FRAND pledges did not automatically amount to anticompetitive conduct: 

We decline to adopt a theory of antitrust liability that would presume anticompetitive conduct any time a company could not prove that the “fair value” of its SEP portfolios corresponds to the prices the market appears willing to pay for those SEPs in the form of licensing royalty rates.

Taken together, these findings paint a very clear picture. The Qualcomm Court repeatedly rejected the radical idea that US antitrust law should concern itself with the prices charged by monopolists — as opposed to practices that allow firms to illegally acquire or maintain a monopoly position. The words of Learned Hand and those of Antonin Scalia (respectively, below) loom large:

The successful competitor, having been urged to compete, must not be turned upon when he wins. 

And,

To safeguard the incentive to innovate, the possession of monopoly power will not be found unlawful unless it is accompanied by an element of anticompetitive conduct.

Other courts (both in the US and abroad) have reached similar conclusions

For instance, a district court in Texas dismissed a suit brought by Continental Automotive Systems (which supplies electronic systems to the automotive industry) against a group of SEP holders. 

Continental challenged the patent holders’ decision to license their technology at the vehicle rather than the component level (an allegation very similar to the FTC’s complaint that Qualcomm licensed its SEPs at the OEM, rather than the chipset, level). However, following a forceful intervention by the DOJ, the court ultimately held that the facts alleged by Continental were not indicative of antitrust injury. It thus dismissed the case.

Likewise, within weeks of the Qualcomm and Continental decisions, the UK Supreme Court also ruled in favor of SEP holders. In its Unwired Planet ruling, the Court concluded that discriminatory licenses did not automatically infringe competition law (even though they might breach a firm’s contractual obligations):

[I]t cannot be said that there is any general presumption that differential pricing for licensees is problematic in terms of the public or private interests at stake.

In reaching this conclusion, the UK Supreme Court emphasized that the determination of whether licenses were FRAND, or not, was first and foremost a matter of contract law. In the case at hand, the most important guide to making this determination was the internal rules of the relevant SDO (as opposed to competition case law):

Since price discrimination is the norm as a matter of licensing practice and may promote objectives which the ETSI regime is intended to promote (such as innovation and consumer welfare), it would have required far clearer language in the ETSI FRAND undertaking to indicate an intention to impose the more strict, “hard-edged” non-discrimination obligation for which Huawei contends. Further, in view of the prevalence of competition laws in the major economies around the world, it is to be expected that any anti-competitive effects from differential pricing would be most appropriately addressed by those laws

All of this ultimately led the Court to rule in favor of Unwired Planet, thus dismissing Huawei’s claims that it had infringed competition law by breaching its FRAND pledges. 

In short, courts and antitrust authorities on both sides of the Atlantic have repeatedly, and unambiguously, concluded that pricing disputes (albeit in the specific context of technological standards) are generally a matter of contract law. Antitrust/competition law intercedes only when unfair/excessive/discriminatory prices are both caused by anticompetitive behavior and result in anticompetitive injury.

Apple’s Loss Is… Apple’s Gain

Readers might wonder how the above cases relate to Apple’s App Store. But, on closer inspection, the parallels are numerous. As explained above, courts have repeatedly stressed that antitrust enforcement should not concern itself with the allocation of surplus between commercial partners. Yet that is precisely what Epic Games’ suit against Apple is all about.

Indeed, Epic’s central claim is not that it is somehow foreclosed from Apple’s App Store (for example, because Apple might have agreed to exclusively distribute the games of one of Epic’s rivals). Instead, all of its objections are down to the fact that it would like to access Apple’s store under more favorable terms:

Apple’s conduct denies developers the choice of how best to distribute their apps. Developers are barred from reaching over one billion iOS users unless they go through Apple’s App Store, and on Apple’s terms. […]

Thus, developers are dependent on Apple’s noblesse oblige, as Apple may deny access to the App Store, change the terms of access, or alter the tax it imposes on developers, all in its sole discretion and on the commercially devastating threat of the developer losing access to the entire iOS userbase. […]

By imposing its 30% tax, Apple necessarily forces developers to suffer lower profits, reduce the quantity or quality of their apps, raise prices to consumers, or some combination of the three.

And the parallels with the Qualcomm litigation do not stop there. Epic is effectively asking courts to make Apple monetize its platform at a different level than the one it chose to maximize its profits (i.e., to stop monetizing at the app-store level). Similarly, Epic Games omits any suggestion of profit sacrifice on the part of Apple — even though profit sacrifice is a critical element of most unilateral-conduct theories of harm. Finally, Epic is challenging conduct that is both the industry norm and that emerged in a highly competitive setting.

In short, all of Epic’s allegations are about monopoly prices, not monopoly maintenance or monopolization. Accordingly, just as the SEP cases discussed above were plainly beyond the outer bounds of antitrust enforcement (something that the DOJ repeatedly stressed with regard to the Qualcomm case), so too is the current wave of antitrust litigation against Apple. When all is said and done, Apple might thus be relieved that Qualcomm was victorious in their antitrust confrontation. Indeed, the legal principles that caused its demise against Qualcomm are precisely the ones that will, likely, enable it to prevail against Epic Games.

Jonathan B. Baker, Nancy L. Rose, Steven C. Salop, and Fiona Scott Morton don’t like vertical mergers:

Vertical mergers can harm competition, for example, through input foreclosure or customer foreclosure, or by the creation of two-level entry barriers.  … Competitive harms from foreclosure can occur from the merged firm exercising its increased bargaining leverage to raise rivals’ costs or reduce rivals’ access to the market. Vertical mergers also can facilitate coordination by eliminating a disruptive or “maverick” competitor at one vertical level, or through information exchange. Vertical mergers also can eliminate potential competition between the merging parties. Regulated firms can use vertical integration to evade rate regulation. These competitive harms normally occur when at least one of the markets has an oligopoly structure. They can lead to higher prices, lower output, quality reductions, and reduced investment and innovation.

Baker et al. go so far as to argue that any vertical merger in which the downstream firm is subject to price regulation should face a presumption that the merger is anticompetitive.

George Stigler’s well-known article on vertical integration identifies several ways in which vertical integration increases welfare by subverting price controls:

The most important of these other forces, I believe, is the failure of the price system (because of monopoly or public regulation) to clear markets at prices within the limits of the marginal cost of the product (to the buyer if he makes it) and its marginal-value product (to the seller if he further fabricates it). This phenomenon was strikingly illustrated by the spate of vertical mergers in the United States during and immediately after World War II, to circumvent public and private price control and allocations. A regulated price of OA was set (Fig. 2), at which an output of OM was produced. This quantity had a marginal value of OB to buyers, who were rationed on a nonprice basis. The gain to buyers and sellers combined from a free price of NS was the shaded area, RST, and vertical integration was the simple way of obtaining this gain. This was the rationale of the integration of radio manufacturers into cabinet manufacture, of steel firms into fabricated products, etc.

Stigler was on to something:

  • In 1947, Emerson Radio acquired Plastimold, a maker of plastic radio cabinets. The president of Emerson at the time, Benjamin Abrams, stated “Plastimold is an outstanding producer of molded radio cabinets and gives Emerson an assured source of supply of one of the principal components in the production of radio sets.” [emphasis added] 
  • In the same year, the Congressional Record reported, “Admiral Corp. like other large radio manufacturers has reached out to take over a manufacturer of radio cabinets, the Chicago Cabinet Corp.” 
  • In 1948, the Federal Trade Commission cited wartime price controls and shortages as reasons for vertical mergers in the textiles industry, as well as for distillers’ acquisitions of wineries.

While there may have been some public-policy rationale for price controls, it’s clear the controls resulted in shortages and a deadweight loss in many markets. As such, it’s likely that vertical integration to avoid the price controls improved consumer welfare (if only slightly, as in the figure Stigler describes) and reduced the deadweight loss.
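Stigler’s Figure 2 is not reproduced here, but the welfare arithmetic it describes is standard and easy to reconstruct. A minimal sketch, with hypothetical linear curves of my own choosing:

\[
P_d(Q) = 100 - Q, \qquad P_s(Q) = Q, \qquad \bar{P} = 30 \;(\text{regulated price})
\]

The free-market equilibrium is \(Q^* = 50\) at \(P^* = 50\). At the cap, only \(Q_r = 30\) units are supplied, and buyers (rationed on a nonprice basis) value the marginal unit at \(P_d(30) = 70\). The deadweight loss is the triangle

\[
\tfrac{1}{2}\,(Q^* - Q_r)\,\bigl(P_d(Q_r) - P_s(Q_r)\bigr) = \tfrac{1}{2} \times 20 \times (70 - 30) = 400,
\]

which corresponds to Stigler’s shaded area RST. A vertically integrated buyer-seller pair internalizes the wedge between marginal value and marginal cost and can transact up to \(Q^*\), capturing some of that otherwise forgone surplus.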

Rather than leading to monopolization, Stigler provides examples in which vertical integration was employed to circumvent monopolization by cartel quotas and/or price-fixing: “Almost every raw-material cartel has had trouble with customers who wish to integrate backward, in order to negate the cartel prices.”

In contrast to Stigler’s analysis, Salop and Daniel P. Culley begin from an implied assumption that where price regulation occurs, the controls are good for society. Thus, they argue that avoidance of the price controls is harmful or against the public interest:

Example: The classic example is the pre-divestiture behavior of AT&T, which allegedly used its purchases of equipment at inflated prices from its wholly-owned subsidiary, Western Electric, to artificially increase its costs and so justify higher regulated prices.

This claim is supported by the court in U.S. v. AT&T [emphasis added]:

The Operating Companies have taken these actions, it is said, because the existence of rate of return regulation removed from them the burden of such additional expense, for the extra cost could simply be absorbed into the rate base or expenses, allowing extra profits from the higher prices to flow upstream to Western rather than to its non-Bell competition.

Even so, the pass-through of higher costs seems only a minor concern to the court relative to the “three hats” worn by AT&T and its subsidiaries in the (1) setting of standards, (2) counseling of operating companies in their equipment purchases, and (3) production of equipment for sale to the operating companies [emphasis added]:

The government’s evidence has depicted defendants as sole arbiters of what equipment is suitable for use in the Bell System, a role that carries with it a power of subjective judgment that can be and has been used to advance the sale of Western Electric’s products at the expense of the general trade. First, AT&T, in conjunction with Bell Labs and Western Electric, sets the technical standards under which the telephone network operates and the compatibility specifications which equipment must meet. Second, Western Electric and Bell Labs … serve as counselors to the Operating Companies in their procurement decisions, ostensibly helping them to purchase equipment that meets network standards. Third, Western also produces equipment for sale to the Operating Companies in competition with general trade manufacturers.

The upshot of this “wearing of three hats” is, according to the government’s evidence, a rather obviously anticompetitive situation. By setting technical or compatibility standards and by either not communicating these standards to the general trade or changing them in mid-stream, AT&T has the capacity to remove, and has in fact removed, general trade products from serious consideration by the Operating Companies on “network integrity” grounds. By either refusing to evaluate general trade products for the Operating Companies or producing biased or speculative evaluations, AT&T has been able to influence the Operating Companies, which lack independent means to evaluate general trade products, to buy Western. And the in-house production and sale of Western equipment provides AT&T with a powerful incentive to exercise its “approval” power to discriminate against Western’s competitors.

It’s important to keep in mind that rate-of-return regulation was not thrust upon AT&T; it was a quid pro quo in which state and federal regulators acted to eliminate AT&T/Bell competitors in exchange for price regulation. In a floor speech to Congress in 1921, Rep. William J. Graham declared:

It is believed to be better policy to have one telephone system in a community that serves all the people, even though it may be at an advanced rate, properly regulated by State boards or commissions, than it is to have two competing telephone systems.

For purposes of Salop and Culley’s integration-to-evade-price-regulation example, it’s important to keep in mind that AT&T acquired Western Electric in 1882, or about two decades before telephone pricing regulation was contemplated and eight years before the Sherman Antitrust Act. While AT&T may have used vertical integration to take advantage of rate-of-return price regulation, it’s simply not true that AT&T acquired Western Electric to evade price controls.

Salop and Culley provide a more recent example:

Example: Potential evasion of regulation concerns were raised in the FTC’s analysis in 2008 of the Fresenius/Daiichi Sankyo exclusive sub-license for a Daiichi Sankyo pharmaceutical used in Fresenius’ dialysis clinics, which potentially could allow evasion of Medicare pricing regulations.

As with the AT&T example, this example is not about evasion of price controls. Rather, it raises concerns about taking advantage of Medicare’s pricing formula.

At the time of the deal, Medicare reimbursed dialysis clinics based on a drug manufacturer’s Average Sales Price (“ASP”) plus six percent, where ASP was calculated by averaging the prices paid by all customers, including any discounts or rebates. 

The FTC argued that, by setting an artificially high transfer price for the drug sold to Fresenius, the parties would increase the ASP, thereby increasing the Medicare reimbursement to all clinics providing the same drug (which would not only increase costs to Medicare but also increase income to all clinics providing the drug). Although the FTC claims this would be anticompetitive, the agency does not describe in what ways competition would be harmed.
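A simple numerical illustration of the mechanism, using hypothetical figures of my own rather than anything from the FTC’s analysis: suppose the manufacturer sells 90 units at \$100 to unaffiliated clinics and 10 units at \$100 to the affiliated buyer.

\[
\text{ASP} = \frac{90 \times \$100 + 10 \times \$100}{100} = \$100 \;\Rightarrow\; \text{reimbursement} = 1.06 \times \$100 = \$106
\]

If the 10 units sold to the affiliated buyer are instead priced at \$300, then

\[
\text{ASP} = \frac{90 \times \$100 + 10 \times \$300}{100} = \$120 \;\Rightarrow\; \text{reimbursement} = 1.06 \times \$120 = \$127.20,
\]

so every clinic’s reimbursement rises even though 90 percent of actual transactions were unchanged. This is a transfer engineered through a regulatory formula, not an exercise of market power, which is consistent with the observation that the FTC never explained how competition would be harmed.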

The FTC introduces an interesting wrinkle in noting that a few years after the deal would have been completed, “substantial changes to the Medicare program relating to dialysis services … would eliminate the regulations that give rise to the concerns created by the proposed transaction.” Specifically, payment for dialysis services would shift from fee-for-service to capitation.

This wrinkle highlights a serious problem with a presumption that any purported evasion of price controls is an antitrust violation. Namely, if the controls go away, so does the antitrust violation. 

Conversely – as Salop and Culley seem to argue with their AT&T example – a vertical merger could be retroactively declared anticompetitive if price controls are imposed after the merger is completed (even decades later, and even if the price regulations were never anticipated at the time of the merger).

It’s one thing to argue that avoiding price regulation runs counter to the public interest, but it’s another thing entirely to argue that avoiding price regulation is anticompetitive. Indeed, as Stigler argues, if the price controls stifle competition, then avoidance of the controls may enhance competition. Placing such mergers under heightened scrutiny, such as an anticompetitive presumption, is a solution in search of a problem.

This guest post is by Jonathan M. Barnett, Torrey H. Webb Professor of Law, University of Southern California Gould School of Law.

It has become virtually received wisdom that antitrust law has been subdued by economic analysis into a state of chronic underenforcement. Following this line of thinking, many commentators applauded the Antitrust Division’s unsuccessful campaign to oppose the acquisition of Time-Warner by AT&T, and some (unsuccessfully) urged the Division to take stronger action against the acquisition of most of Fox by Disney. The arguments in both cases followed a similar “big is bad” logic. Consolidating control of a large portfolio of creative properties (Fox plus Disney) or integrating content production and distribution capacities (Time-Warner plus AT&T) would exacerbate market concentration, leading to reduced competition and some combination of higher prices and reduced product choice for consumers.

Less than 18 months after the closing of both transactions, those concerns seem to have been largely unwarranted. 

Far from precipitating any decline in product output or variety, both transactions have been followed by a vigorous burst of competition in the digital streaming market. In place of the Amazon-plus-Netflix bottleneck (with Hulu trailing behind), consumers now have, or in 2020 will have, a choice of at least four new streaming services with original content: Disney+, AT&T’s HBO Max, Apple’s Apple TV+, and Comcast’s NBCUniversal Peacock. Critically, each service relies on a formidable combination of creative, financing, and technological capacities that can only be delivered by a firm of sufficiently large size and scale. As modern antitrust law has long recognized, it turns out that “big” is sometimes not bad.

Where’s the Harm?

At present, it is hard to see any net consumer harm arising from the concurrence of increased size and increased competition. 

On the supply side, this is just the next episode in the ongoing “Golden Age of Television,” in which content producers have enjoyed access to exceptional funding to support high-value productions. It has been reported that Apple TV+’s new “The Morning Show” series will cost $15 million per episode, while similar estimates are reported for hit shows such as HBO’s “Game of Thrones” and Netflix’s “The Crown.” Each of those services is locked in a fierce competition to gain and retain sufficient subscribers to earn a return on those investments, which leads directly to the next happy development.

On the demand side, consumers enjoy a proliferating array of streaming services, ranging from free ad-supported services to subscription ad-free services. Consumers can now easily “cut the cord” and assemble a customized bundle of preferred content from multiple services, each of which is less costly than a traditional cable package and can generally be cancelled at any time. Current market performance does not plausibly conform to the declining output, limited variety, or increasing prices that are the telltale symptoms of a less-than-competitive market.

Real-World v. Theoretical Markets

The market’s favorable trajectory following these two controversial transactions should not be surprising. When scrutinized against the actual characteristics of real-world digital content markets, rather than stylized theoretical models or antiquated pre-digital content markets, the arguments leveled against these transactions never made much sense. There were two fundamental and related errors. 

Error #1: Content is Scarce

Advocates for antitrust intervention assumed that entry barriers into the content market were high, in which case it followed that the owner of an especially valuable creative portfolio could exert pricing power to consumers’ detriment. Yet, in reality, funding for content production is plentiful and even a service that has an especially popular show is unlikely to have sustained pricing power in the face of a continuous flow of high-value productions being released by formidable competitors. The amounts being spent on content in 2019 by leading streaming services are unprecedented, ranging from a reported $15 billion for Netflix to an estimated $6 billion for Amazon and Apple TV+ to an estimated $3.9 billion for AT&T’s HBO Max. It is also important to note that a hit show is often a mobile asset that a streaming or other video distribution service has licensed from independent production companies and other rights holders. Once the existing deal expires, those rights are available for purchase by the highest bidder. For example, in 2019, Netflix purchased the streaming rights to “Seinfeld”, Viacom purchased the cable rights to “Seinfeld”, and HBO Max purchased the streaming rights to “South Park.” Similarly, the producers behind a hit show are always free to take their talents to competitors once any existing agreement terminates.

Error #2: Home Pay-TV is a “Monopoly”

Advocates of antitrust action were looking at the wrong market—or, more precisely, the market as it existed about a decade ago. The theory that AT&T’s acquisition of Time-Warner’s creative portfolio would translate into pricing power in the home pay-TV market might have been plausible when consumers had no reasonable alternative to the local cable provider. But this argument makes little sense today, when consumers are fleeing bulky home pay-TV bundles for cheaper cord-cutting options that deliver more targeted content packages to a mobile device. In 2019, a “home” pay-TV market is fast becoming an anachronism, and hence a home pay-TV “monopoly” largely reduces to a formalism that, with the possible exception of certain live programming, is unlikely to translate into meaningful pricing power.

Wait a Second! What About the HBO Blackout?

A skeptical reader might reasonably object that this mostly rosy account of the post-merger home video market is unpersuasive since it does not address the ongoing blackout of HBO (now an AT&T property) on the Dish satellite TV service. Post-merger commentary that remains skeptical of the AT&T/Time-Warner merger has focused on this dispute, arguing that it “proves” that the government was right since AT&T is purportedly leveraging its new ownership of HBO to disadvantage one of its competitors in the pay-TV market. This interpretation tends to miss the forest for the trees (or more precisely, a tree).  

The AT&T/Dish dispute over HBO is only one of over 200 “carriage” disputes resulting in blackouts that have occurred this year, which continues an upward trend since approximately 2011. Some of those include Dish’s dispute with Univision (settled in March 2019 after a nine-month blackout) and AT&T’s dispute (as pay-TV provider) with Nexstar (settled in August 2019 after a nearly two-month blackout). These disputes reflect the fact that the flood of subscriber defections from traditional pay-TV to mobile streaming has made it difficult for pay-TV providers to pass on the fees sought by content owners. As a result, some pay-TV providers adopt the negotiating tactic of choosing to drop certain content until the terms improve, just as AT&T, in its capacity as a pay-TV provider, dropped CBS for three weeks in July and August 2019 pending renegotiation of licensing terms. It is the outward shift in the boundaries of the economically relevant market (from home to home-plus-mobile video delivery), rather than market power concerns, that best accounts for periodic breakdowns in licensing negotiations.  This might even be viewed positively from an antitrust perspective since it suggests that the “over the top” market is putting pressure on the fees that content owners can extract from providers in the traditional pay-TV market.

Concluding Thoughts

It is common to argue today that antitrust law has become excessively concerned about “false positives” – that is, the possibility of blocking a transaction or enjoining a practice that would have benefited consumers. Pending future developments, this early post-mortem on the regulatory and judicial treatment of these two landmark media transactions suggests that there are sometimes good reasons to stay the hand of the court or regulator. This is especially the case when a generational market shift is in progress and any regulator’s or judge’s foresight is likely to be guesswork. Antitrust law’s “failure” to stop these transactions may turn out to have been a ringing success.

In the Federal Trade Commission’s recent hearings on competition policy in the 21st century, Georgetown professor Steven Salop urged greater scrutiny of vertical mergers. He argued that regulators should be skeptical of the claim that vertical integration tends to produce efficiencies that can enhance consumer welfare. In his presentation to the FTC, Professor Salop provided what he viewed as exceptions to this long-held theory.

Also, vertical merger efficiencies are not inevitable. I mean, vertical integration is common, but so is vertical non-integration. There is an awful lot of companies that are not vertically integrated. And we have lots of examples in which vertical integration has failed. Pepsi’s acquisition of KFC and Pizza Hut; you know, of course Coca-Cola has not merged with McDonald’s . . . .

Aside from the logical fallacy of cherry-picking examples (he also includes Betamax/VHS and the split-up of Alcoa and Arconic, as well as “integration and disintegration” “in cable”), Professor Salop misses the fact that PepsiCo’s 20-year venture into restaurants had very little to do with vertical integration.

Popular folklore says PepsiCo got into fast food because it was looking for a way to lock up sales of its fountain sodas. Soda is considered one of the highest-margin products sold by restaurants. Vertical integration by a soda manufacturer into restaurants would eliminate double marginalization, with the vertically integrated firm reaping most of the gains. The folklore fits nicely with economic theory, but the facts may not fit the theory.
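For readers unfamiliar with the mechanism, here is the textbook double-marginalization logic, sketched with standard linear-demand assumptions of my own choosing (nothing here is specific to PepsiCo):

\[
P = a - Q, \qquad \text{upstream marginal cost } c, \qquad \text{no other downstream costs}
\]

An integrated monopolist maximizes \((P - c)Q\), yielding \(Q^{I} = (a - c)/2\) and \(P^{I} = (a + c)/2\). With separate firms, the downstream retailer facing wholesale price \(w\) sells \(Q = (a - w)/2\); anticipating this, the upstream firm sets \(w = (a + c)/2\), so output falls to \(Q^{D} = (a - c)/4\) and the retail price rises to \(P^{D} = (3a + c)/4 > P^{I}\). Integration thus lowers the retail price, raises output, and raises combined profit, which is exactly why the folklore story is so theoretically attractive.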

PepsiCo acquired Pizza Hut in 1977, Taco Bell in 1978, and Kentucky Fried Chicken in 1986. Prior to PepsiCo’s purchase, KFC had been owned by spirits company Heublein and conglomerate RJR Nabisco. This was the period of conglomerates—Pillsbury owned Burger King and General Foods owned Burger Chef (or maybe they were vertically integrated into bun distribution).

In the early 1990s Pepsi also bought California Pizza Kitchen, Chevys Fresh Mex, and D’Angelo Grilled Sandwiches.

In 1997, PepsiCo exited the restaurant business. It spun off Pizza Hut, Taco Bell, and KFC to Tricon Global Restaurants, which would later be renamed Yum! Brands. CPK and Chevy’s were purchased by private equity investors. D’Angelo was sold to Papa Gino’s Holdings, a restaurant chain. Since then, both Chevy’s and Papa Gino’s have filed for bankruptcy and Chevy’s has had some major shake-ups.

Professor Salop’s story focuses on the spin-off as an example of the failure of vertical mergers. But there is also a story of success. PepsiCo was in the restaurant business for two decades. More importantly, it continued its restaurant acquisitions over time. If PepsiCo’s restaurant strategy had been a failure, it seems odd that the company would have continued making acquisitions into the early 1990s.

It’s easy, and largely correct, to conclude that PepsiCo’s restaurant acquisitions involved some degree of vertical integration, with upstream PepsiCo selling beverages to downstream restaurants. At the time PepsiCo bought Kentucky Fried Chicken, the New York Times reported KFC was Coke’s second-largest fountain account, behind McDonald’s.

But, what if vertical efficiencies were not the primary reason for the acquisitions?

Growth in U.S. carbonated beverage sales began slowing in the 1970s. It was also the “decade of the fast-food business.” From 1971 to 1977, Pizza Hut’s profits grew an average of 40% per year. Colonel Sanders sold his ownership in KFC for $2 million in 1964. Seven years later, the company was sold to Heublein for $280 million; PepsiCo paid $850 million in 1986.

Although KFC was Coke’s second-largest customer at the time, about 20% of KFC’s stores already served Pepsi products. “PepsiCo stressed that the major reason for the acquisition was to expand its restaurant business, which last year accounted for 26 percent of its revenues of $8.1 billion,” according to the New York Times.

Viewed in this light, portfolio diversification goes a much longer way toward explaining PepsiCo’s restaurant purchases than hoped-for vertical efficiencies. In 1997, former PepsiCo chairman Roger Enrico explained to investment analysts that the company entered the restaurant business in the first place, “because it didn’t see future growth in its soft drink and snack” businesses and thought diversification into restaurants would provide expansion opportunities.

Prior to its Pizza Hut and Taco Bell acquisitions, PepsiCo owned companies as diverse as Frito-Lay, North American Van Lines, Wilson Sporting Goods, and Rheingold Brewery. This further supports a diversification theory rather than a vertical integration theory of PepsiCo’s restaurant purchases. 

The mid-1990s and early 2000s were tough times for restaurants. Consumers were demanding healthier foods, and fast food was considered the worst of the worst. This was when Kentucky Fried Chicken rebranded as KFC. Debt hangovers from the leveraged-buyout era added financial pressure. Many restaurant groups filed for bankruptcy, and competition intensified among fast-food companies. PepsiCo’s restaurants could not cover their cost of capital, and what was once a profitable diversification strategy became a financial albatross, so the restaurants were spun off.

Thus, it seems more reasonable to conclude PepsiCo’s exit from restaurants was driven more by market exigencies than by a failure to achieve vertical efficiencies. While the folklore of locking up distribution channels to eliminate double marginalization fits nicely with theory, the facts suggest a more mundane model of a firm scrambling to deliver shareholder wealth through diversification in the face of changing competition.

On Tuesday, August 28, 2018, Truth on the Market and the International Center for Law and Economics presented a blog symposium — Is Amazon’s Appetite Bottomless? The Whole Foods Merger After One Year — that looked at the concerns surrounding the closing of the Amazon-Whole Foods merger, and how those concerns had played out over the last year.

The difficulty presented by the merger was, in some ways, its lack of difficulty: Even critics, while hearkening back to the Brandeisian fear of large firms, had little by way of legal objection to offer against the merger. Despite the acknowledged lack of an obvious legal basis for challenging the merger, most critics nevertheless expressed a somewhat inchoate and generalized concern that the merger would hasten the death of brick-and-mortar retail and imperil competition in the grocery industry. Critics further pointed to particular, related issues largely outside the scope of modern antitrust law — issues relating to the presumed effects of the merger on “localism” (i.e., small, local competitors), retail workers, startups with ancillary businesses (e.g., delivery services), data collection and use, and the like.

Steven Horwitz opened the symposium with an insightful and highly recommended post detailing the development of the grocery industry from its inception. Tracing through that history, Horwitz was optimistic that

Viewed from the long history of the evolution of the grocery store, the Amazon-Whole Foods merger made sense as the start of the next stage of that historical process. The combination of increased wealth that is driving the demand for upscale grocery stores, and the corresponding increase in the value of people’s time that is driving the demand for one-stop shopping and various forms of pick-up and delivery, makes clear the potential benefits of this merger.

Others in the symposium similarly acknowledged the potential transformation of the industry brought on by the merger, but challenged the critics’ despairing characterization of that transformation (Auer, Manne & Stout, Rinehart, Fruits, Atkinson).

At the most basic level, it was noted that, in the immediate aftermath of the merger, Whole Foods dropped prices across a number of categories as it sought to shore up its competitive position (Auer). Further, under relevant antitrust metrics — e.g., market share, ease of competitive entry, potential for exclusionary conduct — the merger was completely unobjectionable under existing doctrine (Fruits).

To critics’ claims that Amazon in general, and the merger in particular, was decimating the retail industry, several posts discussed the updated evidence suggesting that retail is not actually on the decline (although some individual retailers are certainly struggling to compete) (Auer, Manne & Stout). Moreover, and following from Horwitz’s account of the evolution of the grocery industry, it appears that the actual trajectory of the industry is not an either/or between online and offline, but instead a movement toward integrating both models into a single retail experience (Manne & Stout). Further, the post-merger flurry of business model innovation, venture capital investment, and new startup activity demonstrates that, confronted with entrepreneurial competitors like Walmart, Kroger, Aldi, and Instacart, Amazon’s impressive position online has not translated into an automatic domination of the traditional grocery industry (Manne & Stout).  

Symposium participants more circumspect about the merger suggested that Amazon’s behavior may be laying the groundwork for an eventual monopsony case (Sagers). Further, it was suggested, a future Section 2 case, difficult under prevailing antitrust orthodoxy, could be brought with a creative approach to market definition in light of Amazon’s conduct with its marketplace participants, its aggressive ebook contracting practices, and its development and roll-out of its own private label brands (Sagers).

Skeptics also picked up on early critics’ concerns about the aggregation of large amounts of consumer data, and worried that the merger could be part of a pattern representing a real, long-term threat to consumers that antitrust does not take seriously enough (Bona & Levitsky). Sounding a further alarm, Hal Singer noted that Amazon’s interest in pushing into new markets with data generated by, for example, devices like its Echo line could bolster its ability to exclude competitors.

More fundamentally, these contributors echoed the merger critics’ concerns that antitrust does not adequately take account of other values such as “promoting local, community-based, organic food production or ‘small firms’ in general.” (Bona & Levitsky; Singer).

Rob Atkinson, however, pointed out that these values are idiosyncratic and not likely shared by the vast majority of the population — and that antitrust law shouldn’t have anything to do with them:

In short, most of the opposition to Amazon/Whole Foods merger had little or nothing to do with economics and consumer welfare. It had everything to do with a competing vision for the kind of society we want to live in. The neo-Brandesian opponents, who Lind and I term “progressive localists”, seek an alternative economy predominantly made up of small firms, supported by big government and protected from global competition.

And Dirk Auer noted that early critics’ prophecies of foreclosure of competition through “data leveraging” and below-cost pricing hadn’t remotely come to pass, at least thus far.

Meanwhile, other contributors noted the paucity of evidence supporting many of these assertions, and pointed out the manifest value the merger seemed to be creating by pressuring competitors to adapt and better respond to consumers’ preferences (Horwitz, Rinehart, Auer, Fruits, Manne & Stout) — in the process shoring up, rather than killing, even smaller retailers that are willing and able to evolve with changing technology and shifting consumer preferences. “For all the talk of retail dying, the stores that are actually dying are the ones that fail to cater to their customers, not the ones that happen to be offline” (Manne & Stout).

At the same time, not all merger skeptics were moved by the Neo-Brandeisian assertions. Chris Sagers, for example, finds much of the populist antitrust objection more public relations than substance. He suggested perhaps not taking these ideas and their promoters so seriously, and instead focusing on antitrust advocates with “real ideas” (like Sagers himself, of course).

Coming from a different angle, Will Rinehart also suggested not taking the criticisms too seriously, pointing to the evolving and complicated effects of the merger as Exhibit A for the need for regulatory humility:

Finally, this deal reiterates the need for regulatory humility. Almost immediately after the Amazon-Whole Foods merger was closed, prices at the store dropped and competitors struck a flurry of deals. Investments continue and many in the grocery retail space are bracing for a wave of enhancement to take hold. Even some of the most fierce critics of [the] deal will have to admit there is a lot of uncertainty. It is unclear what business model will make the most sense in the long run, how these technologies will ultimately become embedded into production processes, and how consumers will benefit. Combined, these features underscore the difficulty, but the necessity, in implementing dynamic insights into antitrust institutions.

Offering generous praise for this symposium (thanks, Will!) and echoing the points made by other participants regarding the dynamic and unknowable course of competition (Auer, Horwitz, Manne & Stout, Fruits), Rinehart concludes:

Retrospectives like this symposium offer a chance to understand what the discussion missed at the time and what is needed to better understand innovation and competition in markets. While it might be too soon to close the book on this case, the impact can already be felt in the positions others are taking in response. In the end, the deal probably won’t be remembered for extending Amazon’s dominance into another market because that is a phantom concern. Rather, it will probably be best remembered as the spark that drove traditional retail outlets to modernize their logistics and fulfillment efforts.  

For a complete rundown of the arguments both for and against, the full archive of symposium posts from our outstanding and diverse group of scholars, practitioners, and other experts is available at this link. Excerpts from several of the posts appear below.

We’d like to thank all of the participants for their excellent contributions!

What actually happened in the year following the merger is nearly the opposite: Competition among grocery stores has been more fierce than ever. “Offline” retailers are expanding — and innovating — to meet Amazon’s challenge, and many of them are booming. Disruption is never neat and tidy, but, in addition to saving Whole Foods from potential oblivion, the merger seems to have lit a fire under the rest of the industry.
This result should not be surprising to anyone who understands the nature of the competitive process. But it does highlight an important lesson: competition often comes from unexpected quarters and evolves in unpredictable ways, emerging precisely out of the kinds of adversity opponents of the merger bemoaned.


So why this deal, in this symposium, and why now? The best substantive reason I could think of is admittedly one that I personally find important. As I said, I think we should take it much more seriously as a general matter, especially in highly dynamic contexts like Silicon Valley. There has been a history of arguably pre-emptive, market-occupying vertical and conglomerate acquisitions, by big firms of smaller ones that are technologically or otherwise disruptive. The idea is that the big firms sit back and wait as some new market develops in some adjacent sector. When that new market ripens to the point of real promise, the big firm buys some significant incumbent player. The aim is not just to facilitate its own benevolent, wholesome entry, but to set up hopefully prohibitive challenges to other de novo entrants. Love it or leave it, that theory plausibly characterizes lots and lots of acquisitions in recent decades that secured easy antitrust approval, precisely because they weren’t obviously, presently horizontal. Many people think that is true of some of Amazon’s many acquisitions, like its notoriously aggressive, near-hostile takeover of Diapers.com.


Amazon offers Prime discounts to Whole Foods customers and offers free delivery for Prime members. Those are certainly consumer benefits. But with those benefits comes a cost, which may or may not be significant. By bundling its products with collective discounts, Amazon makes it more attractive for shoppers to shift their buying practices from local stores to the internet giant. Will this eventually mean that local stores will become more inefficient, based on lower volume, and will eventually close? Do most Americans care about the potential loss of local supermarkets and specialty grocers? No one, including antitrust enforcers, seems to have asked them.
