
The Senate Judiciary Committee is set to debate S. 2992, the American Innovation and Choice Online Act (or AICOA), during a markup session Thursday. If passed into law, the bill would force online platforms to treat rivals’ services as they would their own, while ensuring their platforms interoperate seamlessly.

The bill marks the culmination of misguided efforts to bring Big Tech to heel, regardless of the costs imposed on consumers in the process. ICLE scholars have written about these developments in detail since the bill was introduced in October.

Below are 10 significant misconceptions that underpin the legislation.

1. There Is No Evidence that Self-Preferencing Is Generally Harmful

Self-preferencing is a normal part of how platforms operate, both to improve the value of their core products and to earn returns so that they have reason to continue investing in their development.

Platforms’ incentives are to maximize the value of their entire product ecosystem, which includes both the core platform and the services attached to it. Platforms that preference their own products frequently end up increasing the total market’s value by growing the share of users of a particular product. Those that preference inferior products end up hurting their attractiveness to users of their “core” product, exposing themselves to competition from rivals.

As Geoff Manne concludes, the notion that it is harmful (notably to innovation) when platforms enter into competition with edge providers is entirely speculative. Indeed, a range of studies show that the opposite is likely true. Platform competition is more complicated than simple theories of vertical discrimination would have it, and there is certainly no basis for a presumption of harm.

Consider a few examples from the empirical literature:

  1. Li and Agarwal (2017) find that Facebook’s integration of Instagram led to a significant increase in user demand both for Instagram itself and for the entire category of photography apps. Instagram’s integration with Facebook increased consumer awareness of photography apps, which benefited independent developers, as well as Facebook.
  2. Foerderer, et al. (2018) find that Google’s 2015 entry into the market for photography apps on Android created additional user attention and demand for such apps generally.
  3. Cennamo, et al. (2018) find that video games offered by console firms often become blockbusters and expand the consoles’ installed base. As a result, these games increase the potential for all independent game developers to profit from their games, even in the face of competition from first-party games.
  4. Finally, while Zhu and Liu (2018) is often held up as demonstrating harm from Amazon’s competition with third-party sellers on its platform, its findings are actually far from clear-cut. As co-author Feng Zhu noted in the Journal of Economics & Management Strategy: “[I]f Amazon’s entries attract more consumers, the expanded customer base could incentivize more third-party sellers to join the platform. As a result, the long-term effects for consumers of Amazon’s entry are not clear.”

2. Interoperability Is Not Costless

There are many things that could be interoperable but aren’t. Not everything is interoperable because interoperability carries costs as well as benefits. It may be worth letting different earbuds have different designs because, while we sacrifice easy interoperability, we gain the ability for better designs to be brought to market and for consumers to choose among different kinds.

As Sam Bowman has observed, there are often costs that make interoperability not worth the tradeoff:

  1. It might be too costly to implement and/or maintain.
  2. It might prescribe a certain product design and prevent experimentation and innovation.
  3. It might add too much complexity and/or confusion for users, who may prefer not to have certain choices.
  4. It might increase the risk of something not working, or of security breaches.
  5. It might prevent certain pricing models that increase output.
  6. It might compromise some element of the product or service that benefits specifically from not being interoperable.

In a market that is functioning reasonably well, we should be able to assume that competition and consumer choice will discover the desirable degree of interoperability among different products. If there are benefits to making your product interoperable that outweigh the costs of doing so, that should give you an advantage over competitors and allow you to compete them away. If the costs outweigh the benefits, the opposite will happen: consumers will choose products that are not interoperable.

In short, we cannot infer from the mere absence of interoperability that something is wrong, since we frequently observe that the costs of interoperability outweigh the benefits.

3. Consumers Often Prefer Closed Ecosystems

Digital markets could have taken a vast number of shapes. So why have they gravitated toward the very characteristics that authorities condemn? For instance, if market tipping and consumer lock-in are so problematic, why is it that new corners of the digital economy continue to emerge via closed platforms, as opposed to collaborative ones?

Indeed, if recent commentary is to be believed, it is the latter that should succeed, because they purportedly produce greater gains from trade. And if consumers and platforms cannot realize these gains by themselves, then we should see intermediaries step into that breach. But this does not seem to be happening in the digital economy.

The naïve answer is to say that the absence of “open” systems is precisely the problem. What’s harder is to try to actually understand why. As I have written, there are many reasons that consumers might prefer “closed” systems, even when they have to pay a premium for them.

Take the example of app stores. Maintaining some control over the apps that can access the store notably enables platforms to easily weed out bad players. Similarly, controlling the hardware resources that each app can use may greatly improve device performance. In other words, centralized platforms can eliminate negative externalities that “bad” apps impose on rival apps and on consumers. This is especially true when consumers struggle to attribute dips in performance to an individual app, rather than the overall platform.

It is also conceivable that consumers prefer to make many of their decisions at the inter-platform level, rather than within each platform. In simple terms, users arguably make their most important decision when they choose between an Apple or Android smartphone (or a Mac and a PC, etc.). In doing so, they can select their preferred app suite with one simple decision.

They might thus purchase an iPhone because they like the secure App Store, or an Android smartphone because they like the Chrome Browser and Google Search. Forcing too many “within-platform” choices upon users may undermine a product’s attractiveness. Indeed, it is difficult to create a high-quality reputation if each user’s experience is fundamentally different. In short, contrary to what antitrust authorities seem to believe, closed platforms might be giving most users exactly what they desire.

Too often, it is simply assumed that consumers benefit from more openness, and that shared/open platforms are the natural order of things. What some refer to as “market failures” may in fact be features that explain the rapid emergence of the digital economy. Ronald Coase said it best when he quipped that economists always find a monopoly explanation for things that they simply fail to understand.

4. Data Portability Can Undermine Security and Privacy

As explained above, platforms that are more tightly controlled can be regulated by the platform owner to avoid some of the risks present in more open platforms. Apple’s App Store, for example, is a relatively closed and curated platform, which gives users assurance that apps will meet a certain standard of security and trustworthiness.

Along similar lines, there are privacy issues that arise from data portability. Even a relatively simple requirement to make photos available for download can implicate third-party interests. Making a user’s photos more broadly available may tread upon the privacy interests of friends whose faces appear in those photos. Importing those photos to a new service potentially subjects those individuals to increased and un-bargained-for security risks.

As Sam Bowman and Geoff Manne observe, this is exactly what happened with Facebook and its Social Graph API v1.0, ultimately culminating in the Cambridge Analytica scandal. Because v1.0 of Facebook’s Social Graph API permitted developers to access information about a user’s friends without consent, it enabled third-party access to data about exponentially more users. It appears that some 270,000 users granted data access to Cambridge Analytica, from which the company was able to obtain information on 50 million Facebook users.

In short, there is often no simple way to implement interoperability and data portability. Any such program—whether legally mandated or voluntarily adopted—will need to grapple with these and other tradeoffs.

5. Network Effects Are Rarely Insurmountable

Several scholars in recent years have called for more muscular antitrust intervention in networked industries on grounds that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in and raise entry barriers for potential rivals (see here, here, and here). But there are countless counterexamples where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.

Zoom is one of the most salient instances. As I wrote in April 2019 (a year before the COVID-19 pandemic):

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.

Geoff Manne and Alec Stapp have put forward a multitude of other examples, including: the demise of Yahoo; the disruption of early instant-messaging applications and websites; and MySpace’s rapid decline. In all of these cases, outcomes did not match the predictions of theoretical models.

More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and powerful algorithm are the most likely explanations for its success.

While these developments certainly do not disprove network-effects theory, they eviscerate the belief, common in antitrust circles, that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. The question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet, this question is systematically omitted from most policy discussions.

6. Profits Facilitate New and Exciting Platforms

As I wrote in August 2020, the relatively closed model employed by several successful platforms (notably Apple’s App Store, Google’s Play Store, and the Amazon Retail Platform) allows previously unknown developers/retailers to expand rapidly because (i) users do not have to fear that the apps they download contain malware and (ii) these platforms greatly reduce payment frictions, most notably security-related ones.

While these are, indeed, tremendous benefits, another important upside seems to have gone relatively unnoticed. The “closed” business model also gives firms significant incentives to develop new distribution mediums (smart TVs spring to mind) and to improve existing ones. In turn, this greatly expands the audience that software developers can reach. In short, developers get a smaller share of a much larger pie.

The economics of two-sided markets are enlightening here. For example, Apple and Google’s app stores are what Armstrong and Wright (here and here) refer to as “competitive bottlenecks.” That is, they compete aggressively (among themselves, and with other gaming platforms) to attract exclusive users. They can then charge developers a premium to access those users.

This dynamic gives firms significant incentive to continue to attract and retain new users. For instance, if Steve Jobs is to be believed, giving consumers better access to media such as eBooks, video, and games was one of the driving forces behind the launch of the iPad.

This model of innovation would be seriously undermined if developers and consumers could easily bypass platforms, as would likely be the case under the American Innovation and Choice Online Act.

7. Large Market Share Does Not Mean Anticompetitive Outcomes

Scholars routinely cite the putatively high concentration of digital markets to argue that Big Tech firms do not face strong competition. But this is a non sequitur. Indeed, as scholars like Joseph Bertrand and William Baumol have shown, what matters is not whether markets are concentrated, but whether they are contestable. If a superior rival could rapidly gain user traction, that possibility alone will discipline incumbents’ behavior.

Markets where incumbents do not face significant entry from competitors are just as consistent with vigorous competition as they are with barriers to entry. Rivals could decline to enter either because incumbents have aggressively improved their product offerings or because they are shielded by barriers to entry (as critics suppose). The former is consistent with competition, the latter with monopoly slack.

Similarly, it would be wrong to presume, as many do, that concentration in online markets is necessarily driven by network effects and other scale-related economies. As ICLE scholars have argued elsewhere (here, here, and here), these forces are not nearly as decisive as critics assume (and it is debatable whether they constitute barriers to entry).

Finally, and perhaps most importantly, many factors could explain the relatively concentrated market structures that we see in digital industries. The absence of switching costs and the lack of capacity constraints are two such examples. These explanations, overlooked by many observers, suggest digital markets are more contestable than is commonly perceived.

Unfortunately, critics’ failure to meaningfully grapple with these issues serves to shape the “conventional wisdom” in tech-policy debates.

8. Vertical Integration Generally Benefits Consumers

Vertical behavior of digital firms—whether through mergers or through contract and unilateral action—frequently arouses the ire of critics of the current antitrust regime. Many such critics point to a few recent studies that cast doubt on the ubiquity of benefits from vertical integration. But the findings of these few studies are regularly overstated and, even if taken at face value, represent just a minuscule fraction of the collected evidence, which overwhelmingly supports vertical integration.

There is strong and longstanding empirical evidence that vertical integration is competitively benign. This includes widely acclaimed work by economists Francine Lafontaine (former director of the Federal Trade Commission’s Bureau of Economics under President Barack Obama) and Margaret Slade, whose meta-analysis led them to conclude:

[U]nder most circumstances, profit-maximizing vertical integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view. Although there are isolated studies that contradict this claim, the vast majority support it. Moreover, even in industries that are highly concentrated so that horizontal considerations assume substantial importance, the net effect of vertical integration appears to be positive in many instances. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked.

In short, there is a substantial body of both empirical and theoretical research showing that vertical integration (and the potential vertical discrimination and exclusion to which it might give rise) is generally beneficial to consumers. While it is possible that vertical mergers or discrimination could sometimes cause harm, the onus is on the critics to demonstrate empirically where this occurs. No legitimate interpretation of the available literature would offer a basis for imposing a presumption against such behavior.

9. There Is No Such Thing as Data Network Effects

Although data does not have the self-reinforcing characteristics of network effects, there is a sense that acquiring a certain amount of data and expertise is necessary to compete in data-heavy industries. It is (or should be) equally apparent, however, that this “learning by doing” advantage rapidly reaches a point of diminishing returns.

This is supported by significant empirical evidence. As shown in the survey of the empirical literature that Geoff Manne and I performed (published in the George Mason Law Review), data generally entails diminishing marginal returns:

Critics who argue that firms such as Amazon, Google, and Facebook are successful because of their superior access to data might, in fact, have the causality in reverse. Arguably, it is because these firms have come up with successful industry-defining paradigms that they have amassed so much data, and not the other way around. Indeed, Facebook managed to build a highly successful platform despite a large data disadvantage when compared to rivals like MySpace.

Companies need to innovate to attract consumer data or else consumers will switch to competitors, including both new entrants and established incumbents. As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results. The continued explosion of new products, services, and apps is evidence that data is not a bottleneck to competition, but a spur to drive it.

10. Antitrust Enforcement Has Not Been Lax

The popular narrative has it that lax antitrust enforcement has led to substantially increased concentration, strangling the economy, harming workers, and expanding dominant firms’ profit margins at the expense of consumers. Much of the contemporary dissatisfaction with antitrust arises from a suspicion that overly lax enforcement of existing laws has led to record levels of concentration and a concomitant decline in competition. But both beliefs—lax enforcement and increased anticompetitive concentration—wither under more than cursory scrutiny.

As Geoff Manne observed in his April 2020 testimony to the House Judiciary Committee:

The number of Sherman Act cases brought by the federal antitrust agencies, meanwhile, has been relatively stable in recent years, but several recent blockbuster cases have been brought by the agencies and private litigants, and there has been no shortage of federal and state investigations. The vast majority of Section 2 cases dismissed on the basis of the plaintiff’s failure to show anticompetitive effect were brought by private plaintiffs pursuing treble damages; given the incentives to bring weak cases, it cannot be inferred from such outcomes that antitrust law is ineffective. But, in any case, it is highly misleading to count the number of antitrust cases and, using that number alone, to make conclusions about how effective antitrust law is. Firms act in the shadow of the law, and deploy significant legal resources to make sure they avoid activity that would lead to enforcement actions. Thus, any given number of cases brought could be just as consistent with a well-functioning enforcement regime as with an ill-functioning one.

The upshot is that naïvely counting antitrust cases (or the purported lack thereof), with little regard for the behavior that is deterred or the merits of the cases that are dismissed, does not tell us whether antitrust enforcement levels are optimal.

Intermediaries may not be the consumer welfare hero we want, but more often than not, they are the one we need.

In policy discussions about the digital economy, a background assumption that frequently underlies the discourse is that intermediaries and centralization always and only serve as a cost to consumers, and to society more generally. Thus, one commonly sees arguments that consumers would be better off if they could freely combine products from different trading partners. According to this logic, bundled goods, walled gardens, and other intermediaries are always to be regarded with suspicion, while interoperability, open source, and decentralization are laudable features of any market.

However, as with all economic goods, intermediation offers both costs and benefits. The challenge for market players is to assess these tradeoffs and, ultimately, to produce the optimal level of intermediation.

As one example, some observers assume that purchasing food directly from a producer benefits consumers because intermediaries no longer take a cut of the final purchase price. But this overlooks the tremendous efficiencies supermarkets can achieve in terms of cost savings, reduced carbon emissions (because consumers make fewer store trips), and other benefits that often outweigh the costs of intermediation.

The same anti-intermediary fallacy is plain to see in countless other markets. For instance, critics readily assume that insurance, mortgage, and travel brokers are just costly middlemen.

This unduly negative perception is perhaps even more salient in the digital world. Policymakers are quick to conclude that consumers are always better off when provided with “more choice.” Draft regulations of digital platforms have been introduced on both sides of the Atlantic that repeat this faulty argument ad nauseam, as do some antitrust decisions.

Even the venerable Tyler Cowen recently appeared to sing the praises of decentralization, when discussing the future of Web 3.0:

One person may think “I like the DeFi options at Uniswap,” while another may say, “I am going to use the prediction markets over at Hedgehog.” In this scenario there is relatively little intermediation and heavy competition for consumer attention. Thus most of the gains from competition accrue to the users. …

… I don’t know if people are up to all this work (or is it fun?). But in my view this is the best-case scenario — and the most technologically ambitious. Interestingly, crypto’s radical ability to disintermediate, if extended to its logical conclusion, could bring about a radical equalization of power that would lower the prices and values of the currently well-established crypto assets, companies and platforms.

While disintermediation certainly has its benefits, critics often gloss over its costs. For example, scams are practically nonexistent on Apple’s “centralized” App Store but are far more prevalent with Web3 services. Apple’s “power” to weed out nefarious actors certainly contributes to this difference. Similarly, there is a reason that “middlemen” like supermarkets and travel agents exist in the first place. They notably perform several complex tasks (e.g., searching for products, negotiating prices, and controlling quality) that leave consumers with a manageable selection of goods.

Returning to the crypto example, besides being a renowned scholar, Tyler Cowen is also an extremely savvy investor. What he sees as fun investment choices may be nightmarish (and potentially dangerous) decisions for less sophisticated consumers. The upshot is that intermediaries are far more valuable than they are usually given credit for.

Bringing People Together

The reason intermediaries (including online platforms) exist is to reduce transaction costs that suppliers and customers would face if they tried to do business directly. As Daniel F. Spulber argues convincingly:

Markets have two main modes of organization: decentralized and centralized. In a decentralized market, buyers and sellers match with each other and determine transaction prices. In a centralized market, firms act as intermediaries between buyers and sellers.

[W]hen there are many buyers and sellers, there can be substantial transaction costs associated with communication, search, bargaining, and contracting. Such transaction costs can make it more difficult to achieve cross-market coordination through direct communication. Intermediary firms have various means of reducing transaction costs of decentralized coordination when there are many buyers and sellers.

This echoes the findings of Nobel laureate Ronald Coase, who observed that firms emerge when they offer a cheaper alternative to multiple bilateral transactions:

The main reason why it is profitable to establish a firm would seem to be that there is a cost of using the price mechanism. The most obvious cost of “organising” production through the price mechanism is that of discovering what the relevant prices are. […] The costs of negotiating and concluding a separate contract for each exchange transaction which takes place on a market must also be taken into account.

Economists generally agree that online platforms also serve this cost-reduction function. For instance, David Evans and Richard Schmalensee observe that:

Multi-sided platforms create value by bringing two or more different types of economic agents together and facilitating interactions between them that make all agents better off.

It’s easy to see the implications for today’s competition-policy debates, and for the online intermediaries that many critics would like to see decentralized. Particularly salient examples include app store platforms (such as the Apple App Store and the Google Play Store); online retail platforms (such as Amazon Marketplace); and online travel agents (like Booking.com and Expedia). Competition policymakers have embarked on countless ventures to “open up” these platforms to competition, essentially moving them further toward disintermediation. In most of these cases, however, policymakers appear to be fighting these businesses’ very raison d’être.

For example, the purpose of an app store is to curate the software that users can install and to offer payment solutions; in exchange, the store receives a cut of the proceeds. If performing these tasks created no value, then, to a first approximation, these services would not exist. Users would simply download apps via their web browsers, and the most successful smartphones would be those that allowed users to directly install apps (“sideloading,” to use the more technical term). Forcing these platforms to “open up” and become neutral is antithetical to the value proposition they offer.

Calls for retail and travel platforms to stop offering house brands or displaying certain products more favorably are equally paradoxical. Consumers turn to these platforms because they want a selection of goods. If that were not the case, users could simply bypass the platforms and purchase directly from independent retailers or hotels. Critics sometimes retort that some commercial arrangements, such as “most favored nation” clauses, discourage consumers from doing exactly this. But that claim only reinforces the point that online platforms must create significant value, or they would not be able to obtain such arrangements in the first place.

All of this explains why characterizing these firms as imposing a “tax” on their respective ecosystems is so deeply misleading. The implication is that platforms are merely passive rent extractors that create no value. Yet, barring the existence of market failures, both their existence and their success are proof to the contrary. To argue otherwise places no faith in the ability of firms and consumers to act in their own self-interest.

A Little Evolution

This last point is even more salient when seen from an evolutionary standpoint. Today’s most successful intermediaries—be they online platforms or more traditional brick-and-mortar firms like supermarkets—mostly had to outcompete the alternative represented by disintermediated bilateral contracts.

Critics of intermediaries rarely contemplate why the app-store model outpaced the more heavily disintermediated software distribution of the desktop era. Or why hotel-booking sites exist, despite consumers’ ability to use search engines, hotel websites, and other product-search methods that offer unadulterated product selections. Or why mortgage brokers are so common when borrowers can call local banks directly. The list is endless.

Indeed, as I have argued previously:

Digital markets could have taken a vast number of shapes, so why have they systematically gravitated towards those very characteristics that authorities condemn? For instance, if market tipping and consumer lock-in are so problematic, why is it that new corners of the digital economy continue to emerge via closed platforms, as opposed to collaborative ones? Indeed, if recent commentary is to be believed, it is the latter that should succeed because they purportedly produce greater gains from trade. And if consumers and platforms cannot realize these gains by themselves, then we should see [other] intermediaries step into the breach – i.e. arbitrage. This does not seem to be happening in the digital economy. The naïve answer is to say that this is precisely the problem, the harder one is to actually understand why.

Fiat Versus Emergent Disintermediation

All of this is not to say that intermediaries are perfect, or that centralization always beats decentralization. Instead, the critical point is about the competitive process. There are vast differences between centralization that stems from government fiat and that which emerges organically.

(Dis)intermediation is an economic good. Markets thus play a critical role in deciding how much or how little of it is provided. Intermediaries must charge fees that cover their costs, while bilateral contracts entail transaction costs. In typically Hayekian fashion, suppliers and buyers will weigh the costs and benefits of these options.

Intermediaries are most likely to emerge in markets prone to excessive transaction costs, and competitive processes ensure that only valuable intermediaries survive. Accordingly, there is no guarantee that government-mandated disintermediation would generate net benefits in any given case.

Of course, the market does not always work perfectly. Sometimes, market failures give rise to excessive (or insufficient) centralization. And policymakers should certainly be attentive to these potential problems and address them on a case-by-case basis. But there is little reason to believe that today’s most successful intermediaries are the result of market failures, and it is thus critical that policymakers do not undermine the valuable role they perform.

For example, few believe that supermarkets exist merely because government failures (such as excessive regulation) or market failures (such as monopolization) prevent the emergence of smaller rivals. Likewise, the app-store model is widely perceived as an improvement over previous software platforms; few consumers appear favorably disposed toward its replacement with sideloading of apps (for example, few Android users choose to sideload apps rather than purchase them via the Google Play Store). In fact, markets appear to be moving in the opposite direction: even traditional software platforms such as Windows OS increasingly rely on closed stores to distribute software on their platforms.

More broadly, this same reasoning can (and has) been applied to other social institutions, such as the modern family. For example, the late Steven Horwitz observed that family structures have evolved in order to adapt to changing economic circumstances. Crucially, this process is driven by the same cost-benefit tradeoff that we see in markets. In both cases, agents effectively decide which functions are better performed within a given social structure, and which ones are more efficiently completed outside of it.

Returning to Tyler Cowen’s point about the future of Web3, the case can be made that whatever level of centralization ultimately emerges is most likely the best case scenario. Sure, there may be some market failures and suboptimal outcomes along the way, but they ultimately pale in comparison to the most pervasive force: namely, economic agents’ ability to act in what they perceive to be their best interest. To put it differently, if Web3 spontaneously becomes as centralized as Web 2.0 has been, that would be testament to the tremendous role that intermediaries play throughout the economy.

Antitrust policymakers around the world have taken a page out of the Silicon Valley playbook and decided to “move fast and break things.” While the slogan is certainly catchy, applying it to the policymaking world is unfortunate and, ultimately, threatens to harm consumers.

Several antitrust authorities in recent months have announced their intention to block (or, at least, challenge) a spate of mergers that, under normal circumstances, would warrant only limited scrutiny and face little prospect of outright prohibition. This is notably the case for several vertical mergers, as well as mergers between firms that are only potential competitors (sometimes framed as “killer acquisitions”). These include Facebook’s acquisition of Giphy (U.K.); Nvidia’s ARM Ltd. deal (U.S., EU, and U.K.); and Illumina’s purchase of GRAIL (EU). It is also the case for horizontal mergers in non-concentrated markets, such as WarnerMedia’s proposed merger with Discovery, which has faced significant political backlash.

Some of these deals fail even to implicate “traditional” merger-notification thresholds. Facebook’s purchase of Giphy was only notifiable because of the U.K. Competition and Markets Authority’s broad interpretation of its “share of supply test” (which eschews traditional revenue thresholds). Likewise, the European Commission relied on a highly controversial interpretation of the so-called “Article 22 referral” procedure in order to review Illumina’s GRAIL purchase.

Some have praised these interventions, claiming antitrust authorities should take their chances and prosecute high-profile deals. It certainly appears that authorities are pressing their luck because they face few penalties for wrongful prosecutions. Overly aggressive merger enforcement might even reinforce their bargaining position in subsequent cases. In other words, enforcers risk imposing social costs on firms and consumers because their incentives to prosecute mergers are not aligned with those of society as a whole.

None of this should come as a surprise to anyone who has been following this space. As my ICLE colleagues and I have been arguing for quite a while, weakening the guardrails that surround merger-review proceedings opens the door to arbitrary interventions that are difficult (though certainly not impossible) to remediate before courts.

The negotiations that surround merger-review proceedings involve firms and authorities bargaining in the shadow of potential litigation. Whether and which concessions are made will depend chiefly on what the parties believe will be the outcome of litigation. If firms think courts will safeguard their merger, they will offer authorities few potential remedies. Conversely, if authorities believe courts will support their decision to block a merger, they are unlikely to accept concessions that stop short of the parties withdrawing their deal.

This simplified model suggests that neither enforcers nor merging parties are in a position to “exploit” the merger-review process, so long as courts review decisions effectively. Under this model, overly aggressive enforcement would merely lead to defeat in court (and, expecting this, merging parties would offer few concessions to authorities).

Put differently, court proceedings are both a dispute-resolution mechanism and a source of rulemaking. The result is that only marginal cases should lead to actual disputes. Most harmful mergers will be deterred, and clearly beneficial ones will be cleared rapidly. So long as courts apply the consumer welfare standard consistently, firms’ merger decisions—along with any rulings or remedies—all should primarily serve consumers’ interests.

At least, that is the theory. But there are factors that can serve to undermine this efficient outcome. In the field of merger control, this is notably the case with court delays that prevent parties from effectively challenging merger decisions.

While delays between when a legal claim is filed and a judgment is rendered aren’t always detrimental (as Richard Posner observes, speed can be costly), it is essential that these delays be accounted for in any subsequent damages and penalties. Parties that prevail in court might otherwise only obtain reparations that are below the market rate, reducing the incentive to seek judicial review in the first place.

The problem is particularly acute when it comes to merger reviews. Merger challenges might lead the parties to abandon a deal because they estimate the transaction will no longer be commercially viable by the time courts have decided the matter. This is a problem, insofar as neither U.S. nor EU antitrust law generally requires authorities to compensate parties for wrongful merger decisions. For example, courts in the EU have declined to fully compensate aggrieved companies (e.g., the CFI in Schneider) and have set an exceedingly high bar for such claims to succeed at all.

In short, parties have little incentive to challenge merger decisions if the only positive outcome is for their deals to be posthumously sanctified. This weaker incentive to litigate may be insufficient to generate the cases that would create potentially helpful precedent for future merging firms. Ultimately, the balance of bargaining power is tilted in favor of competition authorities.

Some Data on Mergers

While not necessarily dispositive, there is qualitative evidence to suggest that parties often drop their deals when authorities either block them (as in the EU) or challenge them in court (in the United States).

U.S. merging parties nearly always either reach a settlement or scrap their deal when their merger is challenged. There were 43 transactions challenged by either the U.S. Justice Department (15) or the Federal Trade Commission (28) in 2020. Of these, 15 were abandoned and almost all the remaining cases led to settlements.

The EU picture is similar. The European Commission blocks, on average, about one merger every year (30 over the last 31 years). Most in-depth investigations are settled in exchange for remedies offered by the merging firms (141 out of 239). While the EU does not publish detailed statistics concerning abandoned mergers, it is rare for firms to appeal merger-prohibition decisions. The European Court of Justice’s database lists only six such appeals over a similar timespan. The vast majority of blocked mergers are scrapped, with the parties declining to appeal.

This proclivity to abandon mergers is surprising, given firms’ high success rate in court. Of the six merger-annulment appeals in the ECJ’s database (CK Hutchison Holdings Ltd.’s acquisition of Telefónica Europe Plc; Ryanair’s acquisition of a controlling stake in Aer Lingus; a proposed merger between Deutsche Börse and NYSE Euronext; Tetra Laval’s takeover of Sidel Group; a merger between Schneider Electric SA and Legrand SA; and Airtours’ acquisition of First Choice), the merging firms won four. While precise numbers are harder to come by in the United States, it is also reportedly rare for U.S. antitrust enforcers to win merger-challenge cases.

One explanation is that only marginal cases ever make it to court. In other words, firms with weak cases are, all else being equal, less likely to litigate. However, that is unlikely to explain all abandoned deals.

There are documented cases in which it was clearly delays, rather than self-selection, that caused firms to scrap planned mergers. In the EU’s Airtours proceedings, the merging parties dropped their transaction even though they went on to prevail in court (and First Choice, the target firm, was acquired by another rival). This is inconsistent with the notion that proposed mergers are abandoned only when the parties have a weak case to challenge (the Commission’s decision was widely seen as controversial).

Antitrust policymakers also generally acknowledge that mergers are often time-sensitive. That’s why merger rules on both sides of the Atlantic tend to impose strict timelines within which antitrust authorities must review deals.

In the end, if self-selection based on case strength were the only criteria merging firms used in deciding to appeal a merger challenge, one would not expect an equilibrium in which firms prevail in more than two-thirds of cases. If firms anticipated that a successful court case would preserve a multi-billion dollar merger, the relatively small burden of legal fees should not dissuade them from litigating, even if their chance of success was tiny. We would expect to see more firms losing in court.
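
To see why self-selection alone struggles to explain the pattern, consider a back-of-the-envelope expected-value calculation. The sketch below uses purely illustrative figures—the deal value, legal fees, and success probability are assumptions, not numbers from any actual case:

```python
# Back-of-the-envelope expected value of litigating a merger challenge.
# All figures are illustrative assumptions, not drawn from any real case.

deal_value = 5_000_000_000   # value preserved if the merger survives review (USD)
legal_fees = 20_000_000      # assumed cost of litigating the challenge (USD)
p_success = 0.10             # assumed (deliberately low) probability of prevailing in court

expected_gain = p_success * deal_value - legal_fees
print(f"Expected net gain from litigating: ${expected_gain:,.0f}")
# Even at a 10% chance of success, the expected gain (~$480 million) dwarfs the legal fees,
# so weak cases alone cannot explain why so few challenged deals are litigated.
```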

The upshot is that antitrust challenges and prohibition decisions likely cause at least some firms to abandon their deals because court proceedings are not seen as an effective remedy. This perception, in turn, reinforces authorities’ bargaining position and thus encourages firms to offer excessive remedies in hopes of staving off lengthy litigation.

Conclusion

A general rule of policymaking is that rules should seek to ensure that agents internalize both the positive and negative effects of their decisions. This, in turn, should ensure that they behave efficiently.

In the field of merger control, those incentives are misaligned. Given the prevailing political climate on both sides of the Atlantic, challenging large corporate acquisitions likely generates important political capital for antitrust authorities. But wrongful merger prohibitions are unlikely to elicit the kinds of judicial rebukes that would compel authorities to proceed more carefully.

Put differently, in the field of antitrust law, court proceedings ought to serve as a guardrail to ensure that enforcement decisions ultimately benefit consumers. When that shield is removed, it is no longer a given that authorities—who, in theory, act as agents of society—will act in the best interests of that society, rather than maximize their own preferences.

Ideally, we should ensure that antitrust authorities bear the social costs of faulty decisions, by compensating, at least, the direct victims of their actions (i.e., the merging firms). However, this would likely require new legislation to that effect, as there currently are too many obstacles to such cases. It is thus unlikely to represent a short-term solution.

In the meantime, regulatory restraint appears to be the only realistic solution. Or, one might say, authorities should “move carefully and avoid breaking stuff.”

The European Commission and its supporters were quick to claim victory following last week’s long-awaited General Court of the European Union ruling in the Google Shopping case. It’s hard to fault them. The judgment is ostensibly an unmitigated win for the Commission, with the court upholding nearly every aspect of its decision. 

However, the broader picture is much less rosy for both the Commission and the plaintiffs. The General Court’s ruling notably provides strong support for maintaining the current remedy package, in which rivals can bid for shopping box placement. This makes the Commission’s earlier rejection of essentially the same remedy in 2014 look increasingly frivolous. It also pours cold water on rivals’ hopes that it might be replaced with something more far-reaching.

More fundamentally, the online world continues to move further from the idealistic conception of an “open internet” that regulators remain determined to foist on consumers. Indeed, users consistently choose convenience over openness, thus rejecting the vision of online markets upon which both the Commission’s decision and the General Court’s ruling are premised. 

The Google Shopping case will ultimately prove to be both a pyrrhic victory and a monument to the pitfalls of myopic intervention in digital markets.

Google’s big remedy win

The main point of law addressed in the Google Shopping ruling concerns the distinction between self-preferencing and refusals to deal. Contrary to Google’s defense, the court ruled that self-preferencing can constitute a standalone abuse under Article 102 of the Treaty on the Functioning of the European Union (TFEU). The Commission was thus free to dispense with the stringent conditions laid out in the 1998 Bronner ruling.

This undoubtedly represents an important victory for the Commission, as it will enable the Commission to launch new proceedings against both Google and other online platforms. However, the ruling will also constrain the Commission’s available remedies, and rightly so.

The origins of the Google Shopping decision are enlightening. Several rivals sought improved access to the top of the Google Search page. The Commission was receptive to those calls, but faced important legal constraints. The natural solution would have been to frame its case as a refusal to deal, which would call for a remedy in which a dominant firm grants rivals access to its infrastructure (be it physical or virtual). But going down this path would notably have required the Commission to show that effective access was “indispensable” for rivals to compete (one of the so-called Bronner conditions)—something that was most likely not the case here. 

Sensing these difficulties, the Commission framed its case in terms of self-preferencing, surmising that this would entail a much softer legal test. The General Court’s ruling vindicates this assessment (at least barring a successful appeal by Google):

240    It must therefore be concluded that the Commission was not required to establish that the conditions set out in the judgment of 26 November 1998, Bronner (C‑7/97, EU:C:1998:569), were satisfied […]. [T]he practices at issue are an independent form of leveraging abuse which involve […] ‘active’ behaviour in the form of positive acts of discrimination in the treatment of the results of Google’s comparison shopping service, which are promoted within its general results pages, and the results of competing comparison shopping services, which are prone to being demoted.

This more expedient approach, however, entails significant limits that will undercut both the Commission and rivals’ future attempts to extract more far-reaching remedies from Google.

Because the underlying harm is no longer the denial of access, but rivals being treated less favorably, the available remedies are much narrower. Google must merely ensure that it does not treat itself more favorably than rivals, regardless of whether those rivals ultimately access its infrastructure and manage to compete. The General Court says as much when it explains the theory of harm in the case at hand:

287. Conversely, even if the results from competing comparison shopping services would be particularly relevant for the internet user, they can never receive the same treatment as results from Google’s comparison shopping service, whether in terms of their positioning, since, owing to their inherent characteristics, they are prone to being demoted by the adjustment algorithms and the boxes are reserved for results from Google’s comparison shopping service, or in terms of their display, since rich characters and images are also reserved to Google’s comparison shopping service. […] they can never be shown in as visible and as eye-catching a way as the results displayed in Product Universals.

Regulation 1/2003 (Art. 7.1) ensures the European Commission can only impose remedies that are “proportionate to the infringement committed and necessary to bring the infringement effectively to an end.” This has obvious ramifications for the Google Shopping remedy.

Under the remedy accepted by the Commission, Google agreed to auction off access to the Google Shopping box. Google and rivals would thus compete on equal footing to display comparison shopping results.

[Illustrations taken from Graf & Mostyn, 2020]

Rivals and their consultants decried this outcome, and Margrethe Vestager intimated that the Commission might review the remedy package. Both camps essentially argued that the remedy did not meaningfully boost traffic to rival comparison shopping services (CSSs) because those services were not winning the best auction slots:

All comparison shopping services other than Google’s are hidden in plain sight, on a tab behind Google’s default comparison shopping page. Traffic cannot get to them, but instead goes to Google and on to merchants. As a result, traffic to comparison shopping services has fallen since the remedy—worsening the original abuse.

Or, as Margrethe Vestager put it:

We may see a show of rivals in the shopping box. We may see a pickup when it comes to clicks for merchants. But we still do not see much traffic for viable competitors when it comes to shopping comparison

But these arguments are entirely beside the point. If the infringement had been framed as a refusal to supply, it might be relevant that rivals cannot access the shopping box at what is, for them, a cost-effective price. Because the infringement was framed in terms of self-preferencing, all that matters is whether Google treats its own service and rivals’ services equally.

I am not aware of a credible claim that this is not the case. At best, critics have suggested the auction mechanism favors Google because it essentially pays itself:

The auction mechanism operated by Google to determine the price paid for PLA clicks also disproportionately benefits Google. CSSs are discriminated against per clickthrough, as they are forced to cede most of their profit margin in order to successfully bid […] Google, contrary to rival CSSs, does not, in reality, have to incur the auction costs and bid away a great part of its profit margins.

But this reasoning completely omits Google’s opportunity costs. Imagine a hypothetical (and oversimplified) setting where retailers are willing to pay Google or rival CSSs 13 euros per click-through. Imagine further that rival CSSs can serve these clicks at a cost of 2 euros, compared to 3 euros for Google (excluding the auction fee). Google is less efficient in this hypothetical. In this setting, rivals should be willing to bid up to 11 euros per click (the difference between what they expect to earn and their other costs). Critics claim Google will be willing to bid higher because the money it pays itself during the auction is not really a cost (it ultimately flows back into Google’s pockets). That is clearly false.

To understand this, readers need only consider Google’s point of view. On the one hand, it could pay itself 11 euros (and some tiny increment) to win the auction. Its revenue per click-through would be 10 euros (13 euros per click-through, minus its cost of 3 euros). On the other hand, it could underbid rivals by a tiny increment, ensuring they bid 11 euros. When its critics argue that Google has an advantage because it pays itself, they are ultimately claiming that 10 is larger than 11.
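
To make the arithmetic explicit, here is a minimal sketch of the hypothetical above, using the same illustrative euro figures as the text (nothing here comes from actual auction data):

```python
# Illustrative figures from the hypothetical above (euros per click-through).
retailer_payment = 13   # what retailers will pay whoever delivers the click
google_cost = 3         # Google's cost of serving the click (excluding the auction fee)
rival_cost = 2          # the rival CSS's cost of serving the click

# Rivals will bid up to their margin over cost.
rival_max_bid = retailer_payment - rival_cost            # 11 euros

# Option 1: Google outbids the rival. The auction fee it "pays itself" nets out,
# so its payoff is the retailer payment minus its own serving cost.
payoff_if_google_wins = retailer_payment - google_cost   # 10 euros

# Option 2: Google lets the rival win and collects the rival's bid as auction revenue.
payoff_if_rival_wins = rival_max_bid                     # 11 euros

print(payoff_if_google_wins, payoff_if_rival_wins)       # 10 11
# Letting the more efficient rival win pays Google more, so "paying itself" confers no edge.
```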

Google’s remedy could hardly be more neutral. If it wins more auction slots than rival CSSs, the appropriate inference should be that it is simply more efficient. Nothing in the Commission’s decision or the General Court’s ruling precludes that outcome. In short, while Google has (for the time being, at least) lost its battle to appeal the Commission’s decision, the remedy package—the same one it put forward way back in 2014—has never looked stronger.

Good news for whom?

The above is mostly good news for both Google and consumers, who will be relieved that the General Court’s ruling preserves Google’s ability to show specialized boxes (of which the shopping unit is but one example). But that should not mask the tremendous downsides of both the Commission’s case and the court’s ruling. 

The Commission’s and rivals’ misapprehensions surrounding the Google Shopping remedy, as well as the General Court’s strong stance against self-preferencing, reveal a broader misunderstanding about online markets that also runs through other digital-regulation initiatives, such as the Digital Markets Act and the American Choice and Innovation Online Act.

Policymakers wrongly imply that platform neutrality is a good in and of itself. They assume incumbent platforms generally have an incentive to favor their own services, and that preventing them from doing so is beneficial to both rivals and consumers. Yet neither of these statements is correct.

Economic research suggests self-preferencing is harmful only in exceptional circumstances. That is true of the traditional literature on platform threats (here and here), where harm is premised on the notion that rivals will use the downstream market, ultimately, to compete with an upstream incumbent. It’s also true in more recent scholarship that compares dual-mode platforms to pure marketplaces and resellers, where harm hinges on a platform being able to immediately imitate rivals’ offerings. Even this ignores the significant efficiencies that might simultaneously arise from self-preferencing and, more broadly, from closed platforms. In short, rules that categorically prohibit self-preferencing by dominant platforms overshoot the mark, and the General Court’s Google Shopping ruling is a troubling development in that regard.

It is also naïve to think that prohibiting self-preferencing will automatically benefit rivals and consumers (as opposed to harming the latter and leaving the former no better off). If self-preferencing is not anticompetitive, then propping up inefficient firms will at best be a futile exercise in preserving failing businesses. At worst, it would impose significant burdens on consumers by destroying valuable synergies between the platform and its own downstream service.

Finally, if the past years teach us anything about online markets, it is that consumers place a much heavier premium on frictionless user interfaces than on open platforms. TikTok is arguably a much more “closed” experience than other sources of online entertainment, like YouTube or Reddit (i.e. users have less direct control over their experience). Yet many observers have pinned its success, among other things, on its highly intuitive and simple interface. The emergence of Vinted, a European pre-owned goods platform, is another example of competition through a frictionless user experience.

There is a significant risk that, by seeking to boost “choice,” intervention by competition regulators against self-preferencing will ultimately remove one of the benefits users value most. By increasing the information users need to process, there is a risk that non-discrimination remedies will merely add pain points to the underlying purchasing process. In short, while Google Shopping is nominally a victory for the Commission and rivals, it is also a testament to the futility and harmfulness of myopic competition intervention in digital markets. Consumer preferences cannot be changed by government fiat, nor can the fact that certain firms are more efficient than others (at least, not without creating significant harm in the process). It is time this simple conclusion made its way into European competition thinking.

Why do digital industries routinely lead to one company having a very large share of the market (at least if one defines markets narrowly)? To anyone familiar with competition policy discussions, the answer might seem obvious: network effects, scale-related economies, and other barriers to entry lead to winner-take-all dynamics in platform industries. Accordingly, it is believed that the first platform to successfully unlock a given online market enjoys a decisive first-mover advantage.

This narrative has become ubiquitous in policymaking circles. Thinking of this sort notably underpins high-profile reports on competition in digital markets (here, here, and here), as well as ensuing attempts to regulate digital platforms, such as the draft American Innovation and Choice Online Act and the EU’s Digital Markets Act.

But are network effects and the like the only way to explain why these markets look the way they do? While there is no definitive answer, scholars routinely overlook an alternative explanation that tends to undercut the narrative that tech markets have become non-contestable.

The alternative model is simple: faced with zero prices and the almost complete absence of switching costs, users have every reason to join their preferred platform. If user preferences are relatively uniform and one platform enjoys a meaningful quality advantage, it is only natural to expect that most consumers will gravitate toward the same platform—even though the market remains highly contestable. On the other side of the equation, because platforms face very few capacity constraints, there are few limits to a given platform’s growth. As will be explained throughout this piece, this intuition is as old as economics itself.
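To see how this alternative model can generate concentration without foreclosing competition, consider a toy simulation. It is purely illustrative: the platform names, quality levels, and size of the idiosyncratic taste shock are assumptions, not estimates.

```python
import random

# A toy sketch, not an empirical claim: users pay zero prices, face zero
# switching costs, and simply join whichever platform they value most.
# Platform names, quality levels, and the taste-shock size are assumptions.

random.seed(0)

def market_shares(qualities, n_users=100_000):
    """Share of users who pick each platform when tastes are broadly similar."""
    counts = {name: 0 for name in qualities}
    for _ in range(n_users):
        # Each user gets a small idiosyncratic taste shock around the common ranking.
        pick = max(qualities, key=lambda name: qualities[name] + random.gauss(0, 0.05))
        counts[pick] += 1
    return {name: round(counts[name] / n_users, 3) for name in qualities}

# One platform with a modest quality edge ends up with a dominant share...
print(market_shares({"Incumbent": 1.00, "Rival": 0.85}))

# ...yet a slightly better entrant flips the market almost overnight, because
# nothing ties users to the old leader: concentration and contestability coexist.
print(market_shares({"Incumbent": 1.00, "Rival": 0.85, "Entrant": 1.10}))
```

The point of the sketch is simply that a dominant market share and contestability can coexist: nothing keeps users with the leader other than its quality edge.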

The Bertrand Paradox

In 1883, French mathematician Joseph Bertrand published a powerful critique of two of the most high-profile economic thinkers of his time: the late Antoine Augustin Cournot and Léon Walras (it would be another seven years before Alfred Marshall published his famous Principles of Economics).

Bertrand criticized several of Cournot and Walras’ widely accepted findings. This included Cournot’s conclusion that duopoly competition would lead to prices above marginal cost—or, in other words, that duopolies were imperfectly competitive.

By reformulating the problem slightly, Bertrand arrived at the opposite conclusion. He argued that each firm’s incentive to undercut its rival would ultimately lead to marginal-cost pricing, with one seller potentially capturing the entire market:

There is a decisive objection [to Cournot’s model]: According to his hypothesis, no [supracompetitive] equilibrium is possible. There is no limit to price decreases; whatever the joint price being charged by firms, a competitor could always undercut this price and, with few exceptions, attract all consumers. If the competitor is allowed to get away with this [i.e. the rival does not react], it will double its profits.

This result is mainly driven by the assumption that, unlike in Cournot’s model, firms can immediately respond to their rival’s chosen price/quantity. In other words, Bertrand implicitly framed the competitive process as price competition, rather than quantity competition (under price competition, firms do not face any capacity constraints and they cannot commit to producing given quantities of a good):

If Cournot’s calculations mask this result, it is because of a remarkable oversight. Referring to them as D and D’, Cournot deals with the quantities sold by each of the two competitors and treats them as independent variables. He assumes that if one were to change by the will of one of the two sellers, the other one could remain fixed. The opposite is evidently true.

This later came to be known as the “Bertrand paradox”—the notion that duopoly-market configurations can produce the same outcome as perfect competition (i.e., P=MC).
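In modern textbook form (a stylized restatement of the standard argument, not Bertrand’s own notation), the result can be derived in a few lines. Two firms sell a homogeneous good at the same constant marginal cost $c$; consumers buy from whichever firm posts the lower price, splitting demand $D(p)$ evenly in the event of a tie:

$$
\pi_i(p_i, p_j) =
\begin{cases}
(p_i - c)\,D(p_i) & \text{if } p_i < p_j,\\
\tfrac{1}{2}\,(p_i - c)\,D(p_i) & \text{if } p_i = p_j,\\
0 & \text{if } p_i > p_j.
\end{cases}
$$

If both firms charged any common price $p > c$, either one could cut its price to $p - \varepsilon$ and serve the entire market, earning $(p - \varepsilon - c)\,D(p - \varepsilon)$, which for small $\varepsilon$ is nearly twice its tie-splitting profit (Bertrand’s “it will double its profits”). Pricing below $c$ generates losses. The only pair of mutual best responses is therefore $p_1 = p_2 = c$: marginal-cost pricing and zero profits, with only two sellers.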

But while Bertrand’s critique was ostensibly directed at Cournot’s model of duopoly competition, his underlying point was much broader. Above all, Bertrand seemed preoccupied with the notion that expressing economic problems mathematically merely gives them a veneer of accuracy. In that sense, he was one of the first economists (at least to my knowledge) to argue that the choice of assumptions has a tremendous influence on the predictions of economic models, potentially rendering them unreliable:

On other occasions, Cournot introduces assumptions that shield his reasoning from criticism—scholars can always present problems in a way that suits their reasoning.

All of this is not to say that Bertrand’s predictions regarding duopoly competition necessarily hold in real-world settings; evidence from experimental settings is mixed. Instead, the point is epistemological. Bertrand’s reasoning was groundbreaking because he ventured that market structures are not the sole determinants of consumer outcomes. More broadly, he argued that assumptions regarding the competitive process hold significant sway over the results that a given model may produce (and, as a result, over normative judgements concerning the desirability of given market configurations).

The Theory of Contestable Markets

Bertrand is certainly not the only economist to have suggested market structures alone do not determine competitive outcomes. In the early 1980s, William Baumol (and various co-authors) went one step further. Baumol argued that, under certain conditions, even monopoly market structures could deliver perfectly competitive outcomes. This thesis thus rejected the Structure-Conduct-Performance (“SCP”) Paradigm that dominated policy discussions of the time.

Baumol’s main point was that industry structure is not the main driver of market “contestability,” which is the key determinant of consumer outcomes. In his words:

In the limit, when entry and exit are completely free, efficient incumbent monopolists and oligopolists may in fact be able to prevent entry. But they can do so only by behaving virtuously, that is, by offering to consumers the benefits which competition would otherwise bring. For every deviation from good behavior instantly makes them vulnerable to hit-and-run entry.

For instance, it is widely accepted that “perfect competition” leads to low prices because firms are price-takers; if one does not sell at marginal cost, it will be undercut by rivals. Observers often assume this is due to the number of independent firms on the market. Baumol suggests this is wrong. Instead, the result is driven by the sanction that firms face for deviating from competitive pricing.

In other words, numerous competitors are a sufficient, but not a necessary, condition for competitive pricing. Monopolies can produce the same outcome when there is a credible threat of entry and an incumbent’s deviation from competitive pricing would be sanctioned. This is notably the case when barriers to entry are extremely low.

Take this hypothetical example from the world of cryptocurrencies. It is largely irrelevant to a user whether there are few or many crypto exchanges on which to trade coins, nonfungible tokens (NFTs), etc. What does matter is that there is at least one exchange that meets that user’s needs in terms of both price and quality of service. This could happen because there are many competing exchanges, or because a failure by the few exchanges (or even the single exchange) that do exist to meet those needs would attract the entry of others to which users could readily switch—thus keeping the behavior of the existing exchanges in check.
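A minimal sketch of this hit-and-run logic, assuming hypothetical cost figures and instantaneous, costless switching, might look as follows:

```python
# A minimal sketch of Baumol-style hit-and-run entry. All numbers are
# hypothetical: the point is only that, with negligible entry costs and
# costless switching, any price above cost invites an entrant that takes
# the whole market, so even a sole incumbent must price competitively.

AVERAGE_COST = 10.0   # assumed per-user cost of running an exchange
UNDERCUT = 0.01       # how much a hit-and-run entrant shaves off the price
USERS = 1_000

def market_outcome(incumbent_price: float) -> dict:
    """Who ends up serving the market, and at what price?"""
    if incumbent_price > AVERAGE_COST:
        # Supracompetitive price: a clone enters, undercuts, and captures all users.
        return {"served_by": "entrant", "price": incumbent_price - UNDERCUT, "users": USERS}
    # At (or below) average cost, undercutting is unprofitable and no one enters.
    return {"served_by": "incumbent", "price": incumbent_price, "users": USERS}

for price in (15.0, 12.0, 10.0):
    print(price, "->", market_outcome(price))
# Only the competitive price is sustainable, even for a single, 100%-share incumbent.
```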

This has far-reaching implications for antitrust policy, as Baumol was quick to point out:

This immediately offers what may be a new insight on antitrust policy. It tells us that a history of absence of entry in an industry and a high concentration index may be signs of virtue, not of vice. This will be true when entry costs in our sense are negligible.

Given the above, Baumol surmised that industry structure must be driven by endogenous factors—such as firms’ cost structures—rather than by the intensity of competition that firms face. For instance, scale economies might make monopoly (or another structure) the most efficient configuration in some industries. But so long as rivals can sanction incumbents for failing to compete, the market remains contestable. Accordingly, at least in some industries, both the most efficient and the most contestable market configuration may entail some level of concentration.

To put this last point in even more concrete terms, online platform markets may have features that make scale (and large market shares) efficient. If so, there is every reason to believe that competition could lead to more, not less, concentration. 

How Contestable Are Digital Markets?

The insights of Bertrand and Baumol have important ramifications for contemporary antitrust debates surrounding digital platforms. Indeed, it is critical to ascertain whether the (relatively) concentrated market structures we see in these industries are a sign of superior efficiency (and are consistent with potentially intense competition), or whether they are merely caused by barriers to entry.

The barrier-to-entry explanation has been repeated ad nauseam in recent scholarly reports, competition decisions, and pronouncements by legislators. There is thus little need to restate that thesis here. On the other hand, the contestability argument is almost systematically ignored.

Several factors suggest that online platform markets are far more contestable than critics routinely make them out to be.

First and foremost, consumer switching costs are extremely low for most online platforms. To cite but a few examples: Changing your default search engine requires at most a couple of clicks; joining a new social network can be done by downloading an app and importing your contacts to the app; and buying from an alternative online retailer is almost entirely frictionless, thanks to intermediaries such as PayPal.

These zero or near-zero switching costs are compounded by consumers’ ability to “multi-home.” In simple terms, joining TikTok does not require users to close their Facebook account. And the same applies to other online services. As a result, there is almost no opportunity cost to join a new platform. This further reduces the already tiny cost of switching.

Decades of app development have greatly improved the quality of applications’ graphical user interfaces (GUIs), to such an extent that costs to learn how to use a new app are mostly insignificant. Nowhere is this more apparent than for social media and sharing-economy apps (it may be less true for productivity suites that enable more complex operations). For instance, remembering a couple of intuitive swipe motions is almost all that is required to use TikTok. Likewise, ridesharing and food-delivery apps merely require users to be familiar with the general features of other map-based applications. It is almost unheard of for users to complain about usability—something that would have seemed impossible in the early 21st century, when complicated interfaces still plagued most software.

A second important argument in favor of contestability is that, by and large, online platforms face only limited capacity constraints. In other words, platforms can expand output rapidly (though not necessarily costlessly).

Perhaps the clearest example of this is the sudden rise of the Zoom service in early 2020. As a result of the COVID pandemic, Zoom went from around 10 million daily active users in early 2020 to more than 300 million by late April 2020. Despite being a relatively data-intensive service, Zoom had little trouble meeting the new demand created by this more-than-30-fold increase in its user base. The service never had to turn down users, reduce call quality, or significantly increase its price. In short, capacity largely followed demand for its service. Online industries thus seem closer to the Bertrand model of competition, where the best platform can almost immediately serve any consumers that demand its services.

Conclusion

Of course, none of this should be construed to declare that online markets are perfectly contestable. The central point is, instead, that critics are too quick to assume they are not. Take the following examples.

Scholars routinely cite the putatively strong concentration of digital markets to argue that big tech firms do not face strong competition, but this is a non sequitur. As Bertrand and Baumol (and others) show, what matters is not whether digital markets are concentrated, but whether they are contestable. If a superior rival could rapidly gain user traction, this alone will discipline the behavior of incumbents.

Markets where incumbents do not face significant entry from competitors are just as consistent with vigorous competition as they are with barriers to entry. Rivals could decline to enter either because incumbents have aggressively improved their product offerings or because they are shielded by barriers to entry (as critics suppose). The former is consistent with competition, the latter with monopoly slack.

Similarly, it would be wrong to presume, as many do, that concentration in online markets is necessarily driven by network effects and other scale-related economies. As ICLE scholars have argued elsewhere (here, here and here), these forces are not nearly as decisive as critics assume (and it is debatable that they constitute barriers to entry).

Finally, and perhaps most importantly, this piece has argued that many factors could explain the relatively concentrated market structures that we see in digital industries. The absence of switching costs and capacity constraints are but two such examples. These explanations, overlooked by many observers, suggest digital markets are more contestable than is commonly perceived.

In short, critics’ failure to meaningfully grapple with these issues shapes the prevailing zeitgeist in tech-policy debates. Cournot and Bertrand’s intuitions about oligopoly competition may be more than a century old, but they continue to be tested empirically. It is about time the same empirical discipline was applied to tech-policy debates.

Still from Squid Game, Netflix and Siren Pictures Inc., 2021

Recent commentary on the proposed merger between WarnerMedia and Discovery, as well as Amazon’s acquisition of MGM, often has included the suggestion that the online content-creation and video-streaming markets are excessively consolidated, or that they will become so absent regulatory intervention. For example, in a recent letter to the U.S. Justice Department (DOJ), the American Antitrust Institute and Public Knowledge opine that:

Slow and inadequate oversight risks the streaming market going the same route as cable—where consumers have little power, few options, and where consolidation and concentration reign supreme. A number of threats to competition are clear, as discussed in this section, including: (1) market power issues surrounding content and (2) the role of platforms in “gatekeeping” to limit competition.

But the AAI/PK assessment overlooks key facts about the video-streaming industry, some of which suggest that, if anything, these markets currently suffer from too much fragmentation.

The problem is well-known: any individual video-streaming service will offer only a fraction of the content that viewers want, but budget constraints limit the number of services that a household can afford to subscribe to. It may be counterintuitive, but consolidation in the market for video-streaming can solve both problems at once.

One subscription is not enough

Surveys find that U.S. households currently maintain, on average, four video-streaming subscriptions. This explains why even critics concede that a plethora of streaming services compete for consumer eyeballs. For instance, the AAI and PK point out that:

Today, every major media company realizes the value of streaming and a bevy of services have sprung up to offer different catalogues of content.

These companies have challenged the market leader, Netflix and include: Prime Video (2006), Hulu (2007), Paramount+ (2014), ESPN+ (2018), Disney+ (2019), Apple TV+ (2019), HBO Max (2020), Peacock (2020), and Discovery+ (2021).

With content scattered across several platforms, multiple subscriptions are the only way for households to access all (or most) of the programs they desire. Indeed, other than price, library sizes and the availability of exclusive content are reportedly the main drivers of consumer purchase decisions.

Of course, there is nothing inherently wrong with the current equilibrium in which consumers multi-home across multiple platforms. One potential explanation is demand for high-quality exclusive content, which requires tremendous investment to develop and promote. Production costs for TV series routinely run in the tens of millions of dollars per episode (see here and here). Economic theory predicts these relationship-specific investments made by both producers and distributors will cause producers to opt for exclusive distribution or vertical integration. The most sought-after content is thus exclusive to each platform. In other words, exclusivity is likely the price that users must pay to ensure that high-quality entertainment continues to be produced.

But while this paradigm has many strengths, the ensuing fragmentation can be detrimental to consumers, as this may lead to double marginalization or mundane issues like subscription fatigue. Consolidation can be a solution to both.

Substitutes, complements, or unrelated?

As Hal Varian explains in his seminal book, the relationship between two goods can range between three extremes: perfect substitutes (i.e., two goods are perfectly interchangeable); perfect complements (i.e., there is no value to owning one good without the other); or goods that exist in independent markets (i.e., the price of one good does not affect demand for the other).

These distinctions are critical when it comes to market concentration. All else equal—which is obviously not the case in reality—increased concentration leads to lower prices for complements and higher prices for substitutes. And if demand for two goods is unrelated, then bringing them under common ownership should not affect their prices.
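To see why common ownership of complements tends to lower prices, consider a textbook Cournot-complements sketch, assuming linear demand and zero marginal costs (the setup is deliberately stylized and is not calibrated to the streaming market). Two goods are consumed together, and demand for the pair is $Q = a - (p_1 + p_2)$. Under separate ownership, each firm maximizes $p_i\,(a - p_1 - p_2)$, giving best responses $p_i = (a - p_j)/2$ and a symmetric equilibrium of

$$p_1 = p_2 = \frac{a}{3}, \qquad p_1 + p_2 = \frac{2a}{3}, \qquad Q = \frac{a}{3}.$$

A single owner of both goods instead chooses the combined price $P$ to maximize $P\,(a - P)$, yielding

$$P = \frac{a}{2} < \frac{2a}{3}, \qquad Q = \frac{a}{2} > \frac{a}{3}.$$

The merged firm internalizes the fact that raising one price depresses demand for the other, so the bundle gets cheaper and output expands. For substitutes, the same internalization runs in the opposite direction, which is why the classification above matters.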

To at least some extent, streaming services should be seen as complements rather than substitutes—or, at least, as services with unrelated demand. If they were perfect substitutes, consumers would be indifferent between two Netflix subscriptions or one Netflix plan and one Amazon Prime plan. That is obviously not the case. Nor are they perfect complements, which would mean that Netflix is worthless without Amazon Prime, Disney+, and other services.

However, there is reason to believe there exists some complementarity between streaming services, or at least that demand for them is independent. Most consumers subscribe to multiple services, and almost no one subscribes to the same service twice:

SOURCE: Finance Buzz

This assertion is also supported by the ubiquitous bundling of subscriptions in the cable distribution industry, which also has recently been seen in video-streaming markets. For example, in the United States, Disney+ can be purchased in a bundle with Hulu and ESPN+.

The key question is whether each service is more valuable, less valuable, or just as valuable in isolation as it is when bundled. If households place some additional value on having a complete video offering (one that includes child entertainment, sports, more mature content, etc.), and if they value the convenience of accessing more of their content via a single app, then we can infer these services are to some extent complementary.

Finally, it is worth noting that any complementarity between these services would be largely endogenous. If the industry suddenly switched to a paradigm of non-exclusive content—as is broadly the case for audio streaming—the above analysis would be altered (though, as explained above, such a move would likely be detrimental to users). Streaming services would become substitutes if they offered identical catalogues.

In short, the extent to which streaming services are complements ultimately boils down to an empirical question that may fluctuate with industry practices. As things stand, there is reason to believe that these services feature some complementarities, or at least that demand for them is independent. In turn, this suggests that further consolidation within the industry would not lead to price increases and may even reduce them.

Consolidation can enable price discrimination

It is well-established that bundling entertainment goods can enable firms to better engage in price discrimination, often increasing output and reducing deadweight loss in the process.

Take George Stigler’s famous explanation for the practice of “block booking,” in which movie studios sold multiple films to independent movie theatres as a unit. Stigler assumes the underlying goods are neither substitutes nor complements:

Stigler, George J. (1963), “United States v. Loew’s Inc.: A Note on Block-Booking,” Supreme Court Review, Vol. 1963, No. 1, Article 2.

The upshot is that, when consumer tastes for content are idiosyncratic—as is almost certainly the case for movies and television series—it can counterintuitively make sense to sell differing content as a bundle. In doing so, the distributor avoids pricing consumers out of the content upon which they place a lower value. Moreover, this solution is more efficient than price discriminating on an unbundled basis, as doing so would require far more information on the seller’s part and would be vulnerable to arbitrage.
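A back-of-the-envelope illustration, using hypothetical reservation values in the spirit of Stigler’s note (the figures are invented for this example), shows the mechanics:

```python
# Hypothetical reservation values, in the spirit of Stigler's block-booking note:
# two buyers, two films, negatively correlated tastes. The figures are purely
# illustrative.
values = {
    "Buyer A": {"Film X": 8000, "Film Y": 2500},
    "Buyer B": {"Film X": 7000, "Film Y": 3000},
}

def best_uniform_price(reservation_values):
    """Return (revenue, price) for the revenue-maximizing single posted price."""
    return max(
        (price * sum(1 for v in reservation_values if v >= price), price)
        for price in reservation_values
    )

# Selling each film separately
separate_revenue = 0
for film in ("Film X", "Film Y"):
    revenue, price = best_uniform_price([values[b][film] for b in values])
    separate_revenue += revenue
    print(f"{film}: price {price}, revenue {revenue}")

# Selling both films as a bundle
bundle_revenue, bundle_price = best_uniform_price(
    [sum(values[b].values()) for b in values]
)

print(f"Separate pricing total: {separate_revenue}")              # 19000
print(f"Bundle: price {bundle_price}, revenue {bundle_revenue}")  # 20000
```

Because tastes are negatively correlated, the two buyers’ valuations of the bundle are more alike than their valuations of either film, so a single bundle price extracts more revenue without pricing either buyer out of the package.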

In short, bundling enables each consumer to access a much wider variety of content. This, in turn, provides a powerful rationale for mergers in the video-streaming space—particularly where they can bring together varied content libraries. Put differently, it cuts in favor of more, not less, concentration in video-streaming markets (at least, up to a certain point).

Finally, a wide array of scale-related economies further support the case for concentration in video-streaming markets. These include potential economies of scale, network effects, and reduced transaction costs.

The simplest of these ideas is that the cost of video streaming may decrease at the margin (i.e., serving each marginal viewer might be cheaper than the previous one). In other words, mergers of video-streaming services may enable platforms to operate at a more efficient scale. There has notably been some discussion of whether Netflix benefits from scale economies of this sort. But this is, of course, ultimately an empirical question. As I have written with Geoffrey Manne, we should not assume that this is the case for all digital platforms, or that these increasing returns are present at all ranges of output.
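The basic intuition behind such scale economies can be captured in a single line, under the purely stylized assumption that building and licensing a content library is a fixed cost $F$ while serving each additional subscriber costs a constant amount $c$. Average cost per subscriber is then

$$AC(q) = \frac{F}{q} + c,$$

which declines continuously as the subscriber base $q$ grows. Whether streaming costs actually look like this, and over what range of output, is precisely the empirical question flagged above.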

Likewise, the fact that content can earn greater revenues by reaching a wider audience (or a greater number of small niches) may increase a producer’s incentive to create high-quality content. For example, Netflix’s recent hit series Squid Game reportedly cost $16.8 million to produce a total of nine episodes. This is significant for a Korean-language thriller. These expenditures were likely only possible because of Netflix’s vast network of viewers. Video-streaming mergers can jump-start these effects by bringing previously fragmented audiences onto a single platform.

Finally, operating at a larger scale may enable firms and consumers to economize on various transaction and search costs. For instance, consumers don’t need to manage several subscriptions, and searching for content is easier within a single ecosystem.

Conclusion

In short, critics could hardly be more wrong in assuming that consolidation in the video-streaming industry will necessarily harm consumers. To the contrary, these mergers should be presumptively welcomed because, to a first approximation, they are likely to engender lower prices and reduce deadweight loss.

Critics routinely draw parallels between video streaming and the wave of consolidation that previously swept through the cable industry, citing those events as evidence that consolidation was (and still is) inefficient and exploitative of consumers. As AAI and PK frame it:

Moreover, given the broader competition challenges that reside in those markets, and the lessons learned from a failure to ensure competition in the traditional MVPD markets, enforcers should be particularly vigilant.

But while it might not have been ideal for all consumers, the comparatively laissez-faire approach to competition in the cable industry arguably facilitated the United States’ emergence as a global leader for TV programming. We are now witnessing what appears to be a similar trend in the online video-streaming market.

This is mostly a good thing. While a single streaming service might not be the optimal industry configuration from a welfare standpoint, it would be equally misguided to assume that fragmentation necessarily benefits consumers. In fact, as argued throughout this piece, there are important reasons to believe that the status quo—with at least 10 significant players—is too fragmented and that consumers would benefit from additional consolidation.

Questions about the role that economic theory should play in policy decisions are nothing new. Milton Friedman famously drew a distinction between “positive” and “normative” economics, notably arguing that theoretical models were valuable despite their unrealistic assumptions. Kenneth Arrow and Gerard Debreu’s highly theoretical work on General Equilibrium Theory is widely acknowledged as one of the most important achievements of modern economics.

But for all their intellectual value and academic merit, the use of models to inform policy decisions is not uncontroversial. There is indeed a long and unfortunate history of influential economic models turning out to be poor depictions (and predictors) of real-world outcomes.

This raises a key question: should policymakers use economic models to inform their decisions and, if so, how? This post uses the economics of externalities to illustrate both the virtues and pitfalls of economic modeling. Throughout economic history, externalities have routinely been cited to support claims of market failure and calls for government intervention. However, as explained below, these fears have frequently failed to withstand empirical scrutiny.

Today, similar models are touted to support government intervention in digital industries. Externalities are notably said to prevent consumers from switching between platforms, allegedly leading to unassailable barriers to entry and deficient venture-capital investment. Unfortunately, as explained below, the models that underpin these fears are highly abstracted and far removed from underlying market realities.

Ultimately, this post argues that, while models provide a powerful way of thinking about the world, naïvely transposing them to real-world settings is misguided. This is not to say that models are useless—quite the contrary. Indeed, “falsified” models can shed powerful light on economic behavior that would otherwise prove hard to understand.

Bees

Fears surrounding economic externalities are as old as modern economics. For example, in the 1950s, economists routinely cited bee pollination as a source of externalities and, ultimately, market failure.

The basic argument was straightforward: Bees and orchards provide each other with positive externalities. Bees cross-pollinate flowers, and orchards contain vast amounts of nectar upon which bees feed, thus improving honey yields. Accordingly, several famous economists argued that there was a market failure: bees fly where they please, and farmers cannot prevent bees from feeding on their blossoming flowers—allegedly causing underinvestment in both orchards and beekeeping. This led James Meade to conclude:

[T]he apple-farmer provides to the beekeeper some of his factors free of charge. The apple-farmer is paid less than the value of his marginal social net product, and the beekeeper receives more than the value of his marginal social net product.

A finding echoed by Francis Bator:

If, then, apple producers are unable to protect their equity in apple-nectar and markets do not impute to apple blossoms their correct shadow value, profit-maximizing decisions will fail correctly to allocate resources at the margin. There will be failure “by enforcement.” This is what I would call an ownership externality. It is essentially Meade’s “unpaid factor” case.

It took more than 20 years and painstaking research by Steven Cheung to conclusively debunk these assertions. So how did economic agents overcome this “insurmountable” market failure?

The answer, it turns out, was extremely simple. While bees do fly where they please, the relative placement of beehives and orchards has a tremendous impact on both fruit and honey yields. This is partly because bees have a very limited mean foraging range (roughly 2-3km). This left economic agents with ample scope to prevent free-riding.

Using these natural sources of excludability, they built a web of complex agreements that internalize the symbiotic virtues of beehives and fruit orchards. To cite Steven Cheung’s research:

Pollination contracts usually include stipulations regarding the number and strength of the colonies, the rental fee per hive, the time of delivery and removal of hives, the protection of bees from pesticide sprays, and the strategic placing of hives. Apiary lease contracts differ from pollination contracts in two essential aspects. One is, predictably, that the amount of apiary rent seldom depends on the number of colonies, since the farmer is interested only in obtaining the rent per apiary offered by the highest bidder. Second, the amount of apiary rent is not necessarily fixed. Paid mostly in honey, it may vary according to either the current honey yield or the honey yield of the preceding year.

But what of neighboring orchards? Wouldn’t these entail a more complex externality (i.e., could one orchard free-ride on agreements concluded between other orchards and neighboring apiaries)? Apparently not:

Acknowledging the complication, beekeepers and farmers are quick to point out that a social rule, or custom of the orchards, takes the place of explicit contracting: during the pollination period the owner of an orchard either keeps bees himself or hires as many hives per area as are employed in neighboring orchards of the same type. One failing to comply would be rated as a “bad neighbor,” it is said, and could expect a number of inconveniences imposed on him by other orchard owners. This customary matching of hive densities involves the exchange of gifts of the same kind, which apparently entails lower transaction costs than would be incurred under explicit contracting, where farmers would have to negotiate and make money payments to one another for the bee spillover.

In short, not only did the bee/orchard externality model fail, but it failed to account for extremely obvious counter-evidence. Even a rapid flip through the Yellow Pages (or, today, a search on Google) would have revealed a vibrant market for bee pollination. In short, the bee externalities, at least as presented in economic textbooks, were merely an economic “fable.” Unfortunately, they would not be the last.

The Lighthouse

Lighthouses provide another cautionary tale. Indeed, Henry Sidgwick, A.C. Pigou, John Stuart Mill, and Paul Samuelson all cited the externalities involved in the provision of lighthouse services as a source of market failure.

Here, too, the problem was allegedly straightforward. A lighthouse cannot prevent ships from free-riding on its services when they sail by it (i.e., it is mostly impossible to determine whether a ship has paid fees and to turn off the lighthouse if that is not the case). Hence there can be no efficient market for light dues (lighthouses were seen as a “public good”). As Paul Samuelson famously put it:

Take our earlier case of a lighthouse to warn against rocks. Its beam helps everyone in sight. A businessman could not build it for a profit, since he cannot claim a price from each user. This certainly is the kind of activity that governments would naturally undertake.

He added that:

[E]ven if the operators were able—say, by radar reconnaissance—to claim a toll from every nearby user, that fact would not necessarily make it socially optimal for this service to be provided like a private good at a market-determined individual price. Why not? Because it costs society zero extra cost to let one extra ship use the service; hence any ships discouraged from those waters by the requirement to pay a positive price will represent a social economic loss—even if the price charged to all is no more than enough to pay the long-run expenses of the lighthouse.

More than a century after it was first mentioned in economics textbooks, Ronald Coase finally laid the lighthouse myth to rest—rebutting Samuelson’s second claim in the process.

What piece of evidence had eluded economists for all those years? As Coase observed, contemporary economists had somehow overlooked the fact that large parts of the British lighthouse system were privately operated, and had been for centuries:

[T]he right to operate a lighthouse and to levy tolls was granted to individuals by Acts of Parliament. The tolls were collected at the ports by agents (who might act for several lighthouses), who might be private individuals but were commonly customs officials. The toll varied with the lighthouse and ships paid a toll, varying with the size of the vessel, for each lighthouse passed. It was normally a rate per ton (say 1/4d or 1/2d) for each voyage. Later, books were published setting out the lighthouses passed on different voyages and the charges that would be made.

In other words, lighthouses used a simple physical feature to create “excludability” and prevent free-riding. The main reason ships require lighthouses is to avoid hitting rocks when they make their way to a port. By tying port fees and light dues, lighthouse owners—aided by mild government-enforced property rights—could easily earn a return on their investments, thus disproving the lighthouse free-riding myth.

Ultimately, this meant that a large share of the British lighthouse system was privately operated throughout the 19th century, and this share would presumably have been more pronounced if government-run “Trinity House” lighthouses had not crowded out private investment:

The position in 1820 was that there were 24 lighthouses operated by Trinity House and 22 by private individuals or organizations. But many of the Trinity House lighthouses had not been built originally by them but had been acquired by purchase or as the result of the expiration of a lease.

Of course, this system was not perfect. Some ships (notably foreign ones that did not dock in the United Kingdom) might free-ride on this arrangement. It also entailed some level of market power: the ability to charge light dues meant that prices were higher than the “socially optimal” baseline of zero (the marginal cost of providing light is close to zero). That said, tying port fees and light dues might also have decreased double marginalization, to the benefit of sailors.

Samuelson was particularly wary of the market power that went hand in hand with the private provision of public goods, including lighthouses:

Being able to limit a public good’s consumption does not make it a true-blue private good. For what, after all, are the true marginal costs of having one extra family tune in on the program? They are literally zero. Why then prevent any family which would receive positive pleasure from tuning in on the program from doing so?

However, as Coase explained, light fees represented only a tiny fraction of a ship’s costs. In practice, they were thus unlikely to affect market output meaningfully:

[W]hat is the gain which Samuelson sees as coming from this change in the way in which the lighthouse service is financed? It is that some ships which are now discouraged from making a voyage to Britain because of the light dues would in future do so. As it happens, the form of the toll and the exemptions mean that for most ships the number of voyages will not be affected by the fact that light dues are paid. There may be some ships somewhere which are laid up or broken up because of the light dues, but the number cannot be great, if indeed there are any ships in this category.

Samuelson’s critique also falls prey to the Nirvana Fallacy pointed out by Harold Demsetz: markets might not be perfect, but neither is government intervention. Market power and imperfect appropriability are the two (paradoxical) pitfalls of the former; “white elephants,” underinvestment, and a lack of competition (and the information it generates) tend to stem from the latter.

Which of these solutions is superior, in each case, is an empirical question that early economists had simply failed to consider—assuming instead that market failure was systematic in markets that present prima facie externalities. In other words, models were taken as gospel without any circumspection about their relevance to real-world settings.

The Tragedy of the Commons

Externalities were also said to undermine the efficient use of “common pool resources,” such as grazing lands, common irrigation systems, and fisheries—resources where one agent’s use diminishes that of others, and where exclusion is either difficult or impossible.

The most famous formulation of this problem is Garrett Hardin’s highly influential (over 47,000 cites) “tragedy of the commons.” Hardin cited the example of multiple herdsmen occupying the same grazing ground:

The rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another … But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.

In more technical terms, each economic agent purportedly exerts an unpriced negative externality on the others, thus leading to the premature depletion of common pool resources. Hardin extended this reasoning to other problems, such as pollution and allegations of global overpopulation.
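A textbook linear sketch (a standard stylization, not Hardin’s own formalization) makes the externality explicit. Suppose $N$ herdsmen each graze $g_i$ animals, the total herd is $G = \sum_i g_i$, and the value per animal declines with crowding, $v(G) = a - bG$, with no other costs. A social planner choosing the total herd would maximize $G\,(a - bG)$, giving $G^{*} = \frac{a}{2b}$. Each herdsman, taking the others’ herds as given, instead maximizes $g_i\,\bigl(a - b(g_i + G_{-i})\bigr)$, and the symmetric Nash equilibrium yields

$$G^{N} = \frac{N}{N+1}\cdot\frac{a}{b} \;>\; \frac{a}{2b} \quad \text{for } N \ge 2,$$

with $G^{N} \to a/b$ (full dissipation of the pasture’s rents) as $N$ grows large. Each herdsman captures the entire value of his marginal animal while bearing only a fraction of the crowding cost it imposes on the herd as a whole, which is the unpriced externality Hardin had in mind.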

Although Hardin hardly documented any real-world occurrences of this so-called tragedy, his policy prescriptions were unequivocal:

The most important aspect of necessity that we must now recognize, is the necessity of abandoning the commons in breeding. No technical solution can rescue us from the misery of overpopulation. Freedom to breed will bring ruin to all.

As with many other theoretical externalities, empirical scrutiny revealed that these fears were greatly overblown. In her Nobel-winning work, Elinor Ostrom showed that economic agents often found ways to mitigate these potential externalities markedly. For example, mountain villages often implement rules and norms that limit the use of grazing grounds and wooded areas. Likewise, landowners across the world often set up “irrigation communities” that prevent agents from overusing water.

Along similar lines, Julian Morris and I conjecture that informal arrangements and reputational effects might mitigate opportunistic behavior in the standard essential patent industry.

These bottom-up solutions are certainly not perfect. Many common institutions fail—for example, Elinor Ostrom documents several problematic fisheries, groundwater basins and forests, although it is worth noting that government intervention was sometimes behind these failures. To cite but one example:

Several scholars have documented what occurred when the Government of Nepal passed the “Private Forest Nationalization Act” […]. Whereas the law was officially proclaimed to “protect, manage and conserve the forest for the benefit of the entire country”, it actually disrupted previously established communal control over the local forests. Messerschmidt (1986, p.458) reports what happened immediately after the law came into effect:

Nepalese villagers began freeriding — systematically overexploiting their forest resources on a large scale.

In any case, the question is not so much whether private institutions fail, but whether they do so more often than government intervention, be it regulation or property rights. In short, the “tragedy of the commons” is ultimately an empirical question: what works better in each case, government intervention, propertization, or emergent rules and norms?

More broadly, the key lesson is that it is wrong to blindly apply models while ignoring real-world outcomes. As Elinor Ostrom herself put it:

The intellectual trap in relying entirely on models to provide the foundation for policy analysis is that scholars then presume that they are omniscient observers able to comprehend the essentials of how complex, dynamic systems work by creating stylized descriptions of some aspects of those systems.

Dvorak Keyboards

In 1985, Paul David published an influential paper arguing that market failures undermined competition between the QWERTY and Dvorak keyboard layouts. This version of history then became a dominant narrative in the field of network economics, including works by Joseph Farrell & Garth Saloner, and Jean Tirole.

The basic claim was that QWERTY users’ reluctance to switch toward the putatively superior Dvorak layout exerted a negative externality on the rest of the ecosystem (and a positive externality on other QWERTY users), thus preventing the adoption of a more efficient standard. As Paul David put it:

Although the initial lead acquired by QWERTY through its association with the Remington was quantitatively very slender, when magnified by expectations it may well have been quite sufficient to guarantee that the industry eventually would lock in to a de facto QWERTY standard. […]

Competition in the absence of perfect futures markets drove the industry prematurely into standardization on the wrong system — where decentralized decision making subsequently has sufficed to hold it.

Unfortunately, many of the above papers paid little to no attention to actual market conditions in the typewriter and keyboard layout industries. Years later, Stan Liebowitz and Stephen Margolis undertook a detailed analysis of the keyboard layout market. They almost entirely rejected any notion that QWERTY prevailed despite it being the inferior standard:

Yet there are many aspects of the QWERTY-versus-Dvorak fable that do not survive scrutiny. First, the claim that Dvorak is a better keyboard is supported only by evidence that is both scant and suspect. Second, studies in the ergonomics literature find no significant advantage for Dvorak that can be deemed scientifically reliable. Third, the competition among producers of typewriters, out of which the standard emerged, was far more vigorous than is commonly reported. Fourth, there were far more typing contests than just the single Cincinnati contest. These contests provided ample opportunity to demonstrate the superiority of alternative keyboard arrangements. That QWERTY survived significant challenges early in the history of typewriting demonstrates that it is at least among the reasonably fit, even if not the fittest that can be imagined.

In short, there was little to no evidence supporting the view that QWERTY inefficiently prevailed because of network effects. The falsification of this narrative also weakens broader claims that network effects systematically lead to either excess momentum or excess inertia in standardization. Indeed, it is tempting to characterize all network industries with heavily skewed market shares as resulting from market failure. Yet the QWERTY/Dvorak story suggests that such a conclusion would be premature.

Killzones, Zoom, and TikTok

If you are still reading at this point, you might think that contemporary scholars would know better than to base calls for policy intervention on theoretical externalities. Alas, nothing could be further from the truth.

For instance, a recent paper by Sai Kamepalli, Raghuram Rajan and Luigi Zingales conjectures that the interplay between mergers and network externalities discourages the adoption of superior independent platforms:

If techies expect two platforms to merge, they will be reluctant to pay the switching costs and adopt the new platform early on, unless the new platform significantly outperforms the incumbent one. After all, they know that if the entering platform’s technology is a net improvement over the existing technology, it will be adopted by the incumbent after merger, with new features melded with old features so that the techies’ adjustment costs are minimized. Thus, the prospect of a merger will dissuade many techies from trying the new technology.

Although this key behavioral assumption drives the results of the theoretical model, the paper presents no evidence to support the contention that it occurs in real-world settings. Admittedly, the paper does present evidence of reduced venture capital investments after mergers involving large tech firms. But even on their own terms, this data simply does not support the authors’ behavioral assumption.

And this is no isolated example. Over the past couple of years, several scholars have called for more muscular antitrust intervention in networked industries. A common theme is that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in, thus raising barriers to entry for potential rivals (here, here, here).

But there are also countless counterexamples, where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.

Zoom is one of the most salient instances. As I have written previously:

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.

Along similar lines, Geoffrey Manne and Alec Stapp have put forward a multitude of other examples, including the demise of Yahoo, the disruption of early instant-messaging applications and websites, and MySpace’s rapid decline. In all these cases, outcomes do not match the predictions of theoretical models.

More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and its powerful algorithm are the most likely explanations for its success.

While these developments certainly do not disprove network effects theory, they eviscerate the common belief in antitrust circles that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. As in the previous examples, the question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet this question is systematically omitted from most policy discussions.

In Conclusion

My argument is not that models are without value. To the contrary, framing problems in economic terms—and simplifying them in ways that make them cognizable—enables scholars and policymakers to better understand where market failures might arise, and how these problems can be anticipated and solved by private actors. In other words, models alone cannot tell us that markets will fail, but they can direct inquiries and help us to understand why firms behave the way they do, and why markets (including digital ones) are organized in a given way.

In that respect, both the theoretical and empirical research cited throughout this post offer valuable insights for today’s policymakers.

For a start, as Ronald Coase argued in what is perhaps his most famous work, externalities (and market failure more generally) are a function of transaction costs. When these are low (relative to the value of a good), market failures are unlikely. This is perhaps clearest in the “Fable of the Bees” example. Given bees’ short foraging range, there were ultimately few real-world obstacles to writing contracts that internalized the mutual benefits of bees and orchards.

Perhaps more importantly, economic research sheds light on behavior that might otherwise be seen as anticompetitive. The rules and norms that bind farming/beekeeping communities, as well as users of common pool resources, could easily be analyzed as a cartel by naïve antitrust authorities. Yet externality theory suggests they play a key role in preventing market failure.

Along similar lines, mergers and acquisitions (as well as vertical integration, more generally) can reduce opportunism and other externalities that might otherwise undermine collaboration between firms (here, here and here). And much of the same is true for certain types of unilateral behavior. Tying video games to consoles (and pricing the console below cost) can help entrants overcome network externalities that might otherwise shield incumbents. Likewise, Google tying its proprietary apps to the open source Android operating system arguably enabled it to earn a return on its investments, thus overcoming the externality problem that plagues open source software.

All of this has implications that deserve far more attention than they currently receive in policy circles. Authorities around the world are seeking to regulate the tech space. Draft legislation has notably been tabled in the United States, the European Union, and the United Kingdom. These draft bills would all make it harder for large tech firms to implement various economic hierarchies, including mergers and certain contractual arrangements.

This is highly paradoxical. If digital markets are indeed plagued by network externalities and high transaction costs, as critics allege, then preventing firms from adopting complex hierarchies—which have traditionally been seen as a way to solve externalities—is just as likely to exacerbate problems. In other words, like the economists of old cited above, today’s policymakers appear to be focusing too heavily on simple models that predict market failure, and far too little on the mechanisms that firms have put in place to thrive within this complex environment.

The bigger picture is that far more circumspection is required when using theoretical models in real-world policy settings. Indeed, as Harold Demsetz famously put it, the purpose of normative economics is not so much to identify market failures, but to help policymakers determine which of several alternative institutions will deliver the best outcomes for consumers:

This nirvana approach differs considerably from a comparative institution approach in which the relevant choice is between alternative real institutional arrangements. In practice, those who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient. Users of the comparative institution approach attempt to assess which alternative real institutional arrangement seems best able to cope with the economic problem […].

Policy discussions about the use of personal data often have “less is more” as a background assumption; that data is overconsumed relative to some hypothetical optimal baseline. This overriding skepticism has been the backdrop for sweeping new privacy regulations, such as the California Consumer Privacy Act (CCPA) and the EU’s General Data Protection Regulation (GDPR).

More recently, as part of the broad pushback against data collection by online firms, some have begun to call for creating property rights in consumers’ personal data or for data to be treated as labor. Prominent backers of the idea include New York City mayoral candidate Andrew Yang and computer scientist Jaron Lanier.

The discussion has escaped the halls of academia and made its way into popular media. During a recent discussion with Tesla founder Elon Musk, comedian and podcast host Joe Rogan argued that Facebook is “one gigantic information-gathering business that’s decided to take all of the data that people didn’t know was valuable and sell it and make f***ing billions of dollars.” Musk appeared to agree.

The animosity exhibited toward data collection might come as a surprise to anyone who has taken Econ 101. Goods ideally end up with those who value them most. A firm finding profitable ways to repurpose unwanted scraps is just the efficient reallocation of resources. This applies as much to personal data as to literal trash.

Unfortunately, in the policy sphere, few are willing to recognize the inherent trade-off between the value of privacy, on the one hand, and the value of various goods and services that rely on consumer data, on the other. Ideally, policymakers would look to markets to find the right balance, which they often can. When the transfer of data is hardwired into an underlying transaction, parties have ample room to bargain.

But this is not always possible. In some cases, transaction costs will prevent parties from bargaining over the use of data. The question is whether such situations are so widespread as to justify the creation of data property rights, with all of the allocative inefficiencies they entail. Critics wrongly assume the solution is both to create data property rights and to allocate them to consumers. But there is no evidence to suggest that, at the margin, heightened user privacy necessarily outweighs the social benefits that new data-reliant goods and services would generate. Recent experience in the worlds of personalized medicine and the fight against COVID-19 help to illustrate this point.

Data Property Rights and Personalized Medicine

The world is on the cusp of a revolution in personalized medicine. Advances such as the improved identification of biomarkers, CRISPR genome editing, and machine learning could usher in a new wave of treatments that markedly improve health outcomes.

Personalized medicine uses information about a person’s own genes or proteins to prevent, diagnose, or treat disease. Genetic-testing companies like 23andMe or Family Tree DNA, with the large troves of genetic information they collect, could play a significant role in helping the scientific community to further medical progress in this area.

However, despite the obvious potential of personalized medicine, many of its real-world applications are still very much hypothetical. While governments could act in any number of ways to accelerate the movement’s progress, recent policy debates have instead focused more on whether to create a system of property rights covering personal genetic data.

Some raise concerns that it is pharmaceutical companies, not consumers, who will reap the monetary benefits of the personalized medicine revolution, and that advances are achieved at the expense of consumers’ and patients’ privacy. They contend that data property rights would ensure that patients earn their “fair” share of personalized medicine’s future profits.

But it’s worth examining the other side of the coin. There are few things people value more than their health. U.S. governmental agencies place the value of a single life at somewhere between $1 million and $10 million. The commonly used quality-adjusted life year metric offers valuations that range from $50,000 to upward of $300,000 per incremental year of life.

It therefore follows that the trivial sums users of genetic-testing kits might derive from a system of data property rights would likely be dwarfed by the value they would enjoy from improved medical treatments. A strong case can be made that policymakers should prioritize advancing the emergence of new treatments, rather than attempting to ensure that consumers share in the profits generated by those potential advances.
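A back-of-the-envelope comparison makes the point. Every figure below, other than the QALY range cited above, is a loudly hypothetical assumption chosen only for illustration:

```python
# Back-of-the-envelope comparison. The QALY value sits inside the $50,000 to
# $300,000 range cited above; the data payment and the probability of a health
# gain are purely hypothetical assumptions for illustration.

QALY_VALUE = 100_000              # $ per quality-adjusted life year (within cited range)
HYPOTHETICAL_DATA_PAYMENT = 20    # assumed annual payout per user from a data property right
PROB_OF_GAINING_ONE_QALY = 0.001  # assumed bump in the chance research yields one extra QALY

expected_health_benefit = PROB_OF_GAINING_ONE_QALY * QALY_VALUE

print(f"Hypothetical data dividend per user: ${HYPOTHETICAL_DATA_PAYMENT}")
print(f"Expected health benefit per user:    ${expected_health_benefit:.0f}")
# Even a one-in-a-thousand chance of one additional healthy year is worth several
# times the assumed data payment; the comparison is about orders of magnitude,
# not precise figures.
```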

These debates drew increased attention last year, when 23andMe signed a strategic agreement with the pharmaceutical company Almirall to license the rights related to an antibody Almirall had developed. Critics pointed out that 23andMe’s customers, whose data had presumably been used to discover the potential treatment, received no monetary benefits from the deal. Journalist Laura Spinney wrote in The Guardian newspaper:

23andMe, for example, asks its customers to waive all claims to a share of the profits arising from such research. But given those profits could be substantial—as evidenced by the interest of big pharma—shouldn’t the company be paying us for our data, rather than charging us to be tested?

In the deal’s wake, some argued that personal health data should be covered by property rights. A cardiologist quoted in Fortune magazine opined: “I strongly believe that everyone should own their medical data—and they have a right to that.” But this strong belief, however widely shared, ignores important lessons that law and economics has to teach about property rights and the role of contractual freedom.

Why Do We Have Property Rights?

Among the many important features of property rights is that they create “excludability,” the ability of economic agents to prevent third parties from using a given item. In the words of law professor Richard Epstein:

[P]roperty is not an individual conception, but is at root a social conception. The social conception is fairly and accurately portrayed, not by what it is I can do with the thing in question, but by who it is that I am entitled to exclude by virtue of my right. Possession becomes exclusive possession against the rest of the world…

Excludability helps to facilitate the trade of goods, offers incentives to create those goods in the first place, and promotes specialization throughout the economy. In short, property rights create a system of exclusion that supports creating and maintaining valuable goods, services, and ideas.

But property rights are not without drawbacks. Rights over physical or intellectual property can confer market power and thus lead to a suboptimal allocation of resources (though this effect is often outweighed by increased ex ante incentives to create and innovate). Similarly, property rights can give rise to thickets that significantly increase the cost of amassing complementary pieces of property. Often cited are the historic (but contested) examples of tolling on the Rhine River or the airplane patent thicket of the early 20th century. Finally, strong property rights might also lead to holdout behavior, which can be addressed through top-down tools, like eminent domain, or private mechanisms, like contingent contracts.

In short, though property rights—whether they cover physical or information goods—can offer vast benefits, there are cases where they might be counterproductive. This is probably why, throughout history, property laws have evolved to strike a reasonable balance between preserving incentives to create goods and ensuring their efficient allocation and use.

Personal Health Data: What Are We Trying to Incentivize?

There are at least three critical questions we should ask about proposals to create property rights over personal health data.

  1. What goods or behaviors would these rights incentivize or disincentivize that are currently over- or undersupplied by the market?
  2. Are goods over- or undersupplied because of insufficient excludability?
  3. Could these rights undermine the efficient use of personal health data?

Much of the current debate centers on data obtained from direct-to-consumer genetic-testing kits. In this context, almost by definition, firms only obtain consumers’ genetic data with their consent. In western democracies, the rights to bodily integrity and to privacy generally make it illegal to administer genetic tests against a consumer or patient’s will. This makes genetic information naturally excludable, so consumers already benefit from what is effectively a property right.

When consumers decide to use a genetic-testing kit, the terms set by the testing firm generally stipulate how their personal data will be used. 23andMe has a detailed policy to this effect, as does Family Tree DNA. In the case of 23andMe, consumers can decide whether their personal information can be used for the purpose of scientific research:

You have the choice to participate in 23andMe Research by providing your consent. … 23andMe Research may study a specific group or population, identify potential areas or targets for therapeutics development, conduct or support the development of drugs, diagnostics or devices to diagnose, predict or treat medical or other health conditions, work with public, private and/or nonprofit entities on genetic research initiatives, or otherwise create, commercialize, and apply this new knowledge to improve health care.

Because this transfer of personal information is hardwired into the provision of genetic-testing services, there is space for contractual bargaining over the allocation of this information. The right to use personal health data will go toward the party that values it most, especially if information asymmetries are weeded out by existing regulations or business practices.

Regardless of data property rights, consumers have a choice: they can purchase genetic-testing services and agree to the provider’s data policy, or they can forgo the services. The service provider cannot obtain the data without entering into an agreement with the consumer. While competition between providers will affect parties’ bargaining positions, and thus the price and terms on which these services are provided, data property rights likely will not.

So, why do consumers transfer control over their genetic data? The main reason is that genetic information is inaccessible and worthless without the addition of genetic-testing services. Consumers must pass through the bottleneck of genetic testing for their genetic data to be revealed and transformed into usable information. It therefore makes sense to transfer the information to the service provider, who is in a much stronger position to draw insights from it. From the consumer’s perspective, the data is not even truly “transferred,” as the consumer had no access to it before the genetic-testing service revealed it. The value of this genetic information is then netted out in the price consumers pay for testing kits.

If personal health data were undersupplied by consumers and patients, testing firms could sweeten the deal and offer them more in return for their data. Existing law also gives firms some ability to appropriate the value of the datasets they assemble: U.S. copyright law covers original compilations of data, while EU law gives 15 years of exclusive protection to the creators of original databases. Legal protections for trade secrets could also play some role. Thus, firms already have some incentives to amass valuable health datasets.

But some critics argue that health data is, in fact, oversupplied. Generally, such arguments assert that agents do not account for the negative privacy externalities suffered by third parties, such as adverse-selection problems in insurance markets. For example, Jay Pil Choi, Doh Shin Jeon, and Byung Cheol Kim argue:

Genetic tests are another example of privacy concerns due to informational externalities. Researchers have found that some subjects’ genetic information can be used to make predictions of others’ genetic disposition among the same racial or ethnic category.  … Because of practical concerns about privacy and/or invidious discrimination based on genetic information, the U.S. federal government has prohibited insurance companies and employers from any misuse of information from genetic tests under the Genetic Information Nondiscrimination Act (GINA).

But if these externalities exist (most of the examples cited by scholars are hypothetical), they are likely dwarfed by the tremendous benefits that could flow from the use of personal health data. Put differently, the assertion that “excessive” data collection may create privacy harms should be weighed against the possibility that the same collection may also lead to socially valuable goods and services that produce positive externalities.

In any case, data property rights would do little to limit these potential negative externalities. Consumers and patients are already free to agree to terms that allow or prevent their data from being resold to insurers. It is not clear how data property rights would alter the picture.

Proponents of data property rights often claim they should be associated with some form of collective bargaining. The idea is that consumers might otherwise fail to receive their “fair share” of genetic-testing firms’ revenue. But what critics portray as asymmetric bargaining power might simply be the market signaling that genetic-testing services are in high demand, with room for competitors to enter the market. Shifting rents from genetic-testing services to consumers would undermine this valuable price signal and, ultimately, diminish the quality of the services.

Perhaps more importantly, to the extent that they limit the supply of genetic information—for example, because firms are forced to pay higher prices for data and thus acquire less of it—data property rights might hinder the emergence of new treatments. If genetic data is a key input to develop personalized medicines, adopting policies that, in effect, ration the supply of that data is likely misguided.

Even if policymakers do not directly put their thumb on the scale, data property rights could still harm pharmaceutical innovation. If existing privacy regulations are any guide—notably, the previously mentioned GDPR and CCPA, as well as the federal Health Insurance Portability and Accountability Act (HIPAA)—such rights might increase red tape for pharmaceutical innovators. Privacy regulations routinely limit firms’ ability to put collected data to new and previously unforeseen uses. They also limit parties’ contractual freedom when it comes to gathering consumers’ consent.

At the margin, data property rights would make it more costly for firms to amass socially valuable datasets. This would effectively move the personalized medicine space further away from a world of permissionless innovation, thus slowing down medical progress.

In short, there is little reason to believe health-care data is misallocated. Proposals to reallocate rights to such data based on idiosyncratic distributional preferences threaten to stifle innovation in the name of privacy harms that remain mostly hypothetical.

Data Property Rights and COVID-19

The trade-off between users’ privacy and the efficient use of data also has important implications for the fight against COVID-19. Since the beginning of the pandemic, several promising initiatives have been thwarted by privacy regulations and concerns about the use of personal data. This has potentially prevented policymakers, firms, and consumers from putting information to its optimal social use. High-profile examples have included contact-tracing apps and so-called green passes.

Each of these cases may involve genuine privacy risks. But to the extent that they do, those risks must be balanced against the potential benefits to society. If privacy concerns prevent us from deploying contact tracing or green passes at scale, we should question whether the privacy benefits are worth the cost. The same is true for rules that prohibit amassing more data than is strictly necessary, such as the data-minimization obligations included in regulations like the GDPR.

If our initial question was instead whether the benefits of a given data-collection scheme outweighed its potential costs to privacy, incentives could be set such that competition between firms would reduce the amount of data collected—at least, where minimized data collection is, indeed, valuable to users. Yet these considerations are almost completely absent in the COVID-19-related privacy debates, as they are in the broader privacy debate. Against this backdrop, the case for personal data property rights is dubious.

Conclusion

The key question is whether policymakers should make it easier or harder for firms and public bodies to amass large sets of personal data. This requires asking whether personal data is currently under- or over-provided, and whether the additional excludability that would be created by data property rights would offset their detrimental effect on innovation.

Swaths of personal data currently lie untapped. With the proper incentive mechanisms in place, this idle data could be mobilized to develop personalized medicines and to fight the COVID-19 outbreak, among many other valuable uses. By making such data more onerous to acquire, property rights in personal data might stifle the assembly of novel datasets that could be used to build innovative products and services.

On the other hand, when dealing with diffuse and complementary data sources, transaction costs become a real issue and the initial allocation of rights can matter a great deal. In such cases, unlike the genetic-testing kits example, it is not certain that users will be able to bargain with firms, especially where their personal information is exchanged by third parties.

If optimal reallocation is unlikely, should property rights go to the person covered by the data or to the collectors (potentially subject to user opt-outs)? Proponents of data property rights assume the first option is superior. But if the goal is to produce groundbreaking new goods and services, granting rights to data collectors might be a superior solution. Ultimately, this is an empirical question.

As Richard Epstein puts it, the goal is to “minimize the sum of errors that arise from expropriation and undercompensation, where the two are inversely related.” Rather than approach the problem with the preconceived notion that initial rights should go to users, policymakers should ensure that data flows to those economic agents who can best extract information and knowledge from it.

As things stand, there is little to suggest that the trade-offs favor creating data property rights. This is not an argument for requisitioning personal information or preventing parties from transferring data as they see fit, but simply for letting markets function, unfettered by misguided public policies.

The Federal Trade Commission and 46 state attorneys general (along with the District of Columbia and the Territory of Guam) filed their long-awaited complaints against Facebook Dec. 9. The crux of the arguments in both lawsuits is that Facebook pursued a series of acquisitions over the past decade that aimed to cement its prominent position in the “personal social media networking” market. 

Make no mistake, if successfully prosecuted, these cases would represent one of the most fundamental shifts in antitrust law since passage of the Hart-Scott-Rodino Act in 1976. That law required antitrust authorities to be notified of proposed mergers and acquisitions that exceed certain value thresholds, essentially shifting the paradigm for merger enforcement from ex-post to ex-ante review.

While the prevailing paradigm does not explicitly preclude antitrust enforcers from taking a second bite of the apple via ex-post enforcement, it has created a widespread assumption among firms that regulatory clearance of a merger makes subsequent antitrust proceedings extremely unlikely.

Indeed, the very point of ex-ante merger regulations is that ex-post enforcement, notably in the form of breakups, has tremendous social costs. It can scupper economies of scale and network effects on which both consumers and firms have come to rely. Moreover, the threat of costly subsequent legal proceedings will hang over firms’ pre- and post-merger investment decisions, and may thus reduce incentives to invest.

With their complaints, the FTC and state AGs threaten to undo this status quo. Even if current antitrust law allows it, pursuing this course of action threatens to quash the implicit assumption that regulatory clearance generally shields a merger from future antitrust scrutiny. Ex-post review of mergers and acquisitions does have some positive features, but the Facebook complaints fail to grapple with these complicated trade-offs. This oversight could hamper tech and other U.S. industries.

Mergers and uncertainty

Merger decisions are probabilistic. Of the thousands of corporate acquisitions each year, only a handful end up deemed “successful.” These relatively few success stories have to pay for the duds in order to preserve the incentive to invest.

Switching from ex-ante to ex-post review enables authorities to focus their attention on the most lucrative deals. It stands to reason that they will not want to launch ex-post antitrust proceedings against bankrupt firms whose assets have already been stripped. Instead, as with the Facebook complaint, authorities are far more likely to pursue high-profile cases that boost their political capital.

This would be unproblematic if:

  1. Authorities committed to prosecuting only anticompetitive mergers ex post; and
  2. Parties could reasonably anticipate whether their deals would be deemed anticompetitive in the future.

If those were the conditions, ex-post enforcement would merely reduce the incentive to partake in problematic mergers. It would leave welfare-enhancing deals unscathed. But where firms cannot know ex ante that a given deal will be deemed anticompetitive, the associated error costs should weigh against prosecuting such mergers ex post, even if such enforcement might appear desirable. The deterrent effect that would arise from such prosecutions would be applied by the market to all mergers, including efficient ones. Put differently, authorities might get the ex-post assessment right in one case, such as the Facebook proceedings, but the bigger picture remains that they could be wrong in many other cases. Firms will perceive this threat, and it may hinder their investments.

There is also reason to doubt that either of the ideal conditions for ex-post enforcement could realistically be met in practice. Ex-ante merger proceedings already involve significant uncertainty: antitrust-merger clearance decisions routinely move the merging parties’ stock prices. If management and investors knew whether their transactions would be cleared, those effects would be priced in when a deal is announced, not when it is cleared or blocked. Likewise, if firms knew a given merger would be blocked, they would not waste their resources pursuing it. In short, merging parties cannot reliably predict how their deals will be assessed ex ante.

Unless the answer is markedly different for ex-post merger reviews, authorities should proceed with caution. If parties cannot properly self-assess their deals, the threat of ex-post proceedings will weigh on pre- and post-merger investments (a breakup effectively amounts to expropriating investments that are dependent upon the divested assets). 

Furthermore, because authorities will likely focus ex-post reviews on the most lucrative deals, their incentive effects can be particularly pronounced. Parties may fear that the most successful mergers will be broken up. This could have wide-reaching effects for all merging firms that do not know whether they might become “the next Facebook.” 

Accordingly, for ex-post merger reviews to be justified, it is essential that:

  1. Their outcomes be predictable for the parties; and
  2. Analyzing the deals after the fact leads to better decision-making (fewer false acquittals and convictions) than ex-ante reviews would yield.

If these conditions are not in place, ex-post assessments will needlessly weigh down innovation, investment and procompetitive merger activity in the economy.

Hindsight does not disentangle efficiency from market power

So, could ex-post merger reviews be so predictable and effective as to alleviate the uncertainties described above, along with the costs they entail? 

Based on the recently filed Facebook complaints, the answer appears to be no. We simply do not know what the counterfactual to Facebook’s acquisitions of Instagram and WhatsApp would look like. Hindsight does not tell us whether Facebook’s acquisitions led to efficiencies that allowed it to thrive (a pro-competitive scenario), or whether Facebook merely used these deals to kill off competitors and maintain its monopoly (an anticompetitive scenario).

As Sam Bowman and I have argued elsewhere, when discussing the leaked emails that spurred the current proceedings and on which the complaints rely heavily:

These email exchanges may not paint a particularly positive picture of Zuckerberg’s intent in doing the merger, and it is possible that at the time they may have caused antitrust agencies to scrutinise the merger more carefully. But they do not tell us that the acquisition was ultimately harmful to consumers, or about the counterfactual of the merger being blocked. While we know that Instagram became enormously popular in the years following the merger, it is not clear that it would have been just as successful without the deal, or that Facebook and its other products would be less popular today. 

Moreover, it fails to account for the fact that Facebook had the resources to quickly scale Instagram up to a level that provided immediate benefits to an enormous number of users, instead of waiting for the app to potentially grow to such scale organically.

In fact, contrary to what some have argued, hindsight might even complicate matters (again from Sam and me):

Today’s commentators have the benefit of hindsight. This inherently biases contemporary takes on the Facebook/Instagram merger. For instance, it seems almost self-evident with hindsight that Facebook would succeed and that entry in the social media space would only occur at the fringes of existing platforms (the combined Facebook/Instagram platform) – think of the emergence of TikTok. However, at the time of the merger, such an outcome was anything but a foregone conclusion.

In other words, ex-post reviews will, by definition, focus on mergers where today’s outcomes seem preordained — when, in fact, they were probabilistic. This will skew decisions toward finding anticompetitive conduct. If authorities think that Instagram was destined to become great, they are more likely to find that Facebook’s acquisition was anticompetitive because they implicitly dismiss the idea that it was the merger itself that made Instagram great.

Authorities might also confuse correlation for causality. For instance, the state AGs’ complaint ties Facebook’s acquisitions of Instagram and WhatsApp to the degradation of these services, notably in terms of privacy and advertising loads. As the complaint lays out:

127. Following the acquisition, Facebook also degraded Instagram users’ privacy by matching Instagram and Facebook Blue accounts so that Facebook could use information that users had shared with Facebook Blue to serve ads to those users on Instagram. 

180. Facebook’s acquisition of WhatsApp thus substantially lessened competition […]. Moreover, Facebook’s subsequent degradation of the acquired firm’s privacy features reduced consumer choice by eliminating a viable, competitive, privacy-focused option

But these changes may have nothing to do with Facebook’s acquisition of these services. At the time, nearly all tech startups focused on growth over profits in their formative years. It should be no surprise that the platforms imposed higher “prices” on users after their acquisition by Facebook; they were maturing. Further monetizing their platforms would have been the logical next step, even absent the mergers.

It is just as hard to determine whether post-merger developments actually harmed consumers. For example, the FTC complaint argues that Facebook stopped developing its own photo-sharing capabilities after the Instagram acquisition, which the commission cites as evidence that the deal neutralized a competitor:

98. Less than two weeks after the acquisition was announced, Mr. Zuckerberg suggested canceling or scaling back investment in Facebook’s own mobile photo app as a direct result of the Instagram deal.

But it is not obvious that Facebook or consumers would have gained anything from the duplication of R&D efforts had Facebook continued to develop its own photo-sharing app. More importantly, this discontinuation is not evidence that Instagram could have overthrown Facebook. In other words, the fact that Instagram provided better photo-sharing capabilities does not necessarily imply that it could also provide a versatile platform that posed a threat to Facebook.

Finally, if Instagram’s stellar growth and photo-sharing capabilities were certain to overthrow Facebook’s monopoly, why do the plaintiffs ignore the competitive threat posed by the likes of TikTok today? Neither of the complaints makes any mention of TikTok, even though it currently has well over 1 billion monthly active users. The FTC and state AGs would have us believe that Instagram posed an existential threat to Facebook in 2012 but that Facebook faces no such threat from TikTok today. It is exceedingly unlikely that both these statements could be true, yet both are essential to the plaintiffs’ case.

Some appropriate responses

None of this is to say that ex-post review of mergers and acquisitions should be categorically out of the question. Rather, such proceedings should be initiated only with appropriate caution and consideration for their broader consequences.

When undertaking reviews of past mergers, authorities do not necessarily need to impose remedies every time they find a merger was wrongly cleared. The findings of these ex-post reviews could simply be used to adjust existing merger thresholds and presumptions. This would effectively create a feedback loop where false acquittals lead to meaningful policy reforms in the future.

At the very least, it may be appropriate for policymakers to set a higher bar for findings of anticompetitive harm and imposition of remedies in such cases. This would reduce the undesirable deterrent effects that such reviews may otherwise entail, while reserving ex-post remedies for the most problematic cases.

Finally, a tougher system of ex-post review could be used to allow authorities to take more risks during ex-ante proceedings. Indeed, when in doubt, they could effectively experiment by allowing marginal mergers to proceed, with the understanding that bad decisions could be clawed back afterwards. In that regard, it might also be useful to set precise deadlines for such reviews and to outline the types of concerns that might prompt scrutiny or warrant divestitures.

In short, some form of ex-post review may well be desirable. It could help antitrust authorities to learn what works and subsequently to make useful changes to ex-ante merger-review systems. But this would necessitate deep reflection on the many ramifications of ex-post reassessments. Legislative reform, or at the very least the publication of guidance documents by authorities, seems like an essential first step.

Unfortunately, this is the exact opposite of what the Facebook proceedings would achieve. Plaintiffs have chosen to ignore these complex trade-offs in pursuit of a case with extremely dubious underlying merits. Success for the plaintiffs would thus prove a Pyrrhic victory, destroying far more than it would achieve.

What is a search engine?

Dirk Auer — 21 October 2020

What is a search engine? This might seem like an innocuous question, but it lies at the heart of the antitrust complaint brought against Google by the US Department of Justice and state attorneys general, as well as the European Commission’s Google Search and Android decisions. It is also central to a report published by the UK’s Competition & Markets Authority (“CMA”). To varying degrees, all of these proceedings are premised on the assumption that Google enjoys a monopoly/dominant position over online search. But things are not quite this simple.

Despite years of competition decisions and policy discussions, there are still many unanswered questions concerning the operation of search markets. For example, it is still unclear exactly which services compete against Google Search, and how this might evolve in the near future. Likewise, there has only been limited scholarly discussion as to how a search engine monopoly would exert its market power. In other words, what does a restriction of output look like on a search platform — particularly on the user side?

Answering these questions will be essential if authorities wish to successfully bring an antitrust suit against Google for conduct involving search. Indeed, as things stand, these uncertainties greatly complicate efforts (i) to rigorously define the relevant market(s) in which Google Search operates, (ii) to identify potential anticompetitive effects, and (iii) to apply the quantitative tools that usually underpin antitrust proceedings.

In short, as explained below, antitrust authorities and other plaintiffs have their work cut out if they are to prevail in court.

Consumers demand information 

For a start, identifying the competitive constraints faced by Google presents authorities and plaintiffs with an important challenge.

Even proponents of antitrust intervention recognize that the market for search is complex. For instance, the DOJ and state AGs argue that Google dominates a narrow market for “general search services” — as opposed to specialized search services, content sites, social networks, and online marketplaces, etc. The EU Commission reached the same conclusion in its Google Search decision. Finally, commenting on the CMA’s online advertising report, Fiona Scott Morton and David Dinielli argue that: 

General search is a relevant market […]

In this way, an individual specialized search engine competes with a small fraction of what the Google search engine does, because a user could employ either for one specific type of search. The CMA concludes that, from the consumer standpoint, a specialized search engine exerts only a limited competitive constraint on Google.

(Note that the CMA stressed that it did not perform a market definition exercise: “We have not carried out a formal market definition assessment, but have instead looked at competitive constraints across the sector…”).

In other words, the above critics recognize that search engines are merely tools that can serve multiple functions, and that competitive constraints may be different for some of these. But this has wider ramifications that policymakers have so far overlooked. 

When quizzed about his involvement with Neuralink (a company working on implantable brain–machine interfaces), Elon Musk famously argued that human beings already share a near-symbiotic relationship with machines (a point already made by others):

The purpose of Neuralink [is] to create a high-bandwidth interface to the brain such that we can be symbiotic with AI. […] Because we have a bandwidth problem. You just can’t communicate through your fingers. It’s just too slow.

Commentators were quick to spot the implications of this technology for the search industry:

Imagine a world when humans would no longer require a device to search for answers on the internet, you just have to think of something and you get the answer straight in your head from the internet.

As things stand, this example still belongs to the realm of sci-fi. But it neatly illustrates a critical feature of the search industry. 

Search engines are just the latest iteration (but certainly not the last) of technology that enables human beings to access specific pieces of information more rapidly. Before the advent of online search, consumers used phone directories, paper maps, encyclopedias, and other tools to find the information they were looking for. They would read newspapers and watch television to know the weather forecast. They went to public libraries to undertake research projects (some still do), etc.

And, in some respects, the search engine is already obsolete for many of these uses. For instance, virtual assistants like Alexa, Siri, Cortana and Google’s own Google Assistant offering can perform many functions that were previously the preserve of search engines: checking the weather, finding addresses and asking for directions, looking up recipes, answering general knowledge questions, finding goods online, etc. Granted, these virtual assistants partly rely on existing search engines to complete tasks. However, Google is much less dominant in this space, and search engines are not the sole source on which virtual assistants rely to generate results. Amazon’s Alexa provides a fitting example (here and here).

Along similar lines, it has been widely reported that 60% of online shoppers start their search on Amazon, while only 26% opt for Google Search. In other words, Amazon’s ability to rapidly show users the product they are looking for somewhat alleviates the need for a general search engine. In turn, this certainly constrains Google’s behavior to some extent. And much of the same applies to other websites that provide a specific type of content (think of Twitter, LinkedIn, Tripadvisor, Booking.com, etc.)

Finally, it is also revealing that the most common searches on Google are, in all likelihood, made to reach other websites — a function for which competition is literally endless.

The upshot is that Google Search and other search engines perform a bundle of functions. Most of these can be done via alternative means, and this will increasingly be the case as technology continues to advance. 

This is all the more important given that the vast majority of search engine revenue derives from roughly 30 percent of search terms (notably those that are linked to product searches). The remaining search terms are effectively a loss leader. And these profitable searches also happen to be those where competition from alternative means is, in all likelihood, the strongest (this includes competition from online retail platforms, and online travel agents like Booking.com or Kayak, but also from referral sites, direct marketing, and offline sources). In turn, this undermines US plaintiffs’ claims that Google faces little competition from rivals like Amazon, because they don’t compete for the entirety of Google’s search results (in other words, Google might face strong competition for the most valuable ads):

108. […] This market share understates Google’s market power in search advertising because many search-advertising competitors offer only specialized search ads and thus compete with Google only in a limited portion of the market. 

Critics might mistakenly take the above for an argument that Google has no market power because competition is “just a click away”. But the point is more subtle, and has important implications as far as market definition is concerned.

Authorities should not define the search market by arguing that no other rival is quite like Google (or one of its rivals) — as the DOJ and state AGs did in their complaint:

90. Other search tools, platforms, and sources of information are not reasonable substitutes for general search services. Offline and online resources, such as books, publisher websites, social media platforms, and specialized search providers such as Amazon, Expedia, or Yelp, do not offer consumers the same breadth of information or convenience. These resources are not “one-stop shops” and cannot respond to all types of consumer queries, particularly navigational queries. Few consumers would find alternative sources a suitable substitute for general search services. Thus, there are no reasonable substitutes for general search services, and a general search service monopolist would be able to maintain quality below the level that would prevail in a competitive market. 

And as the EU Commission did in the Google Search decision:

(162) For the reasons set out below, there is, however, limited demand side substitutability between general search services and other online services. […]

(163) There is limited substitutability between general search services and content sites. […]

(166) There is also limited substitutability between general search services and specialised search services. […]

(178) There is also limited substitutability between general search services and social networking sites.

Ad absurdum, if consumers suddenly decided to access information via other means, Google could be the only firm to provide general search results and yet have absolutely no market power. 

Take the example of Yahoo: Despite arguably remaining the most successful “web directory”, it likely lost any market power that it had when Google launched a superior — and significantly more successful — type of search engine. Google Search may not have provided a complete, literal directory of the web (as did Yahoo), but it offered users faster access to the information they wanted. In short, the Yahoo example shows that being unique is not equivalent to having market power. Accordingly, any market definition exercise that merely focuses on the idiosyncrasies of firms is likely to overstate their actual market power. 

Given what precedes, the question that authorities should ask is thus whether Google Search (or another search engine) performs so many unique functions that it may be in a position to restrict output. So far, no one appears to have convincingly answered this question.

Similar uncertainties surround the question of how a search engine might restrict output, especially on the user side of the search market. Accordingly, authorities will struggle to produce evidence (i) that Google has market power, especially on the user side of the market, and (ii) that its behavior has anticompetitive effects.

Consider the following:

The SSNIP test (a “small but significant non-transitory increase in price” test, which is the standard method of defining markets in antitrust proceedings) is inapplicable to the consumer side of search platforms. Indeed, it is simply impossible to apply a hypothetical 10% price increase to goods that are given away for free.
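For readers unfamiliar with the test’s mechanics, the sketch below reproduces the standard critical-loss arithmetic that typically accompanies a SSNIP exercise, and shows why it has nothing to grip onto when the price is zero. The numbers are illustrative assumptions, not figures drawn from any of the Google proceedings.

```python
# A minimal sketch of the critical-loss arithmetic behind a SSNIP exercise.
# All numbers are illustrative assumptions, not figures from the Google cases.

def critical_loss(price_increase: float, margin: float) -> float:
    """Fraction of sales a hypothetical monopolist could afford to lose before
    a given price increase becomes unprofitable (both inputs as fractions)."""
    return price_increase / (price_increase + margin)

# An ordinary paid product: a 10% increase with a 40% gross margin.
print(critical_loss(0.10, 0.40))  # 0.2 -> profitable if fewer than 20% of sales are lost

# The user side of a search engine: the price is zero, so a "10% increase" is
# zero dollars and the margin over price is undefined. The test cannot be run,
# which is the point made in the text above.
```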

This raises a deeper question: how would a search engine exercise its market power? 

For a start, it seems unlikely that it would start charging fees to its users. For instance, empirical research pertaining to the magazine industry (also an ad-based two-sided market) suggests that increased concentration does not lead to higher magazine prices. Minjae Song notably finds that:

Taking the advantage of having structural models for both sides, I calculate equilibrium outcomes for hypothetical ownership structures. Results show that when the market becomes more concentrated, copy prices do not necessarily increase as magazines try to attract more readers.

It is also far from certain that a dominant search engine would necessarily increase the number of adverts it displays. To the contrary, market power on the advertising side of the platform might lead search engines to decrease the number of advertising slots that are available (i.e., reducing advertising output), thus showing fewer adverts to users.

Finally, it is not obvious that market power would lead search engines to significantly degrade their product (as this could ultimately hurt ad revenue). For example, empirical research by Avi Goldfarb and Catherine Tucker suggests that there is some limit to the type of adverts that search engines could profitably impose upon consumers. They notably find that ads that are both obtrusive and targeted decrease subsequent purchases:

Ads that match both website content and are obtrusive do worse at increasing purchase intent than ads that do only one or the other. This failure appears to be related to privacy concerns: the negative effect of combining targeting with obtrusiveness is strongest for people who refuse to give their income and for categories where privacy matters most.

The preceding paragraphs find some support in the theoretical literature on two-sided markets, which suggests that competition on the user side of search engines is likely to be particularly intense and beneficial to consumers (because users are more likely to single-home than advertisers, and because each additional user creates a positive externality on the advertising side of the market). For instance, Jean-Charles Rochet and Jean Tirole find that:

The single-homing side receives a large share of the joint surplus, while the multi-homing one receives a small share.

This is just a restatement of Mark Armstrong’s “competitive bottlenecks” theory:

Here, if it wishes to interact with an agent on the single-homing side, the multi-homing side has no choice but to deal with that agent’s chosen platform. Thus, platforms have monopoly power over providing access to their single-homing customers for the multi-homing side. This monopoly power naturally leads to high prices being charged to the multi-homing side, and there will be too few agents on this side being served from a social point of view (Proposition 4). By contrast, platforms do have to compete for the single-homing agents, and high profits generated from the multi-homing side are to a large extent passed on to the single-homing side in the form of low prices (or even zero prices).

All of this is not to suggest that Google Search has no market power, or that monopoly is necessarily less problematic in the search engine industry than in other markets. 

Instead, the argument is that analyzing competition on the user side of search platforms is unlikely to yield dispositive evidence of market power or anticompetitive effects. This is because market power is hard to measure on this side of the market, and because even a monopoly platform might not significantly restrict user output. 

That might explain why the DOJ and state AGs’ analysis of anticompetitive effects is so limited. Take the following paragraph (provided without further supporting evidence):

167. By restricting competition in general search services, Google’s conduct has harmed consumers by reducing the quality of general search services (including dimensions such as privacy, data protection, and use of consumer data), lessening choice in general search services, and impeding innovation. 

Given these inherent difficulties, antitrust investigators would do better to focus on the side of those platforms where mainstream IO tools are much easier to apply and where a dominant search engine would likely restrict output: the advertising market. Not only is it the market where search engines are most likely to exert their market power (thus creating a deadweight loss), but — because it involves monetary transactions — this side of the market lends itself to the application of traditional antitrust tools.  

Looking at the right side of the market

Finally, and unfortunately for Google’s critics, available evidence suggests that its position on the (online) advertising market might not meet the requirements necessary to bring a monopolization case (at least in the US).

For a start, online advertising appears to exhibit the prima facie signs of a competitive market. As Geoffrey Manne, Sam Bowman and Eric Fruits have argued:

Over the past decade, the price of advertising has fallen steadily while output has risen. Spending on digital advertising in the US grew from $26 billion in 2010 to nearly $130 billion in 2019, an average increase of 20% a year. Over the same period the Producer Price Index for Internet advertising sales declined by nearly 40%. The rising spending in the face of falling prices indicates the number of ads bought and sold increased by approximately 27% a year. Since 2000, advertising spending has been falling as a share of GDP, with online advertising growing as a share of that. The combination of increasing quantity, decreasing cost, and increasing total revenues are consistent with a growing and increasingly competitive market.
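As a quick check that the quoted figures hang together, the sketch below reproduces the implied growth rates using only the numbers cited in the passage (2010-2019 spending and the roughly 40% decline in the Producer Price Index); nothing here comes from outside the quote.

```python
# Back-of-the-envelope check of the quoted figures over 2010-2019 (9 years).
# Inputs are only the numbers cited in the passage above.

years = 9
spending_growth = (130 / 26) ** (1 / years) - 1       # spending: $26bn -> ~$130bn
price_change = (1 - 0.40) ** (1 / years) - 1          # ad-sales PPI: roughly -40% overall
quantity_growth = (1 + spending_growth) / (1 + price_change) - 1

print(f"spending: ~{spending_growth:.0%} per year")   # ~20%
print(f"prices:   ~{price_change:.0%} per year")      # ~-6%
print(f"quantity: ~{quantity_growth:.0%} per year")   # ~27%, matching the quote
```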

Second, empirical research suggests that the market might need to be widened to include offline advertising. For instance, Avi Goldfarb and Catherine Tucker show that there can be important substitution effects between online and offline advertising channels:

Using data on the advertising prices paid by lawyers for 139 Google search terms in 195 locations, we exploit a natural experiment in “ambulance-chaser” regulations across states. When lawyers cannot contact clients by mail, advertising prices per click for search engine advertisements are 5%–7% higher. Therefore, online advertising substitutes for offline advertising.

Of course, a careful examination of the advertising industry could also lead authorities to define a narrower relevant market. For example, the DOJ and state AG complaint argued that Google dominated the “search advertising” market:

97. Search advertising in the United States is a relevant antitrust market. The search advertising market consists of all types of ads generated in response to online search queries, including general search text ads (offered by general search engines such as Google and Bing) […] and other, specialized search ads (offered by general search engines and specialized search providers such as Amazon, Expedia, or Yelp). 

Likewise, the European Commission concluded that Google dominated the market for “online search advertising” in the AdSense case (though the full decision has not yet been made public). Finally, the CMA’s online platforms report found that display and search advertising belonged to separate markets. 

But these are empirical questions that could dispositively be answered by applying traditional antitrust tools, such as the SSNIP test. And yet, there is no indication that the authorities behind the US complaint undertook this type of empirical analysis (and until its AdSense decision is made public, it is not clear that the EU Commission did so either). Accordingly, there is no guarantee that US courts will go along with the DOJ and state AGs’ findings.

In short, it is far from certain that Google currently enjoys an advertising monopoly, especially if the market is defined more broadly than that for “search advertising” (or the even narrower market for “General Search Text Advertising”). 

Concluding remarks

The preceding paragraphs have argued that a successful antitrust case against Google is anything but a foregone conclusion. In order to successfully bring a suit, authorities would notably need to figure out just what market it is that Google is monopolizing. In turn, that would require a finer understanding of what competition, and monopoly, look like in the search and advertising industries.

The writing is on the wall for Big Tech: regulation is coming. At least, that is what the House Judiciary Committee’s report into competition in digital markets would like us to believe. 

The Subcommittee’s Majority members, led by Rhode Island’s Rep. David Cicilline, are calling for a complete overhaul of America’s antitrust and regulatory apparatus. This would notably entail a breakup of America’s largest tech firms, by prohibiting them from operating digital platforms and competing on them at the same time. Unfortunately, the report ignores the tremendous costs that such proposals would impose upon consumers and companies alike.

For several years now, there has been growing pushback against the perceived “unfairness” of America’s tech industry: of large tech platforms favoring their own products at the expense of entrepreneurs who use their platforms; of incumbents acquiring startups to quash competition; of platforms overcharging companies like Epic Games, Spotify, and the media, just because they can; and of tech companies that spy on their users and use that data to sell them things they don’t need.

But this portrayal of America’s tech industry obscures an inconvenient possibility: supposing that these perceived ills even occur, there is every chance that the House’s reforms would merely exacerbate the status quo. The House report gives short shrift to this eventuality, but it should not.

Over the last decade, the tech sector has been the crown jewel of America’s economy. And while firms like Amazon, Google, Facebook, and Apple may have grown at a blistering pace, countless others have flourished in their wake.

Google and Apple’s app stores have given rise to a booming mobile software industry. Platforms like YouTube and Instagram have created new venues for advertisers and ushered in a new generation of entrepreneurs including influencers, podcasters, and marketing experts. Social media platforms like Facebook and Twitter have disintermediated the production of news media, allowing ever more people to share their ideas with the rest of the world (mostly for better, and sometimes for worse). Amazon has opened up new markets for thousands of retailers, some of which are now going public. The recent $3.4 billion Snowflake IPO may have been the biggest public offering of a tech firm no one has heard of.

The trillion dollar question is whether it is possible to regulate this thriving industry without stifling its unparalleled dynamism. If Rep. Cicilline’s House report is anything to go by, the answer is a resounding no.

Acquisition by a Big Tech firm is one way for startups to rapidly scale and reach a wider audience, while allowing early investors to make a quick exit. Self-preferencing can enable platforms to tailor their services to the needs and desires of users (Apple and Google’s pre-installed app suites are arguably what drive users to opt for their devices). Excluding bad apples from a platform is essential to gain users’ trust and build a strong reputation. Finally, in the online retail space, copying rival products via house brands provides consumers with competitively priced goods and helps new distributors enter the market. 

All of these practices would either be heavily scrutinized or outright banned under the Subcommittee’s proposed reforms. Beyond its direct impact on the quality of online goods and services, this huge shift would threaten the climate of permissionless innovation that has arguably been key to Silicon Valley’s success.

More fundamentally, these reforms would mostly protect certain privileged rivals at the expense of the wider industry. Take Apple’s App Store: Epic Games and others have complained about the 30% commission charged by Apple for in-app purchases (as is standard throughout the industry). Yet, as things stand, roughly 80% of apps pay no commission at all. Tackling this 30% commission — for instance, by allowing developers to bypass Apple’s in-app payment processing — would almost certainly result in larger fees for small developers. In short, regulation could significantly impede smaller firms.

Fortunately, there is another way. For decades, antitrust law — guided by the judge-made consumer welfare standard — has been the cornerstone of economic policy in the US. During that time, America built a tech industry that is the envy of the world. This should give pause to would-be reformers. There is a real chance overbearing regulation will permanently hamper America’s tech industry. With competition from China more intense than ever, it is a risk that the US cannot afford to take.

Apple’s legal team will be relieved that “you reap what you sow” is just a proverb. After a long-running antitrust battle against Qualcomm unsurprisingly ended in failure, Apple now faces antitrust accusations of its own (most notably from Epic Games). Somewhat paradoxically, this turn of events might cause Apple to see its previous defeat in a new light. Indeed, the well-established antitrust principles that scuppered Apple’s challenge against Qualcomm will now be the rock upon which it builds its legal defense.

But while Apple’s reversal of fortunes might seem anecdotal, it neatly illustrates a fundamental – and often overlooked – principle of antitrust policy: Antitrust law is about maximizing consumer welfare. Accordingly, the allocation of surplus between two companies is only incidentally relevant to antitrust proceedings, and it certainly is not a goal in and of itself. In other words, antitrust law is not about protecting David from Goliath.

Jockeying over the distribution of surplus

Or at least that is the theory. In practice, however, most antitrust cases are but small parts of much wider battles in which corporations use courts and regulators to jockey for market position and/or tilt the distribution of surplus in their favor. The Microsoft competition suits brought by the DOJ and the European Commission partly originated from complaints, and lobbying, by Sun Microsystems, Novell, and Netscape. Likewise, the European Commission’s case against Google was prompted by accusations from Microsoft and Oracle, among others. The European Intel case was initiated following a complaint by AMD. The list goes on.

The last couple of years have witnessed a proliferation of antitrust suits that are emblematic of this type of power tussle. For instance, Apple has been notoriously industrious in using the court system to lower the royalties that it pays to Qualcomm for LTE chips. One of the focal points of Apple’s discontent was Qualcomm’s policy of basing royalties on the end-price of devices (Qualcomm charged iPhone manufacturers a 5% royalty rate on their handset sales – and Apple received further rebates):

“The whole idea of a percentage of the cost of the phone didn’t make sense to us,” [Apple COO Jeff Williams] said. “It struck at our very core of fairness. At the time we were making something really really different.”

This pricing dispute not only gave rise to high-profile court cases, it also led Apple to lobby Standard Developing Organizations (“SDOs”) in a partly successful attempt to make them amend their patent policies, so as to prevent this type of pricing. 

However, in a highly ironic turn of events, Apple now finds itself on the receiving end of strikingly similar allegations. At issue is the 30% commission that Apple charges for in-app purchases on the iPhone and iPad. These “high” commissions led several companies to lodge complaints with competition authorities (Spotify and Facebook, in the EU) and to file antitrust suits against Apple (Epic Games, in the US).

Of course, these complaints are couched in more sophisticated, and antitrust-relevant, reasoning. But that doesn’t alter the fact that these disputes are ultimately driven by firms trying to tilt the allocation of surplus in their favor (for a more detailed explanation, see Apple and Qualcomm).

Pushback from courts: The Qualcomm case

Against this backdrop, a string of recent cases sends a clear message to would-be plaintiffs: antitrust courts will not be drawn into rent allocation disputes that have no bearing on consumer welfare. 

The best example of this judicial trend is Qualcomm’s victory before the US Court of Appeals for the 9th Circuit. The case centered on the royalties that Qualcomm charged OEMs for its Standard Essential Patents (SEPs). Both the district court and the FTC found that Qualcomm had deployed a series of tactics (rebates, refusals to deal, etc.) that enabled it to circumvent its FRAND pledges.

However, the Court of Appeals was not convinced. It found neither consumer harm nor any recognizable antitrust infringement. Instead, it held that the dispute at hand was essentially a matter of contract law:

To the extent Qualcomm has breached any of its FRAND commitments, a conclusion we need not and do not reach, the remedy for such a breach lies in contract and patent law. 

This is not surprising. From the outset, numerous critics pointed out that the case lay well beyond the narrow confines of antitrust law. The scathing dissenting statement written by Commissioner Maureen Ohlhausen is revealing:

[I]n the Commission’s 2-1 decision to sue Qualcomm, I face an extraordinary situation: an enforcement action based on a flawed legal theory (including a standalone Section 5 count) that lacks economic and evidentiary support, that was brought on the eve of a new presidential administration, and that, by its mere issuance, will undermine U.S. intellectual property rights in Asia and worldwide. These extreme circumstances compel me to voice my objections. 

In reaching its conclusion, the Court notably rejected the notion that SEP royalties should be systematically based upon the “Smallest Saleable Patent Practicing Unit” (or SSPPU):

Even if we accept that the modem chip in a cellphone is the cellphone’s SSPPU, the district court’s analysis is still fundamentally flawed. No court has held that the SSPPU concept is a per se rule for “reasonable royalty” calculations; instead, the concept is used as a tool in jury cases to minimize potential jury confusion when the jury is weighing complex expert testimony about patent damages.

Similarly, it saw no objection to Qualcomm licensing its technology at the OEM level (rather than the component level):

Qualcomm’s rationale for “switching” to OEM-level licensing was not “to sacrifice short-term benefits in order to obtain higher profits in the long run from the exclusion of competition,” the second element of the Aspen Skiing exception. Aerotec Int’l, 836 F.3d at 1184 (internal quotation marks and citation omitted). Instead, Qualcomm responded to the change in patent-exhaustion law by choosing the path that was “far more lucrative,” both in the short term and the long term, regardless of any impacts on competition. 

Finally, the Court concluded that a firm breaching its FRAND pledges did not automatically amount to anticompetitive conduct: 

We decline to adopt a theory of antitrust liability that would presume anticompetitive conduct any time a company could not prove that the “fair value” of its SEP portfolios corresponds to the prices the market appears willing to pay for those SEPs in the form of licensing royalty rates.

Taken together, these findings paint a very clear picture. The Qualcomm Court repeatedly rejected the radical idea that US antitrust law should concern itself with the prices charged by monopolists — as opposed to practices that allow firms to illegally acquire or maintain a monopoly position. The words of Learned Hand and those of Antonin Scalia (respectively, below) loom large:

The successful competitor, having been urged to compete, must not be turned upon when he wins. 

And,

To safeguard the incentive to innovate, the possession of monopoly power will not be found unlawful unless it is accompanied by an element of anticompetitive conduct.

Other courts (both in the US and abroad) have reached similar conclusions

For instance, a district court in Texas dismissed a suit brought by Continental Automotive Systems (which supplies electronic systems to the automotive industry) against a group of SEP holders. 

Continental challenged the patent holders’ decision to license their technology at the vehicle level, rather than the component level (an allegation very similar to the FTC’s complaint that Qualcomm licensed its SEPs at the OEM, rather than chipset, level). However, following a forceful intervention by the DOJ, the Court ultimately held that the facts alleged by Continental were not indicative of antitrust injury, and it dismissed the case.

Likewise, within weeks of the Qualcomm and Continental decisions, the UK Supreme Court also ruled in favor of SEP holders. In its Unwired Planet ruling, the Court concluded that discriminatory licenses do not automatically infringe competition law (even though they might breach a firm’s contractual obligations):

[I]t cannot be said that there is any general presumption that differential pricing for licensees is problematic in terms of the public or private interests at stake.

In reaching this conclusion, the UK Supreme Court emphasized that the determination of whether licenses are FRAND is first and foremost a matter of contract law. In the case at hand, the most important guide to making this determination was the internal rules of the relevant SDO (as opposed to competition case law):

Since price discrimination is the norm as a matter of licensing practice and may promote objectives which the ETSI regime is intended to promote (such as innovation and consumer welfare), it would have required far clearer language in the ETSI FRAND undertaking to indicate an intention to impose the more strict, “hard-edged” non-discrimination obligation for which Huawei contends. Further, in view of the prevalence of competition laws in the major economies around the world, it is to be expected that any anti-competitive effects from differential pricing would be most appropriately addressed by those laws.

All of this ultimately led the Court to rule in favor of Unwired Planet, dismissing Huawei’s claim that Unwired Planet had infringed competition law by breaching its FRAND pledges.

In short, courts and antitrust authorities on both sides of the Atlantic have repeatedly, and unambiguously, concluded that pricing disputes (albeit in the specific context of technological standards) are generally a matter of contract law. Antitrust (or competition) law intervenes only when unfair, excessive, or discriminatory prices are both caused by anticompetitive behavior and result in anticompetitive injury.

Apple’s Loss Is… Apple’s Gain

Readers might wonder how the above cases relate to Apple’s App Store. But, on closer inspection, the parallels are numerous. As explained above, courts have repeatedly stressed that antitrust enforcement should not concern itself with the allocation of surplus between commercial partners. Yet that is precisely what Epic Games’ suit against Apple is all about.

Indeed, Epic’s central claim is not that it is somehow foreclosed from Apple’s App Store (for example, because Apple might have agreed to exclusively distribute the games of one of Epic’s rivals). Instead, all of its objections boil down to the fact that it would like to access Apple’s store on more favorable terms:

Apple’s conduct denies developers the choice of how best to distribute their apps. Developers are barred from reaching over one billion iOS users unless they go through Apple’s App Store, and on Apple’s terms. […]

Thus, developers are dependent on Apple’s noblesse oblige, as Apple may deny access to the App Store, change the terms of access, or alter the tax it imposes on developers, all in its sole discretion and on the commercially devastating threat of the developer losing access to the entire iOS userbase. […]

By imposing its 30% tax, Apple necessarily forces developers to suffer lower profits, reduce the quantity or quality of their apps, raise prices to consumers, or some combination of the three.

And the parallels with the Qualcomm litigation do not stop there. Epic is effectively asking courts to make Apple monetize its platform at a different level than the one it chose in order to maximize its profits (that is, to stop monetizing at the App Store level). Similarly, Epic Games omits any suggestion of profit sacrifice on Apple’s part, even though profit sacrifice is a critical element of most unilateral-conduct theories of harm. Finally, Epic is challenging conduct that is both the industry norm and the product of a highly competitive setting.

In short, all of Epic’s allegations are about monopoly prices, not monopoly maintenance or monopolization. Accordingly, just as the SEP cases discussed above were plainly beyond the outer bounds of antitrust enforcement (something that the DOJ repeatedly stressed with regard to the Qualcomm case), so too is the current wave of antitrust litigation against Apple. When all is said and done, Apple might thus be relieved that Qualcomm was victorious in their antitrust confrontation. Indeed, the legal principles that caused Apple’s demise against Qualcomm are precisely the ones that will likely enable it to prevail against Epic Games.