
The practice of so-called “self-preferencing” has come to embody the zeitgeist of competition policy for digital markets, as legislative initiatives are undertaken in jurisdictions around the world that seek, in various ways, to constrain large digital platforms from granting favorable treatment to their own goods and services. The core concern cited by policymakers is that gatekeepers may abuse their dual role—as both an intermediary and a trader operating on the platform—to pursue a strategy of biased intermediation that entrenches their power in core markets (defensive leveraging) and extends it to associated markets (offensive leveraging).

In addition to active interventions by lawmakers, self-preferencing has also emerged as a new theory of harm before European courts and antitrust authorities. Should antitrust enforcers be allowed to pursue such a theory, they would gain significant leeway to bypass the legal standards and evidentiary burdens traditionally required to prove that a given business practice is anticompetitive. This should be of particular concern, given the broad range of practices and types of exclusionary behavior that could be characterized as self-preferencing—only some of which may, in some specific contexts, include exploitative or anticompetitive elements.

In a new working paper for the International Center for Law & Economics (ICLE), I provide an overview of the relevant traditional antitrust theories of harm, as well as the emerging case law, to analyze whether and to what extent self-preferencing should be considered a new standalone offense under EU competition law. The experience to date in European case law suggests that courts have been able to address platforms’ self-preferencing practices under existing theories of harm, and that self-preferencing may not be sufficiently novel to constitute a standalone offense.

European Case Law on Self-Preferencing

Practices by digital platforms that might be deemed self-preferencing first garnered significant attention from European competition enforcers with the European Commission’s Google Shopping investigation, which examined whether the search engine’s results pages positioned and displayed its own comparison-shopping service more favorably than the websites of rival comparison-shopping services. According to the Commission’s findings, Google’s conduct fell outside the scope of competition on the merits and could have the effect of extending Google’s dominant position in the national markets for general Internet search into adjacent national markets for comparison-shopping services, in addition to protecting Google’s dominance in its core search market.

Rather than explicitly posit that self-preferencing (a term the Commission did not use) constituted a new theory of harm, the Google Shopping decision described the conduct as belonging to the well-known category of “leveraging.” The Commission therefore did not need to articulate a new legal test, as it held that the conduct fell under a well-established form of abuse. The case did, however, spur debate over whether the legal tests the Commission did apply effectively imposed on Google a principle of equal treatment of rival comparison-shopping services.

But it should be noted that conduct similar to that alleged in the Google Shopping investigation actually came before the High Court of England and Wales several months earlier, this time in a dispute between Google and Streetmap. At issue in that case were the favorable search results Google granted to its own maps, rather than to competing online maps. The UK Court held, however, that the complaint was more appropriately characterized as an allegation of discrimination; it further found that Google’s conduct did not constitute anticompetitive foreclosure. A similar result was reached in May 2020 by the Amsterdam Court of Appeal in the Funda case.

Conversely, in June 2021, the French Competition Authority (AdlC) followed the European Commission in investigating Google’s practices in the digital-advertising sector. Like the Commission, the AdlC did not explicitly refer to self-preferencing, instead describing the conduct as “favoring.”

Given this background and the proliferation of approaches taken by courts and enforcers to address similar conduct, there was significant anticipation for the judgment that the European General Court would ultimately render in the appeal of the Google Shopping decision. While the General Court upheld the Commission’s decision, it framed self-preferencing as a discriminatory abuse. Further, the Court outlined four criteria that differentiated Google’s self-preferencing from competition on the merits.

Specifically, the Court highlighted the “universal vocation” of Google’s search engine—that it is open to all users and designed to index results containing any possible content; the “superdominant” position that Google holds in the market for general Internet search; the high barriers to entry in the market for general search services; and what the Court deemed Google’s “abnormal” conduct—behaving in a way that defied expectations, given a search engine’s business model, and that changed after the company launched its comparison-shopping service.

While the precise contours of what the Court might consider discriminatory abuse aren’t yet clear, the decision’s listed criteria appear to be narrow in scope. This stands at odds with the much broader application of self-preferencing as a standalone abuse, both by the European Commission itself and by some national competition authorities (NCAs).

Indeed, just a few weeks after the General Court’s ruling, the Italian Competition Authority (AGCM) handed down a mammoth fine against Amazon over preferential treatment granted to third-party sellers who use the company’s own logistics and delivery services. Rather than reflecting the qualified set of criteria laid out by the General Court, the Italian decision was clearly inspired by the Commission’s approach in Google Shopping. Where the Commission described self-preferencing as a new form of leveraging abuse, AGCM characterized Amazon’s practices as tying.

Self-preferencing has also been raised as a potential abuse in the context of data and information practices. In November 2020, the European Commission sent Amazon a statement of objections detailing its preliminary view that the company had infringed antitrust rules by making systematic use of non-public business data, gathered from independent retailers who sell on Amazon’s marketplace, to advantage the company’s own retail business. (Amazon responded with a set of commitments currently under review by the Commission.)

Both the Commission and the U.K. Competition and Markets Authority have lodged similar allegations against Facebook over data gathered from advertisers and then used to compete with those advertisers in markets in which Facebook is active, such as classified ads. The Commission’s antitrust proceeding against Apple over its App Store rules likewise highlights concerns that the company may use its platform position to obtain valuable data about the activities and offers of its competitors, while competing developers may be denied access to important customer data.

These enforcement actions brought by NCAs and the Commission appear at odds with the more bounded criteria set out by the General Court in Google Shopping, and raise tremendous uncertainty regarding the scope and definition of the alleged new theory of harm.

Self-Preferencing, Platform Neutrality, and the Limits of Antitrust Law

The growing tendency to invoke self-preferencing as a standalone theory of antitrust harm could serve two significant goals for European competition enforcers. As mentioned earlier, it offers a convenient shortcut that could allow enforcers to skip the legal standards and evidentiary burdens traditionally required to prove anticompetitive behavior. Moreover, it can function, in practice, as a means to impose a neutrality regime on digital gatekeepers, with the aims of both ensuring a level playing field among competitors and neutralizing the potential conflicts of interests implicated by dual-mode intermediation.

The dual roles performed by some platforms continue to fuel the never-ending debate over vertical integration, as well as related concerns that, by giving preferential treatment to its own products and services, an integrated provider may leverage its dominance from one market into related markets. From this perspective, self-preferencing is an inevitable byproduct of the emergence of ecosystems.

However, as the Australian Competition and Consumer Commission has recognized, self-preferencing conduct is “often benign.” Furthermore, the total value generated by an ecosystem depends on the activities of independent complementors. Those activities are not completely under the platform’s control, although the platform is required to establish and maintain the governance structures regulating access to and interactions around that ecosystem.

Given this reality, a complete ban on self-preferencing may call the very existence of ecosystems into question, challenging their design and monetization strategies. Preferential treatment can take many different forms with many different potential effects, all stemming from platforms’ many different business models. This counsels for a differentiated, case-by-case, and effects-based approach to assessing the alleged competitive harms of self-preferencing.

Antitrust law does not impose on platforms a general duty to ensure neutrality by sharing their competitive advantages with rivals. Moreover, possessing a competitive advantage does not automatically translate into an anticompetitive effect. As the European Court of Justice recently stated in Servizio Elettrico Nazionale, competition law is not intended to protect the competitive structure of the market, but rather to protect consumer welfare. Accordingly, not every exclusionary effect is detrimental to competition. Distinctions must be drawn between foreclosure and anticompetitive foreclosure, as only the latter may be penalized under antitrust law.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

In Free to Choose, Milton Friedman famously noted that there are four ways to spend money[1] (a short schematic sketch follows the list):

  1. Spending your own money on yourself. For example, buying groceries or lunch. There is a strong incentive to economize and to get full value.
  2. Spending your own money on someone else. For example, buying a gift for another. There is a strong incentive to economize, but perhaps less to achieve full value from the other person’s point of view. Altruism is admirable, but it differs from value maximization, since—strictly speaking—giving cash would maximize the other’s value. Perhaps the point of a gift is precisely that it is not cash, and thus not a strict maximization of the other person’s welfare from their own point of view.
  3. Spending someone else’s money on yourself. For example, an expensed business lunch. “Pass me the filet mignon and Chateau Lafite! Do you have one of those menus without any prices?” There is a strong incentive to get maximum utility, but there is little incentive to economize.
  4. Spending someone else’s money on someone else. For example, applying the proceeds of taxes or donations. There may be an indirect desire to see utility, but incentives for quality and cost management are often diminished.
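To make the two axes of Friedman’s framework explicit (whose money is spent, and on whom), here is a minimal illustrative sketch in Python; the incentive labels paraphrase the list above and are not Friedman’s own wording.

```python
# Friedman's spending matrix, stated schematically. The two axes are whose
# money is spent and on whose behalf; labels paraphrase the list above.
FRIEDMAN_MATRIX = {
    ("own money", "yourself"):         "economize and seek full value",
    ("own money", "someone else"):     "economize, but a weaker grip on the recipient's value",
    ("others' money", "yourself"):     "seek full value, but little reason to economize",
    ("others' money", "someone else"): "weak incentives on both margins",
}

def incentives(whose_money: str, spent_on: str) -> str:
    """Return the incentive profile for a given spending scenario."""
    return FRIEDMAN_MATRIX[(whose_money, spent_on)]

# Category (4) in the list above: spending someone else's money on someone else.
print(incentives("others' money", "someone else"))
```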

This framework can be criticized. Altruism has a role. Not all motives are selfish. There is an important role for action to help those less fortunate, which might mean, for instance, that a charity gains more utility from category (4) (assisting the needy) than from category (3) (the charity’s holiday party). It always depends on the facts and the context. However, there is certainly a grain of truth in the observation that charity begins at home and that, in the final analysis, people are best at managing their own affairs.

How would this insight apply to data interoperability? The difficult cases of assisting the needy do not arise here: there is no serious sense in which data interoperability does, or does not, result in destitution. Thus, Friedman’s observations seem to ring true: when spending data, those whose data it is seem most likely to maximize its value. This is especially so where collection of data responds to incentives—that is, the amount of data collected and processed responds to how much control over the data is possible.

The obvious exception to this would be a case of market power. If there is a monopoly with persistent barriers to entry, then the incentive may not be to maximize total utility, but rather to limit data handling so that a higher price can be charged for the lesser amount of data that remains available. This has arguably been seen with some data-handling rules: the “Jedi Blue” agreement on advertising bidding, Apple’s Intelligent Tracking Prevention and App Tracking Transparency, and Google’s proposed Privacy Sandbox all restrict the ability of others to handle data. Indeed, they may fail Friedman’s framework, since they amount to the platform deciding how to spend others’ data—in this case, by not allowing them to collect and process it at all.

It should be emphasized, though, that this is a special case. It depends on market power, and existing antitrust and competition laws speak to it. The courts will decide whether cases like Daily Mail v Google and Texas et al. v Google show illegal monopolization of data flows, so as to fall within this special case of market power. Outside the United States, cases like the U.K. Competition and Markets Authority’s Google Privacy Sandbox commitments and the European Union’s proposed commitments with Amazon seek to allow others to continue to handle their data and to prevent exclusivity from arising from platform dynamics, which could happen if a large platform prevents others from deciding how to account for data they are collecting. It will be recalled that even Robert Bork thought there was a risk of market-power harms from the large Microsoft Windows platform a generation ago.[2] Where market power risks are proven, there is a strong case that data exclusivity raises concerns because of an artificial barrier to entry. This would cease to be true only if the benefits of centralized data control outweighed the deadweight loss from data restrictions (though query how well the legal processes verify this).

Yet the latest proposals go well beyond this. A broad interoperability right amounts to “open season” for spending others’ data. This makes perfect sense in the European Union, where there is no large domestic technology platform, meaning that the data is essentially owned by foreign entities (mostly, the shareholders of successful U.S. and Chinese companies). It must be very tempting to run an industrial policy on the basis that “we’ll never be Google” and thus to embrace “sharing is caring” as to others’ data.

But this would transgress the warning from Friedman: would people optimize data collection if it is open to mandatory sharing even without proof of market power? It is deeply concerning that the EU’s DATA Act is accompanied by an infographic that suggests that coffee-machine data might be subject to mandatory sharing, to allow competition in services related to the data (e.g., sales of pods; spare-parts automation). There being no monopoly in coffee machines, this simply forces vertical disintegration of data collection and handling. Why put a data-collection system into a coffee maker at all, if it is to be a common resource? Friedman’s category (4) would apply: the data is taken and spent by another. There is no guarantee that there would be sensible decision making surrounding the resource.

It will be interesting to see how common-law jurisdictions approach this issue. At the risk of stating the obvious, the polity in continental Europe differs from that in the English-speaking democracies when it comes to whether the collective, or the individual, should be in the driving seat. A close read of the UK CMA’s Google commitments is interesting, in that paragraph 30 requires no self-preferencing in data collection and requires future data-handling systems to be designed with impacts on competition in mind. No doubt the CMA is seeking to prevent data-handling exclusivity on the basis that this prevents companies from using their data collection to compete. This is far from the EU DATA Act’s position in that it is certainly not a right to handle Google’s data: it is simply a right to continue to process one’s own data.

U.S. proposals are at an earlier stage. It would seem important, as a matter of principle, not to make arbitrary decisions about vertical integration in data systems, and to identify specific market-power concerns instead, in line with common-law approaches to antitrust.

It might be very attractive to the EU to spend others’ data on their behalf, but that does not make it right. Those working on the U.S. proposals would do well to ensure that there is a meaningful market-power gate to avoid unintended consequences.

Disclaimer: The author was engaged for expert advice relating to the UK CMA’s Privacy Sandbox case on behalf of the complainant Marketers for an Open Web.


[1] Milton Friedman, Free to Choose (1980), pp. 115-119.

[2] Comments at the Yale Law School conference, Robert H. Bork’s influence on Antitrust Law, Sep. 27-28, 2013.

The wave of populist antitrust that has been embraced by regulators and legislators in the United States, United Kingdom, European Union, and other jurisdictions rests on the assumption that currently dominant platforms occupy entrenched positions that only government intervention can dislodge. Following this view, Facebook will forever dominate social networking, Amazon will forever dominate cloud computing, Uber and Lyft will forever dominate ridesharing, and Amazon and Netflix will forever dominate streaming. This assumption of platform invincibility is so well-established that some policymakers advocate significant interventions without making any meaningful inquiry into whether a seemingly dominant platform actually exercises market power.

Yet this assumption is not supported by historical patterns in platform markets. It is true that network effects drive platform markets toward “winner-take-most” outcomes. But the winner is often toppled quickly and without much warning. There is no shortage of examples.

In 2007, a columnist in The Guardian observed that “it may already be too late for competitors to dislodge MySpace” and quoted an economist as authority for the proposition that “MySpace is well on the way to becoming … a natural monopoly.” About one year later, Facebook had overtaken the MySpace “monopoly” in the social-networking market. Similarly, it was once thought that BlackBerry would forever dominate the mobile-communications device market, eBay would always dominate the e-commerce market, and AOL would always dominate the internet-service-portal market (a market that no longer even exists). The list of digital dinosaurs could go on.

All those tech leaders were challenged by entrants and descended into irrelevance (or reduced relevance, in eBay’s case). This occurred through the force of competition, not government intervention.

Why This Time is Probably Not Different

Given this long line of market precedents, current legislative and regulatory efforts to “restore” competition through extensive intervention in digital-platform markets require that we assume that “this time is different.” Just as that slogan has been repeatedly rebutted in the financial markets, so too is it likely to be rebutted in platform markets. 

There is already supporting evidence. 

In the cloud market, Amazon’s AWS now faces vigorous competition from Microsoft Azure and Google Cloud. In the streaming market, Amazon and Netflix face stiff competition from Disney+ and Apple TV+, just to name a few well-resourced rivals. In the social-networking market, Facebook now competes head-to-head with TikTok and seems to be losing. The market power once commonly attributed to leading food-delivery platforms such as Grubhub, UberEats, and DoorDash is implausible given persistent losses in most cases and the continuous entry of new services into a rich variety of local and product-market niches.

Those who have advocated antitrust intervention on a fast-track schedule may remain unconvinced by these inconvenient facts. But the market is not. 

Investors have already recognized Netflix’s vulnerability to competition, as reflected by a 35% fall in its stock price on April 20 and a decline of more than 60% over the past 12 months. Meta, Facebook’s parent, also experienced a reappraisal, falling more than 26% on Feb. 3 and more than 35% in the past 12 months. Uber, the pioneer of the ridesharing market, has declined by almost 50% over the past 12 months, while Lyft, its principal rival, has lost more than 60% of its value. These price freefalls suggest that antitrust populists may be pursuing solutions to a problem that market forces are already starting to address.

The Forgotten Curse of the Incumbent

For some commentators, the sharp downturn in the fortunes of the so-called “Big Tech” firms would not come as a surprise.

It has long been observed by some scholars and courts that a dominant firm “carries the seeds of its own destruction”—a phrase used by then-professor and later-Judge Richard Posner, writing in the University of Chicago Law Review in 1971. The reason: a dominant firm is liable to exhibit high prices, mediocre quality, or lackluster innovation, which then invites entry by more adept challengers. However, this view has been dismissed as outdated in digital-platform markets, where incumbents are purportedly protected by network effects and switching costs that make it difficult for entrants to attract users. Depending on the set of assumptions selected by an economic modeler, each contingency is equally plausible in theory.

The plunging values of leading platforms supply real-world evidence that favors the self-correction hypothesis. It is often overlooked that network effects can work in both directions, resulting in a precipitous fall from market leader to laggard. Once users start abandoning a dominant platform for a new competitor, network effects operating in reverse can cause a “run for the exits” that leaves the leader with little time to recover. Just ask Nokia, the world’s leading (and seemingly unbeatable) smartphone brand until the Apple iPhone came along.

Why Market Self-Correction Outperforms Regulatory Correction

Market self-correction inherently outperforms regulatory correction: it operates far more rapidly and relies on consumer preferences to reallocate market leadership—a result perfectly consistent with antitrust’s mission to preserve “competition on the merits.” In contrast, policymakers can misdiagnose the competitive effects of business practices; are susceptible to the influence of private interests (especially those that are unable to compete on the merits); and often mispredict the market’s future trajectory. For Exhibit A, see the protracted antitrust litigation by the U.S. Department of Justice against IBM, which was filed in 1969 and ended with the withdrawal of the suit in 1982. Given the launch of the Apple II in 1977, the IBM PC in 1981, and the entry of multiple “PC clones,” the forces of creative destruction swiftly displaced IBM from market leadership in the computing industry.

Regulators and legislators around the world have emphasized the urgency of taking dramatic action to correct claimed market failures in digital environments, casting aside prudential concerns over the consequences if any such failure proves to be illusory or temporary. 

But the costs of regulatory failure can be significant and long-lasting. Markets must operate under unnecessary compliance burdens that are difficult to modify. Regulators’ enforcement resources are diverted, and businesses are barred from adopting practices that would benefit consumers. In particular, proposed breakup remedies advocated by some policymakers would undermine the scale economies that have enabled platforms to push down prices, an important consideration in a time of accelerating inflation.

Conclusion

The high concentration levels and certain business practices in digital-platform markets certainly raise important concerns as a matter of antitrust (as well as privacy, intellectual property, and other bodies of) law. These concerns merit scrutiny and may necessitate appropriately targeted interventions. Yet, any policy steps should be anchored in the factually grounded analysis that has characterized decades of regulatory and judicial action to implement the antitrust laws with appropriate care. Abandoning this nuanced framework for a blunt approach based on reflexive assumptions of market power is likely to undermine, rather than promote, the public interest in competitive markets.

Recent antitrust forays on both sides of the Atlantic have unfortunate echoes of the oldie-but-baddie “efficiencies offense” that once plagued American and European merger analysis (and, more broadly, reflected a “big is bad” theory of antitrust). After a very short overview of the history of merger efficiencies analysis under American and European competition law, we briefly examine two current enforcement matters “on both sides of the pond” that impliedly give rise to such a concern. Those cases may regrettably foreshadow a move by enforcers to downplay the importance of efficiencies, if not openly reject them.

Background: The Grudging Acceptance of Merger Efficiencies

Not long ago, economically literate antitrust teachers in the United States enjoyed poking fun at such benighted 1960s Supreme Court decisions as Procter & Gamble (following in the wake of Brown Shoe and Philadelphia National Bank). Those holdings—which not only rejected efficiencies justifications for mergers, but indeed “treated efficiencies more as an offense”—seemed a thing of the past, put to rest by the rise of an economic approach to antitrust. Several early European Commission merger-control decisions also arguably embraced an “efficiencies offense.”

Starting in the 1980s, the promulgation of increasingly economically sophisticated merger guidelines in the United States led to the acceptance of efficiencies (albeit less than perfectly) as an important aspect of integrated merger analysis. Several practitioners have claimed, nevertheless, that “efficiencies are seldom credited and almost never influence the outcome of mergers that are otherwise deemed anticompetitive.” Commissioner Christine Wilson has argued that the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) still have work to do in “establish[ing] clear and reasonable expectations for what types of efficiency analysis will and will not pass muster.”

In its first few years of merger review, which was authorized in 1989, the European Commission was hostile to merger-efficiency arguments.  In 2004, however, the EC promulgated horizontal merger guidelines that allow for the consideration of efficiencies, but only if three cumulative conditions (consumer benefit, merger specificity, and verifiability) are satisfied. A leading European competition practitioner has characterized several key European Commission merger decisions in the last decade as giving rather short shrift to efficiencies. In light of that observation, the practitioner has advocated that “the efficiency offence theory should, once again, be repudiated by the Commission, in order to avoid deterring notifying parties from bringing forward perfectly valid efficiency claims.”

In short, although the actual weight enforcers accord to efficiency claims is a matter of debate, efficiency justifications are cognizable, subject to constraints, as a matter of U.S. and European Union merger-enforcement policy. Whether that will remain the case is, unfortunately, uncertain, given DOJ and FTC plans to revise merger guidelines, as well as EU talk of convergence with U.S. competition law.

Two Enforcement Matters with ‘Efficiencies Offense’ Overtones

Two Facebook-related matters currently before competition enforcers—one in the United States and one in the United Kingdom—have implications for the possible revival of an antitrust “efficiencies offense” as a “respectable” element of antitrust policy. (I use the term Facebook to reference both the platform company and its corporate parent, Meta.)

FTC v. Facebook

The FTC’s 2020 federal district court monopolization complaint against Facebook, still at the motion-to-dismiss stage for the amended complaint (see here for an overview of the initial complaint and the judge’s dismissal of it), rests substantially on claims that Facebook’s acquisitions of Instagram and WhatsApp harmed competition. As Facebook points out in its recent reply brief supporting its motion to dismiss the FTC’s amended complaint, the FTC appears to be faulting merger-related efficiencies in critiquing those acquisitions. Specifically:

[The amended complaint] depends on the allegation that Facebook’s expansion of both Instagram and WhatsApp created a “protective ‘moat’” that made it harder for rivals to compete because Facebook operated these services at “scale” and made them attractive to consumers post-acquisition. . . . The FTC does not allege facts that, left on their own, Instagram and WhatsApp would be less expensive (both are free; Facebook made WhatsApp free); or that output would have been greater (their dramatic expansion at “scale” is the linchpin of the FTC’s “moat” theory); or that the products would be better in any specific way.

The FTC’s concerns about a scale-based, merger-related output expansion that benefited consumers and thereby allegedly enhanced Facebook’s market position eerily echo the commission’s concerns in Procter & Gamble that merger-related cost-reducing joint efficiencies in advertising had an anticompetitive “entrenchment” effect. Both positions, in essence, characterize output-increasing efficiencies as harmful to competition: in other words, as “efficiencies offenses.”

UK Competition and Markets Authority (CMA) v. Facebook

The CMA announced Dec. 1 that it had decided to retrospectively block Facebook’s 2020 acquisition of Giphy, which is “a company that provides social media and messaging platforms with animated GIF images that users can embed in posts and messages. . . .  These platforms license the use of Giphy for its users.”

The CMA theorized that Facebook could harm competition by (1) restricting Facebook’s competitors’ access to Giphy’s digital libraries; and (2) preventing Giphy from developing into a potential competitor to Facebook’s display-advertising business.

As a CapX analysis explains, the CMA’s theory of harm to competition, based on theoretical speculation, is problematic. First, a behavioral remedy short of divestiture, such as requiring Facebook to maintain open access to Giphy’s GIF libraries, would deal with the threat of restricted access. Indeed, Facebook promised at the time of the acquisition that Giphy would maintain its library and make it widely available. Second, “loss of a single, relatively small, potential competitor out of many cannot be counted as a significant loss for competition, since so many other potential and actual competitors remain.” Third, given the purely theoretical and questionable danger to future competition, the CMA “has blocked this deal on relatively speculative potential competition grounds.”

Apart from the weakness of the CMA’s case for harm to competition, the CMA appears to ignore a substantial potential dynamic integrative efficiency flowing from Facebook’s acquisition of Giphy. As David Teece explains:

Facebook’s acquisition of Giphy maintained Giphy’s assets and furthered its innovation in Facebook’s ecosystem, strengthening that ecosystem in competition with others; and via Giphy’s APIs, strengthening the ecosystems of other service providers as well.

There is no evidence that the CMA seriously took account of this integrative efficiency, which benefits consumers by offering them a richer experience from Facebook and its subsidiary Instagram, and which spurs competing ecosystems to enhance their offerings to consumers as well. This is a failure to properly account for an efficiency. Moreover, to the extent that the CMA viewed these integrative benefits as somehow anticompetitive (because they enhanced Facebook’s competitive position), the improvement of Facebook’s ecosystem could have been deemed a type of “efficiencies offense.”

Are the Facebook Cases Merely Random Straws in the Wind?

At first blush, it might seem that too much is being read into the apparent slighting of efficiencies in the two current Facebook cases. Nevertheless, recent policy rhetoric suggests that economic efficiencies arguments (whose status was tenuous at enforcement agencies to begin with) may actually be viewed as “offensive” by the new breed of enforcers.

In her Sept. 22 policy statement on “Vision and Priorities for the FTC,” Chair Lina Khan advocated focusing on the possible competitive harm flowing from actions of “gatekeepers and dominant middlemen,” and from “one-sided [vertical] contract provisions” that are “imposed by dominant firms.” No suggestion can be found in the statement that such vertical relationships often confer substantial benefits on consumers. This hints at a new campaign by the FTC against vertical restraints (as opposed to an emphasis on clearly welfare-inimical conduct) that could discourage a wide range of efficiency-producing contracts.

Chair Khan also sponsored the FTC’s July 2021 rescission of its Section 5 Policy Statement on Unfair Methods of Competition, which had emphasized the primacy of consumer welfare as the guiding principle underlying FTC antitrust enforcement. A willingness to set aside (or place a lower priority on) consumer welfare considerations suggests a readiness to ignore efficiency justifications that benefit consumers.

Even more troubling, a direct attack on the consideration of efficiencies is found in the statement accompanying the FTC’s September 2021 withdrawal of the 2020 Vertical Merger Guidelines:

The statement by the FTC majority . . . notes that the 2020 Vertical Merger Guidelines had improperly contravened the Clayton Act’s language with its approach to efficiencies, which are not recognized by the statute as a defense to an unlawful merger. The majority statement explains that the guidelines adopted a particularly flawed economic theory regarding purported pro-competitive benefits of mergers, despite having no basis of support in the law or market reality.

Also noteworthy is Khan’s seeming interest (found in her writings here, here, and here) in reviving Robinson-Patman Act enforcement. What’s worse, President Joe Biden’s July 2021 Executive Order on Competition explicitly endorses FTC investigation of “retailers’ practices on the conditions of competition in the food industries, including any practices that may violate [the] Robinson-Patman Act” (emphasis added). Those troubling statements from the administration ignore the widespread scholarly disdain for Robinson-Patman, which is almost unanimously viewed as an attack on efficiencies in distribution. For example, in recommending the act’s repeal in 2007, the congressionally established Antitrust Modernization Commission stressed that the act “protects competitors against competition and punishes the very price discounting and innovation and distribution methods that the antitrust laws otherwise encourage.”

Finally, newly confirmed Assistant Attorney General for Antitrust Jonathan Kanter (who is widely known as a Big Tech critic) has expressed his concerns about the consumer welfare standard and the emphasis on economics in antitrust analysis. Such concerns also suggest, at least by implication, that the Antitrust Division under Kanter’s leadership may manifest a heightened skepticism toward efficiencies justifications.

Conclusion

Recent straws in the wind suggest that an anti-efficiencies hay pile is in the works. Although antitrust agencies have not yet officially rejected the consideration of efficiencies, nor endorsed an “efficiencies offense,” the signs are troubling. Newly minted agency leaders’ skepticism toward antitrust economics, combined with their de-emphasis of the consumer welfare standard and efficiencies (at least in the merger context), suggest that even strongly grounded efficiency explanations may be summarily rejected at the agency level. In foreign jurisdictions, where efficiencies are even less well-established, and enforcement based on mere theory (as opposed to empiricism) is more widely accepted, the outlook for efficiencies stories appears to be no better.     

One powerful factor, however, should continue to constrain the anti-efficiencies movement, at least in the United States: the federal courts. As demonstrated most recently in the 9th U.S. Circuit Court of Appeals’ FTC v. Qualcomm decision, American courts remain committed to insisting on empirical support for theories of harm and on seriously considering business justifications for allegedly suspect contractual provisions. (The role of foreign courts in curbing prosecutorial excesses not grounded in economics, and in weighing efficiencies, depends upon the jurisdiction, but in general such courts are far less of a constraint on enforcers than American tribunals.)

While the DOJ and FTC (and, perhaps to a lesser extent, foreign enforcers) will have to keep the judiciary in mind in deciding to bring enforcement actions, the denigration of efficiencies by the agencies still will have an unfortunate demonstration effect on the private sector. Given the cost (both in resources and in reputational capital) associated with antitrust investigations, and the inevitable discounting for the risk of projects caught up in such inquiries, a publicly proclaimed anti-efficiencies enforcement philosophy will do damage. On the margin, it will lead businesses to introduce fewer efficiency-seeking improvements that could be (wrongly) characterized as “strengthening” or “entrenching” market dominance. Such business decisions, in turn, will be welfare-inimical; they will deny consumers the benefit of efficiencies-driven product and service enhancements, and slow the rate of business innovation.

As such, it is to be hoped that, upon further reflection, U.S. and foreign competition enforcers will see the light and publicly proclaim that they will fully weigh efficiencies in analyzing business conduct. The “efficiencies offense” was a lousy tune. That “oldie-but-baddie” should not be replayed.

On both sides of the Atlantic, 2021 has seen legislative and regulatory proposals to mandate that various digital services be made interoperable with others. Several bills to do so have been proposed in Congress; the EU’s proposed Digital Markets Act would mandate interoperability in certain contexts for “gatekeeper” platforms; and the UK’s competition regulator will be given powers to require interoperability as part of a suite of “pro-competitive interventions” that are hoped to increase competition in digital markets.

The European Commission plans to require Apple to use USB-C charging ports on iPhones to allow interoperability among different chargers (to save, the Commission estimates, two grams of waste per European per year). Demands for various forms of interoperability have been at the center of at least two major lawsuits: Epic’s case against Apple and a separate lawsuit against Apple by an app called Coronavirus Reporter. In July, a group of pro-intervention academics published a white paper calling interoperability “the ‘Super Tool’ of Digital Platform Governance.”

What is meant by the term “interoperability” varies widely. It can refer to relatively narrow interventions in which user data from one service is made directly portable to other services, rather than the user having to download and later re-upload it. At the other end of the spectrum, it could mean regulations to require virtually any vertical integration be unwound. (Should a Tesla’s engine be “interoperable” with the chassis of a Land Rover?) And in between are various proposals for specific applications of interoperability—some product working with another made by another company.

Why Isn’t Everything Interoperable?

The world is filled with examples of interoperability that arose through the (often voluntary) adoption of standards. Credit card companies oversee massive interoperable payments networks; screwdrivers are interoperable with screws made by other manufacturers, although different standards exist; many U.S. colleges accept credits earned at other accredited institutions. The containerization revolution in shipping is an example of interoperability leading to enormous efficiency gains, with a government subsidy to encourage the adoption of a single standard.

And interoperability can emerge over time. Microsoft Word used to be maddeningly non-interoperable with other word processors. Once OpenOffice entered the market, Microsoft patched its product to support OpenOffice files; Word documents now work slightly better with products like Google Docs, as well.

But there are also lots of things that could be interoperable but aren’t, like the Tesla motors that can’t easily be removed and added to other vehicles. The charging cases for Apple’s AirPods and Sony’s wireless earbuds could, in principle, be shaped to be interoperable. Medical records could, in principle, be standardized and made interoperable among healthcare providers, and it’s easy to imagine some of the benefits that could come from being able to plug your medical history into apps like MyFitnessPal and Apple Health. Keurig pods could, in principle, be interoperable with Nespresso machines. Your front door keys could, in principle, be made interoperable with my front door lock.

The reason not everything is interoperable like this is because interoperability comes with costs as well as benefits. It may be worth letting different earbuds have different designs because, while it means we sacrifice easy interoperability, we gain the ability for better designs to be brought to market and for consumers to have choice among different kinds. We may find that, while digital health records are wonderful in theory, the compliance costs of a standardized format might outweigh those benefits.

Manufacturers may choose to sell an expensive device with a relatively cheap upfront price tag, relying on consumer “lock in” for a stream of supplies and updates to finance the “full” price over time, provided the consumer likes it enough to keep using it.

Interoperability can remove a layer of security. I don’t want my bank account to be interoperable with any payments app, because it increases the risk of getting scammed. What I like about my front door lock is precisely that it isn’t interoperable with anyone else’s key. Lots of people complain about popular Twitter accounts being obnoxious, rabble-rousing, and stupid; it’s not difficult to imagine the benefits of a new, similar service that wanted everyone to start from the same level and so did not allow users to carry their old Twitter following with them.

There thus may be particular costs that prevent interoperability from being worth the tradeoff, such as that:

  1. It might be too costly to implement and/or maintain.
  2. It might prescribe a certain product design and prevent experimentation and innovation.
  3. It might add too much complexity and/or confusion for users, who may prefer not to have certain choices.
  4. It might increase the risk of something not working, or of security breaches.
  5. It might prevent certain pricing models that increase output.
  6. It might compromise some element of the product or service that benefits specifically from not being interoperable.

In a market that is functioning reasonably well, we should be able to assume that competition and consumer choice will discover the desirable degree of interoperability among different products. If there are benefits to making your product interoperable with others that outweigh the costs of doing so, that should give you an advantage over competitors and allow you to compete them away. If the costs outweigh the benefits, the opposite will happen—consumers will choose products that are not interoperable with each other.

In short, we cannot infer from the absence of interoperability that something is wrong, since we frequently observe that the costs of interoperability outweigh the benefits.

Of course, markets do not always lead to optimal outcomes. In cases where a market is “failing”—e.g., because competition is obstructed, or because there are important externalities that are not accounted for by the market’s prices—certain goods may be under-provided. In the case of interoperability, this can happen if firms struggle to coordinate upon a single standard, or because firms’ incentives to establish a standard are not aligned with the social optimum (i.e., interoperability might be optimal and fail to emerge, or vice versa).

But the analysis cannot stop here: just because a market might not be functioning well and does not currently provide some form of interoperability, we cannot assume that, if it were functioning well, it would provide interoperability.

Interoperability for Digital Platforms

Since we know that many clearly functional markets and products do not provide all forms of interoperability that we could imagine them providing, it is perfectly possible that many badly functioning markets and products would still not provide interoperability, even if they did not suffer from whatever has obstructed competition or effective coordination in that market. In these cases, imposing interoperability would destroy value.

It would therefore be a mistake to assume that more interoperability in digital markets would be better, even if you believe that those digital markets suffer from too little competition. Let’s say, for the sake of argument, that Facebook/Meta has market power that allows it to keep its subsidiary WhatsApp from being interoperable with other competing services. Even then, we still would not know if WhatsApp users would want that interoperability, given the trade-offs.

A look at smaller competitors like Telegram and Signal, which we have no reason to believe have market power, demonstrates that they also are not interoperable with other messaging services. Signal is run by a nonprofit, and thus has little incentive to obstruct users for the sake of market power. Why does it not provide interoperability? I don’t know, but I would speculate that the security risks and technical costs of doing so outweigh the expected benefit to Signal’s users. If that is true, it seems strange to assume away the potential costs of making WhatsApp interoperable, especially if those costs may relate to things like security or product design.

Interoperability and Contact-Tracing Apps

A full consideration of the trade-offs is also necessary to evaluate the lawsuit that Coronavirus Reporter filed against Apple. Coronavirus Reporter was a COVID-19 contact-tracing app that Apple rejected from the App Store in March 2020. Its makers are now suing Apple for, they say, stifling competition in the contact-tracing market. Apple’s defense is that it only allowed COVID-19 apps from “recognised entities such as government organisations, health-focused NGOs, companies deeply credentialed in health issues, and medical or educational institutions.” In effect, by barring it from the App Store, and offering no other way to install the app, Apple denied Coronavirus Reporter interoperability with the iPhone. Coronavirus Reporter argues that Apple should be punished for doing so.

No doubt, Apple’s decision did reduce competition among COVID-19 contact tracing apps. But increasing competition among COVID-19 contact-tracing apps via mandatory interoperability might have costs in other parts of the market. It might, for instance, confuse users who would like a very straightforward way to download their country’s official contact-tracing app. Or it might require access to certain data that users might not want to share, preferring to let an intermediary like Apple decide for them. Narrowing choice like this can be valuable, since it means individual users don’t have to research every single possible option every time they buy or use some product. If you don’t believe me, turn off your spam filter for a few days and see how you feel.

In this case, the potential costs of the access that Coronavirus Reporter wants are obvious: while it may have had the best contact-tracing service in the world, sorting it from other less reliable and/or scrupulous apps may have been difficult and the risk to users may have outweighed the benefits. As Apple and Facebook/Meta constantly point out, the security risks involved in making their services more interoperable are not trivial.

It isn’t competition among COVID-19 apps that is important, per se. As ever, competition is a means to an end, and maximizing it in one context—via, say, mandatory interoperability—cannot be judged without knowing the trade-offs that maximization requires. Even if we thought of Apple as a monopolist over iPhone users—ignoring the fact that Apple’s iPhones obviously are substitutable with Android devices to a significant degree—it wouldn’t follow that the more interoperability, the better.

A ‘Super Tool’ for Digital Market Intervention?

The Coronavirus Reporter example may feel like an “easy” case for opponents of mandatory interoperability. Of course we don’t want anything calling itself a COVID-19 app to have totally open access to people’s iPhones! But what’s vexing about mandatory interoperability is that it’s very hard to sort the sensible applications from the silly ones, and most proposals don’t even try. The leading U.S. House proposal for mandatory interoperability, the ACCESS Act, would require that platforms “maintain a set of transparent, third-party-accessible interfaces (including application programming interfaces) to facilitate and maintain interoperability with a competing business or a potential competing business,” based on APIs designed by the Federal Trade Commission.

The only nod to the costs of this requirement comes in provisions requiring platforms to set “reasonably necessary” security standards and allowing the removal of third-party apps that don’t “reasonably secure” user data. No other costs of mandatory interoperability are acknowledged at all.
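To see how thin that acknowledgment is, consider a purely hypothetical sketch of the kind of third-party-accessible interface the bill contemplates. Every name and check below is invented for illustration; nothing here is drawn from the bill’s text or any actual FTC specification.

```python
# Hypothetical sketch only: an invented data-portability endpoint of the sort
# the ACCESS Act would mandate. Names and checks are illustrative, not statutory.
from dataclasses import dataclass

@dataclass
class PortabilityRequest:
    requesting_business: str   # a competing or potential competing business
    user_id: str
    data_scope: list           # e.g., ["posts", "contacts", "messages"]

def handle_request(req: PortabilityRequest, user_consented: bool,
                   requester_meets_security_standard: bool) -> str:
    """The bill's only cost controls map onto the two guard clauses below:
    a 'reasonably necessary' security standard, and exclusion of third parties
    that fail to 'reasonably secure' user data. Every other trade-off (product
    design, complexity, monetization, reliability) goes unpriced."""
    if not user_consented:
        return "denied: no user consent"
    if not requester_meets_security_standard:
        return "denied: requester fails the security standard"
    return f"export {req.data_scope} for user {req.user_id} to {req.requesting_business}"
```

Everything other than those two guard clauses would be left to the regulator-designed API, which is precisely where the unacknowledged costs would live.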

The same goes for the even more substantive proposals for mandatory interoperability. Released in July 2021, “Equitable Interoperability: The ‘Super Tool’ of Digital Platform Governance” is co-authored by some of the most esteemed competition economists in the business. While it details obscure points about matters like how chat groups might work across interoperable chat services, it is virtually silent on any of the costs or trade-offs of its proposals. Indeed, the first “risk” the report identifies is that regulators might be too slow to impose interoperability in certain cases! It reads like interoperability has been asked what its biggest weaknesses are in a job interview.

Where the report does acknowledge trade-offs—for example, interoperability making it harder for a service to monetize its user base, who can just bypass ads on the service by using a third-party app that blocks them—it just says that the overseeing “technical committee or regulator may wish to create conduct rules” to decide.

Ditto the objection that mandatory interoperability might limit differentiation among competitors: imposing the old micro-USB standard on Apple, for example, might have stopped us from getting the Lightning port. Again, they punt: “We recommend that the regulator or the technical committee consult regularly with market participants and allow the regulated interface to evolve in response to market needs.”

But if we could entrust this degree of product design to regulators, weighing the costs of a feature against its benefits, we wouldn’t need markets or competition at all. And the report just assumes away many other obvious costs: “the working hypothesis we use in this paper is that the governance issues are more of a challenge than the technical issues.” Despite its illustrious panel of co-authors, the report fails to grapple with the most basic counterargument possible: its proposals have costs as well as benefits, and it’s not straightforward to decide which is bigger than which.

Strangely, the report includes a section that “looks ahead” to “Google’s Dominance Over the Internet of Things.” This, the report says, stems from the company’s “market power in device OS’s [that] allows Google to set licensing conditions that position Google to maintain its monopoly and extract rents from these industries in future.” The report claims this inevitability can only be avoided by imposing interoperability requirements.

The authors completely ignore that a smart home interoperability standard has already been developed, backed by a group of 170 companies that include Amazon, Apple, and Google, as well as SmartThings, IKEA, and Samsung. It is open source and, in principle, should allow a Google Home speaker to work with, say, an Amazon Ring doorbell. In markets where consumers really do want interoperability, it can emerge without a regulator requiring it, even if some companies have apparent incentive not to offer it.

If You Build It, They Still Might Not Come

Much of the case for interoperability interventions rests on the presumption that the benefits will be substantial. It’s hard to know how powerful network effects really are in preventing new competitors from entering digital markets, and none of the more substantial reports cited by the “Super Tool” report really tries to find out.

In reality, the cost of switching among services or products is never zero. Simply pointing out that particular costs—such as network effect-created switching costs—happen to exist doesn’t tell us much. In practice, many users are happy to multi-home across different services. I use at least eight different messaging apps every day (Signal, WhatsApp, Twitter DMs, Slack, Discord, Instagram DMs, Google Chat, and iMessage/SMS). I don’t find it particularly costly to switch among them, and have been happy to adopt new services that seemed to offer something new. Discord has built a thriving 150-million-user business, despite these switching costs. What if people don’t actually care if their Instagram DMs are interoperable with Slack?

None of this is to argue that interoperability cannot be useful. But it is often overhyped, and it is difficult to do in practice (because of those annoying trade-offs). After nearly five years, Open Banking in the UK—cited by the “Super Tool” report as an example of what it wants for other markets—still isn’t really finished in terms of functionality. It has required an enormous amount of time and investment by all parties involved and has yet to deliver obvious benefits in terms of consumer outcomes, let alone greater competition among the current accounts that have been made interoperable with other services. (My analysis of the lessons of Open Banking for other services is here.) Phone number portability, which is also cited by the “Super Tool” report, is another example of how hard even simple interventions can be to get right.

The world is filled with cases where we could imagine some benefits from interoperability but choose not to have them, because the costs are greater still. None of this is to say that interoperability mandates can never work, but their benefits can be oversold, especially when their costs are ignored. Many of mandatory interoperability’s more enthusiastic advocates should remember that such trade-offs exist—even for policies they really, really like.

[Judge Douglas Ginsburg was invited to respond to the Beesley Lecture given by Andrea Coscelli, chief executive of the U.K. Competition and Markets Authority (CMA). Both the lecture and Judge Ginsburg’s response were broadcast by the BBC on Oct. 28, 2021. The text of Mr. Coscelli’s Beesley lecture is available on the CMA’s website. Judge Ginsburg’s response follows below.]

Thank you, Victoria, for the invitation to respond to Mr. Coscelli and his proposal for a legislatively founded Digital Markets Unit. Mr. Coscelli is one of the most talented, successful, and creative heads a competition agency has ever had. In the case of the DMU [ed., Digital Markets Unit], however, I think he has let hope triumph over experience and prudence. This is often the case with proposals for governmental reform: Indeed, it has a name, the Nirvana Fallacy, which comes from comparing the imperfectly functioning marketplace with the perfectly functioning government agency. Everything we know about the regulation of competition tells us the unintended consequences may dwarf the intended benefits and the result may be a less, not more, competitive economy. The precautionary principle counsels skepticism about such a major and inherently risky intervention.

Mr. Coscelli made a point in passing that highlights the difference in our perspectives: He said the SMS [ed., strategic market status] merger regime would entail “a more cautious standard of proof.” In our shared Anglo-American legal culture, a more cautious standard of proof means the government would intervene in fewer, not more, market activities; proof beyond a reasonable doubt in criminal cases is a more cautious standard than a mere preponderance of the evidence. I, too, urge caution, but of the traditional kind.

I will highlight five areas of concern with the DMU proposal.

I. Chilling Effects

The DMU’s ability to designate a firm as having strategic market status—or SMS—will cast a cloud over innovative activity in far more sectors than Mr. Coscelli could mention in his lecture. He views the DMU’s reach as limited to a small number of SMS-designated firms; that may prove true, but there is nothing in the proposal limiting the DMU’s reach.

Indeed, the DMU’s authority to regulate digital markets is surely going to be difficult to confine. Almost every major retail activity or consumer-facing firm involves an increasingly significant digital component, particularly after the pandemic forced many more firms online. Deciding which firms the DMU should cover seems easy in theory, but will prove ever more difficult and cumbersome in practice as digital technology continues to evolve. For instance, now that money has gone digital, a bank is little more than a digital platform bringing together lenders (called depositors) and borrowers, much as Amazon brings together buyers and sellers; so, is every bank with market power and an entrenched position to be subject to rules and remedies laid down by the DMU as well as supervision by the bank regulators? Is Aldi in the crosshairs now that it has developed an online retail platform? Match.com, too? In short, the number of SMS firms will likely grow apace in the next few years.

II. SMS Designations Should Not Apply to the Whole Firm

The CMA’s proposal would apply each SMS designation firm-wide, even if the firm has market power in only a single line of business. This will inhibit investment in further diversification and put an SMS firm at a competitive disadvantage across all its businesses.

Perhaps company-wide SMS designations could be justified if the unintended costs were balanced by expected benefits to consumers, but this will not likely be the case. First, there is little evidence linking consumer harm to lines of business in which large digital firms do not have market power. On the contrary, despite the discussion of Amazon’s supposed threat to competition, consumers enjoy lower prices from many more retailers because of the competitive pressure Amazon brings to bear upon them.

Second, the benefits Mr. Coscelli expects the economy to reap from faster government enforcement are, at best, a mixed blessing. The proposal, you see, reverses the usual legal norm, instead making interim relief the rule rather than the exception. If a firm appeals its SMS designation, then under the CMA’s proposal, the DMU’s SMS designations and pro-competition interventions, or PCIs, will not be stayed pending appeal, raising the prospect that a firm’s activities could be regulated for a significant period even though it was improperly designated. Even prevailing in the courts may be a Pyrrhic victory because opportunities will have slipped away. Making matters worse, the DMU’s designation of a firm as SMS will likely receive a high degree of judicial deference, so that errors may never be corrected.

III. The DMU Cannot Be Evidence-Based Given Its Goals and Objectives

The DMU’s stated goal is to “further the interests of consumers and citizens in digital markets by promoting competition and innovation.”[1] The DMU’s objectives for developing codes of conduct are: fair trading, open choices, and trust and transparency.[2] Fairness, openness, trust, and transparency are all concepts that are difficult to define and probably impossible to quantify. Therefore, I fear Mr. Coscelli’s aspiration that the DMU will be an evidence-based, tailored, and predictable regime seems unrealistic. The CMA’s idea of “an evidence-based regime” seems destined to rely mostly upon qualitative conjecture about the potential for the code of conduct to set “rules of the game” that encourage fair trading, open choices, trust, and transparency. Even if the DMU commits to considering empirical evidence at every step of its process, these fuzzy, qualitative objectives will allow it to come to virtually any conclusion about how a firm should be regulated.

Implementing those broad goals also throws into relief the inevitable tensions among them. Some potential conflicts among the DMU’s objectives for developing codes of conduct are clear from the EU’s experience. For example, one of the things the DMU has already considered is stronger protection for personal data. The EU’s experience with the GDPR shows that data protection is costly and, like any costly requirement, tends to advantage incumbents and thereby discourage new entry. In other words, greater data protections may come at the expense of start-ups or other new entrants and the contribution they would otherwise have made to competition, undermining open choices in the name of data transparency.

Another example of tension is clear from the distinction between Apple’s iOS and Google’s Android ecosystems. They take different approaches to the trade-off between data privacy and flexibility in app development. Apple emphasizes consumer privacy at the expense of allowing developers flexibility in their design choices and offers its products at higher prices. Android devices have fewer consumer-data protections but allow app developers greater freedom to design their apps to satisfy users and are offered at lower prices. The case of Epic Games v. Apple put on display the purportedly pro-competitive arguments the DMU could use to justify shutting down Apple’s “walled garden,” whereas the EU’s GDPR would cut against Google’s open ecosystem with limited consumer protections. Apple’s model encourages consumer trust and adoption of a single, transparent model for app development, while Google’s model encourages app developers to choose from a broader array of design and payment options and allows consumers to choose between the options. No matter how the DMU designs its code of conduct, it will be creating winners and losers at the cost of either “open choices” or “trust and transparency.” As experience teaches, it is simply not possible for an agency with multiple goals to serve them all at the same time. The result is an unreviewable discretion to choose among them ad hoc.

Finally, notice that none of the DMU’s objectives—fair trading, open choices, and trust and transparency—revolves around quantitative evidence; at bottom, these goals are not amenable to the kind of rigor Mr. Coscelli hopes for.

IV. Speed of Proposals

Mr. Coscelli has emphasized the slow pace of competition law matters; while I empathize, surely forcing merging parties to prove a negative and truncating their due process rights is not the answer.

As I mentioned earlier, what Mr. Coscelli deems a more cautious standard of proof is one in which an SMS firm’s proposal to acquire another firm is presumed, or all but presumed, to be anticompetitive and unlawful. That is, the DMU would block the transaction unless the firms can prove their deal would not be anticompetitive—an extremely difficult task. The most self-serving version of the CMA’s proposal would require it to prove only that the merger poses a “realistic prospect” of lessening competition, which is vague, but may in practice be well below a 50% chance. Proving that the merged entity would not harm competition will still require a predictive, forward-looking assessment with inherent uncertainty, but the CMA wants the costs of that uncertainty placed upon firms, rather than upon itself. Given the inherent uncertainty in merger analysis, the CMA’s proposal would impose an unprecedented burden of proof on merging parties.

But it is not only merging parties the CMA would deprive of due process; the DMU’s so-called pro-competitive interventions (PCIs), SMS designations, and code-of-conduct requirements generally would not be stayed pending appeal. Further, an SMS firm could overturn the CMA’s designation only if it could overcome substantial deference to the DMU’s fact-finding. It is difficult to discern, then, the difference between agency decisions and final orders.

The DMU would not have to show or even assert an extraordinary need for immediate relief. This is the opposite of current practice in every jurisdiction with which I am familiar.  Interim orders should take immediate effect only in exceptional circumstances, when there would otherwise be significant and irreversible harm to consumers, not in the ordinary course of agency decision making.

V. Antitrust Is Not Always the Answer

Although one can hardly disagree with Mr. Coscelli’s premise that the digital economy raises new legal questions and practical challenges, it is far from clear that competition law is the answer to them all. Some commentators of late are proposing to use competition law to solve consumer protection and even labor market problems. Unfortunately, this theme also recurs in Mr. Coscelli’s lecture. He discusses concerns with data privacy and fair and reasonable contract terms, but those have long been the province of consumer protection and contract law; a government does not need to step in and regulate all realms of activity by digital firms and call it competition law. Nor is there reason to confine needed protections of data privacy or fair terms of use to SMS firms.

Competition law remedies are sometimes poorly matched to the problems a government is trying to correct. Mr. Coscelli discusses the possibility of strong interventions, such as forcing the separation of a platform from its participation in retail markets; for example, the DMU could order Amazon to spin off its online business selling and shipping its own brand of products. Such powerful remedies can be a sledgehammer; the same is true of forced data sharing or interoperability mandates intended to make it easier for new competitors to enter. For example, if Apple’s App Store is required to host all apps submitted to it in the interest of consumer choice, then Apple loses its ability to screen for security, privacy, and other consumer benefits, as its refusal to deal is its only way to prevent participation in its store. Further, it is not clear consumers want Apple’s store to change; indeed, many prefer Apple products because of their enhanced security.

Forced data sharing would also be problematic; the hiQ v. LinkedIn case in the United States should serve as a cautionary tale. The trial court granted a preliminary injunction forcing LinkedIn to allow hiQ to scrape its users’ profiles while the suit was ongoing. LinkedIn ultimately won the suit because it did not have market power, much less a monopoly, in any relevant market. The court concluded each theory of anticompetitive conduct was implausible, but meanwhile LinkedIn had been forced to allow hiQ to scrape its data for an extended period before the final decision. There is no simple mechanism to “unshare” the data now that LinkedIn has prevailed. This type of case could be common under the CMA proposal because the DMU’s orders will go into immediate effect.

There is potentially much redeeming power in the Digital Regulation Co-operation Forum as Mr. Coscelli described it, but I take a different lesson from this admirable attempt to coordinate across agencies: Perhaps it is time to look beyond antitrust to solve problems that are not based upon market power. As the DRCF highlights, there are multiple agencies with overlapping authority in the digital market space. ICO and Ofcom each have authority to take action against a firm that disseminates fake news or false advertisements. Mr. Coscelli says it would be too cumbersome to take down individual bad actors, but, if so, then the solution is to adopt broader consumer protection rules, not apply an ill-fitting set of competition law rules. For example, the U.K. could change its notice-and-takedown rules to subject platforms to strict liability if they host fake news, even without knowledge that they are doing so, or perhaps only if they are negligent in discharging their obligation to police against it.

Alternatively, the government could shrink the amount of time platforms have to take down information; France gives platforms only about an hour to remove harmful information. That sort of solution does not raise the same prospect of broadly chilling market activity, but still addresses one of the concerns Mr. Coscelli raises with digital markets.

In sum, although Mr. Coscelli is of course correct that competition authorities and governments worldwide are considering whether to adopt broad reforms to their competition laws, the case against such broad reforms remains strong. Instead of relying upon the self-corrective potential of markets, which is admittedly sometimes slower than anyone would like, the CMA assumes markets need regulation until firms prove otherwise. Although clearly well-intentioned, the DMU proposal is in too many respects not up to the task of protecting competition in digital markets; at worst, it will inhibit innovation in digital markets to the point of driving startups and other innovators out of the U.K.


[1] See Digital Markets Taskforce, A New Pro-competition Regime for Digital Markets, at 22, Dec. 2020, available at: https://assets.publishing.service.gov.uk/media/5fce7567e90e07562f98286c/Digital_Taskforce_-_Advice.pdf; Oliver Dowden & Kwasi Kwarteng, A New Pro-competition Regime for Digital Markets, at ¶ 27, July 2021, available at: https://www.gov.uk/government/consultations/a-new-pro-competition-regime-for-digital-markets.

[2] Sam Bowman, Sam Dumitriu & Aria Babu, Conflicting Missions: The Risks of the Digital Markets Unit to Competition and Innovation, Int’l Center for L. & Econ., June 2021, at 13.

There has been a rapid proliferation of proposals in recent years to closely regulate competition among large digital platforms. The European Union’s Digital Markets Act (DMA, which will become effective in 2023) imposes a variety of data-use, interoperability, and non-self-preferencing obligations on digital “gatekeeper” firms. A host of other regulatory schemes are being considered in Australia, France, Germany, and Japan, among other countries (for example, see here). The United Kingdom has established a Digital Markets Unit “to operationalise the future pro-competition regime for digital markets.” Recently introduced U.S. Senate and House bills—although touted as “antitrust reform” legislation—effectively amount to “regulation in disguise” of disfavored business activities by very large companies, including the major digital platforms (see here and here).

Sorely missing from these regulatory proposals is any sense of the fallibility of regulation. Indeed, proponents of new regulatory proposals seem to implicitly assume that government regulation of platforms will enhance welfare, ignoring real-life regulatory costs and regulatory failures (see here, for example). Without evidence, new regulatory initiatives are put forth as superior to long-established, consumer-based antitrust law enforcement.

The hope that new regulatory tools will somehow “solve” digital market competitive “problems” stems from the untested assumption that established consumer welfare-based antitrust enforcement is “not up to the task.” Untested assumptions, however, are an unsound guide to public policy decisions. Rather, in order to optimize welfare, all proposed government interventions in the economy, including regulation and antitrust, should be subject to decision-theoretic analysis that is designed to minimize the sum of error and decision costs (see here). What might such an analysis reveal?
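
One stylized way to write that decision-theoretic criterion (a sketch of the standard error-cost framework, not a formula drawn from the sources cited here): choose the legal rule R that minimizes expected error costs plus decision costs,

    \min_{R}\; \underbrace{p_{I}(R)\,C_{I} + p_{II}(R)\,C_{II}}_{\text{expected error costs}} \;+\; \underbrace{C_{D}(R)}_{\text{decision costs}}

where p_I and p_II are the probabilities that rule R condemns procompetitive conduct (a false positive) or clears anticompetitive conduct (a false negative), C_I and C_II are the welfare losses from each kind of error, and C_D captures the costs of investigation, litigation, and compliance. Antitrust, ex ante regulation, and ongoing oversight can then be compared on how they trade off these terms.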

Wonder no more. In a just-released Mercatus Center Working Paper, Professor Thom Lambert has conducted a decision-theoretic analysis that evaluates the relative merits of U.S. consumer welfare-based antitrust, ex ante regulation, and ongoing agency oversight in addressing the market power of large digital platforms. While explaining that antitrust and its alternatives have their respective costs and benefits, Lambert concludes that antitrust is the welfare-superior approach to dealing with platform competition issues. According to Lambert:

This paper provides a comparative institutional analysis of the leading approaches to addressing the market power of large digital platforms: (1) the traditional US antitrust approach; (2) imposition of ex ante conduct rules such as those in the EU’s Digital Markets Act and several bills recently advanced by the Judiciary Committee of the US House of Representatives; and (3) ongoing agency oversight, exemplified by the UK’s newly established “Digital Markets Unit.” After identifying the advantages and disadvantages of each approach, this paper examines how they might play out in the context of digital platforms. It first examines whether antitrust is too slow and indeterminate to tackle market power concerns arising from digital platforms. It next considers possible error costs resulting from the most prominent proposed conduct rules. It then shows how three features of the agency oversight model—its broad focus, political susceptibility, and perpetual control—render it particularly vulnerable to rent-seeking efforts and agency capture. The paper concludes that antitrust’s downsides (relative indeterminacy and slowness) are likely to be less significant than those of ex ante conduct rules (large error costs resulting from high informational requirements) and ongoing agency oversight (rent-seeking and agency capture).

Lambert’s analysis should be carefully consulted by American legislators and potential rule-makers (including at the Federal Trade Commission) before they institute digital platform regulation. One also hopes that enlightened foreign competition officials will take note of Professor Lambert’s well-reasoned study.

Digital advertising is the economic backbone of the Internet. It allows websites and apps to monetize their userbase without having to charge them fees, while the emergence of targeted ads allows this to be accomplished affordably and with less time wasted.

This advertising is facilitated by intermediaries using the “adtech stack,” through which advertisers and publishers are matched via auctions and ads ultimately are served to relevant users. This intermediation process has advanced enormously over the past three decades. Some now allege, however, that this market is being monopolized by its largest participant: Google.

A lawsuit filed by the State of Texas and nine other states in December 2020 alleges, among other things, that Google has engaged in anticompetitive conduct related to its online display advertising business. Those 10 original state plaintiffs were joined by another four states and the Commonwealth of Puerto Rico in March 2021, while South Carolina and Louisiana have also moved to be added as additional plaintiffs. Google also faces a pending antitrust lawsuit brought by the U.S. Justice Department (DOJ) and 14 states (originally 11) related to the company’s distribution agreements, as well as a separate action by the State of Utah, 35 other states, and the District of Columbia related to its search design.

In recent weeks, it has been reported that the DOJ may join the Texas suit or bring its own similar action against Google in the coming months. If it does, it should learn from the many misconceptions and errors in the Texas complaint that leave it on dubious legal and economic grounds.

Relevant market

The Texas complaint identifies at least five relevant markets within the adtech stack that it alleges Google either is currently monopolizing or is attempting to monopolize:

  1. Publisher ad servers;
  2. Display ad exchanges;
  3. Display ad networks;
  4. Ad-buying tools for large advertisers; and
  5. Ad-buying tools for small advertisers.

None of these constitute an economically relevant product market for antitrust purposes, since each “market” is defined according to how superficially similar the products are in function, not how substitutable they are. Nevertheless, the Texas complaint vaguely echoes how markets were conceived in the “Roadmap” for a case against Google’s advertising business, published last year by the Omidyar Network, which may ultimately influence any future DOJ complaint, as well.

The Omidyar Roadmap narrows the market from media advertising to digital advertising, then to the open supply of display ads, which comprises only 9% of total advertising spending and less than 20% of digital advertising. It then further narrows the defined market to the intermediation of the open supply of display ads. Once the market has been sufficiently narrowed, the Roadmap authors conclude that Google’s market share is “perhaps sufficient to confer market power.”
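
To see how quickly this whittling shrinks the denominator, consider a back-of-the-envelope calculation. In the Python sketch below, only the 9%-of-total and under-20%-of-digital shares come from the Roadmap; the total-spend and firm-revenue figures are purely hypothetical, chosen to illustrate the mechanics.

    # Back-of-the-envelope market-narrowing arithmetic.
    # Per the Roadmap: open display is ~9% of total ad spending and less
    # than 20% of digital advertising. All other numbers are hypothetical.
    total_ad_spend = 700e9                # assumed total ad spend (USD)
    open_display = 0.09 * total_ad_spend  # ~9% of total, per the Roadmap
    digital = open_display / 0.20         # implied lower bound on digital spend
    firm_revenue = 30e9                   # hypothetical firm revenue
    for name, market in [("all advertising", total_ad_spend),
                         ("digital advertising (lower bound)", digital),
                         ("open display", open_display)]:
        print(f"{name}: implied share = {firm_revenue / market:.0%}")

The same hypothetical revenue is a 4% share of all advertising, roughly 10% of digital, and nearly half of open display—which is precisely why the choice of denominator does so much of the work.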

While whittling down the defined market may achieve the purposes of sketching a roadmap to prosecute Google, it also generates a mishmash of more than a dozen relevant markets for digital display and video advertising. In many of these, Google doesn’t have anything approaching market power, while, in some, Facebook is the most dominant player.

The Texas complaint adopts a non-economic approach to market definition. It ignores potential substitutability between different kinds of advertising, both online and offline, which can serve as a competitive constraint on the display advertising market. The complaint considers neither alternative forms of display advertising, such as social media ads, nor alternative forms of advertising, such as search ads or non-digital ads—all of which can and do act as substitutes. It is possible, at the very least, that advertisers who choose to place ads on third-party websites would switch to other forms of advertising if the price of third-party website advertising were above competitive levels. To ignore this possibility, as the Texas complaint does, is to ignore the entire purpose of defining the relevant antitrust market.

Offline advertising vs. online advertising

The fact that offline and online advertising employ distinct processes does not consign them to economically distinct markets. Indeed, online advertising has manifestly drawn advertisers from offline markets, just as previous technological innovations drew advertisers from other pre-existing channels.

Moreover, there is evidence that, in some cases, offline and online advertising are substitute products. For example, economists Avi Goldfarb and Catherine Tucker demonstrate that display advertising pricing is sensitive to the availability of offline alternatives. They conclude:

We believe our studies refute the hypothesis that online and offline advertising markets operate independently and suggest a default position of substitution. Online and offline advertising markets appear to be closely related. That said, it is important not to draw any firm conclusions based on historical behavior.

Display ads vs. search ads

There is perhaps even more reason to doubt that online display advertising constitutes a distinct, economically relevant market from online search advertising.

Although casual and ill-informed claims are often made to the contrary, various forms of targeted online advertising are significant competitors of each other. Bo Xing and Zhanxi Lin report that firms spread their marketing budgets across these different sources of online marketing, and that “search engine optimizers”—firms that help websites to maximize the likelihood of a valuable “top-of-list” organic search placement—attract significant revenue. That is, all of these different channels vie against each other for consumer attention and offer advertisers the ability to target their advertising based on data gleaned from consumers’ interactions with their platforms.

Facebook built a business on par with Google’s thanks in large part to advertising, by taking advantage of users’ more extended engagement with the platform to assess relevance and by enabling richer, more engaging advertising than previously appeared on Google Search. It’s an entirely different model from search, but one that has turned Facebook into a competitive ad platform.

And the market continues to shift. Somewhere between 37% and 56% of product searches start on Amazon, according to one survey, and advertisers have noticed. This is not surprising, given Amazon’s strong ability to match consumers with advertisements, and to do so when and where consumers are more likely to make a purchase.

‘Open’ display advertising vs. ‘owned-and-operated’ display advertising

The United Kingdom’s Competition and Markets Authority (like the Omidyar Roadmap report) has identified two distinct channels of display advertising, which it terms “owned and operated” and “open.” The CMA concludes:

Over half of display expenditure is generated by Facebook, which owns both the Facebook platform and Instagram. YouTube has the second highest share of display advertising and is owned by Google. The open display market, in which advertisers buy inventory from many publishers of smaller scale (for example, newspapers and app providers) comprises around 32% of display expenditure.

The Texas complaint does not directly address the distinction between open and owned and operated, but it does allege anticompetitive conduct by Google with respect to YouTube in a separate “inline video advertising market.” 

The CMA finds that the owned-and-operated channel mostly comprises large social media platforms, which sell their own advertising inventory directly to advertisers or media agencies through self-service interfaces, such as Facebook Ads Manager or Snapchat Ads Manager. In contrast, in the open display channel, publishers such as online newspapers and blogs sell their inventory to advertisers through a “complex chain of intermediaries.” These intermediaries run auctions that match advertisers’ ads to publishers’ inventory of ad space. In both channels, nearly all transactions are run through programmatic technology.

The CMA concludes that advertisers “largely see” the open and the owned-and-operated channels as substitutes. According to the CMA, an advertiser’s choice of one channel over the other is driven by each channel’s ability to meet the key performance metrics the advertising campaign is intended to achieve.

The Omidyar Roadmap argues, instead, that the CMA too narrowly focuses on the perspective of advertisers. The Roadmap authors claim that “most publishers” do not control supply that is “owned and operated.” As a result, they conclude that publishers “such as gardenandgun.com or hotels.com” do not have any owned-and-operated supply and can generate revenues from their supply “only through the Google-dominated adtech stack.” 

But this is simply not true. For example, in addition to inventory in its print media, Garden & Gun’s “Digital Media Kit” indicates that the publisher has several sources of owned-and-operated banner and video supply, including the desktop, mobile, and tablet ads on its website; a “homepage takeover” of its website; branded/sponsored content; its email newsletters; and its social media accounts. Hotels.com, an operating company of Expedia Group, has its own owned-and-operated search inventory, which it sells through its “Travel Ads Sponsored Listing,” as well as an owned-and-operated supply of standard and custom display ads.

Given that both perform the same function and employ similar mechanisms for matching inventory with advertisers, it is unsurprising that both advertisers and publishers appear to consider the owned-and-operated channel and the open channel to be substitutes.

[The following post was adapted from the International Center for Law & Economics White Paper “Polluting Words: Is There a Coasean Case to Regulate Offensive Speech?”]

Words can wound. They can humiliate, anger, insult.

University students—or, at least, a vociferous minority of them—are keen to prevent this injury by suppressing offensive speech. To ensure campuses are safe places, they militate for the cancellation of talks by speakers with opinions they find offensive, often successfully. And they campaign to get offensive professors fired from their jobs.

Off campus, some want this safety to be extended to the online world and, especially, to the users of social media platforms such as Twitter and Facebook. In the United States, this would mean weakening the legal protections of offensive speech provided by Section 230 of the Communications Decency Act (as President Joe Biden has recommended) or by the First Amendment. In the United Kingdom, the Online Safety Bill is now before Parliament. If passed, it will give a U.K. government agency the power to dictate the content-moderation policies of social media platforms.

You don’t need to be a woke university student or grandstanding politician to suspect that society suffers from an overproduction of offensive speech. Basic economics provides a reason to suspect it—the reason being that offense is an external cost of speech. The cost is borne not by the speaker but by his audience. And when people do not bear all the costs of an action, they do it too much.

Jack tweets “women don’t have penises.” This offends Jill, who is someone with a penis who considers herself (or himself, if Jack is right) to be a woman. And it offends many others, who agree with Jill that Jack is indulging in ugly transphobic biological essentialism. Lacking Bill Clinton’s facility for feeling the pain of others, Jack does not bear this cost. So, even if it exceeds whatever benefit Jack gets from saying that women don’t have penises, he will still say it. In other words, he will say it even when doing so makes society altogether worse off.
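
In symbols (a minimal sketch of this externality logic, with hypothetical variables): let b be Jack’s private benefit from tweeting and e the offense cost borne by others. Because Jack compares only b against his own (zero) cost, he tweets whenever

    b > 0, \quad \text{even when} \quad b - e < 0

Any tweet with 0 < b < e therefore lowers total welfare yet still gets sent.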

It shouldn’t be allowed!

That’s what we normally say when actions harm others more than they benefit the agent. The law normally conforms to John Stuart Mill’s “Harm Principle” by restricting activities—such as shooting people or treating your neighbors to death metal at 130 decibels at 2 a.m.—with material external costs. Those who seek legal reform to restrict offensive speech are surely doing no more than following an accepted general principle.

But it’s not so simple. As Ronald Coase pointed out in his famous 1960 article “The Problem of Social Cost,” externalities are a reciprocal problem. If Wayne, our hypothetical death-metal fan, had no neighbors, his playing it at 130 decibels at 2 a.m. would have no external costs. The neighbors’ choice of address is equally a source of the problem. Similarly, if Jill weren’t a Twitter user, she wouldn’t have been offended by Jack’s tweet about who has a penis, since she wouldn’t have encountered it. Externalities are like tangos: they always have at least two perpetrators.

So, the legal question, “who should have a right to what they want?”—Wayne to his loud music or his neighbors to their sleep; Jack to expressing his opinion about women or Jill to not hearing such opinions—cannot be answered by identifying the party who is responsible for the external cost. Both parties are responsible.

How, then, should the question be answered? In the same paper, Coase showed that, in certain circumstances, who the courts favor will make no difference to what ends up happening, and that what ends up happening will be efficient. Suppose the court says that Wayne cannot bother his neighbors with death metal at 2 a.m. If Wayne would be willing to pay $100,000 to keep doing it and his neighbors, combined, would put up with it for anything more than $95,000, then they should be able to arrive at a mutually beneficial deal whereby Wayne pays them something between $95,000 and $100,000 to forgo their right to stop him making his dreadful noise.

That’s not exactly right, however. If negotiating a deal would cost more than $5,000, then no mutually beneficial deal is possible and the rights-trading won’t happen. Transaction costs lower than the difference between the two parties’ valuations are thus the circumstance in which the allocation of legal rights makes no difference to how resources get used, and in which efficiency will be achieved in any event.

But it is an unusual circumstance, especially when the external cost is suffered by many people. When the transaction cost is too high, efficiency does depend on the allocation of rights by courts or legislatures. As Coase argued, when this is so, efficiency will be served if a right to the disputed resource is granted to the party with the higher cost of avoiding the externality.

Given the (implausible) valuations Wayne and his neighbors place on the amount of noise in their environment at 2 a.m., efficiency is served by giving Wayne the right to play his death metal, unless he could soundproof his house, play his music at a much lower volume, or take some other avoidance measure that costs him less than the $95,000 cost to his neighbors.
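
To put the post’s own numbers in symbols (a sketch, not anything drawn from Coase’s paper): write W = $100,000 for Wayne’s willingness to pay to keep playing, N = $95,000 for the neighbors’ combined willingness to accept, t for the cost of negotiating, and A for the cost of Wayne’s cheapest avoidance measure. Then

    W - N = \$100{,}000 - \$95{,}000 = \$5{,}000, \qquad \text{so a bargain is struck only if } t < \$5{,}000.

When t is prohibitive, the initial allocation determines the outcome, and efficiency is served by assigning the right to Wayne unless A < N—that is, unless some avoidance measure costs him less than the $95,000 harm to his neighbors.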

And given that Jack’s tweet about penises offends a large open-ended group of people, with whom Jack therefore cannot negotiate, it looks like they should be given the right not to be offended by Jack’s comment and he should be denied the right to make it. Coasean logic supports the woke censors!          

But, again, it’s not that simple—for two reasons.

The first is that, although those who are offended may be harmed by the offending speech, they needn’t be. Physical pain is usually harmful, but not when experienced by a sexual masochist (in the right circumstances, of course). Similarly, many people take masochistic pleasure in being offended. You can tell they do, because they actively seek out the sources of their suffering. They are genuinely offended, but the offense isn’t harming them, just as the sexual masochist really is in physical pain but isn’t harmed by it. Indeed, real pain and real offense are required, respectively, for the satisfaction of the sexual masochist and the offense masochist.

How many of the offended are offense masochists? Where the offensive speech can be avoided at minimal cost, the answer must be most. Why follow Jordan Peterson on Twitter when you find his opinions offensive unless you enjoy being offended by him? Maybe some are keeping tabs on the dreadful man so that they can better resist him, and they take the pain for that reason rather than for masochistic glee. But how could a legislator or judge know? For all they know, most of those offended by Jordan Peterson are offense masochists and the offense he causes is a positive externality.

The second reason Coasean logic doesn’t support the would-be censors is that social media platforms—the venues of offensive speech that they seek to regulate—are privately owned. To see why this is significant, consider not offensive speech, but an offensive action, such as openly masturbating on a bus.

This is prohibited by law. But it is not the mere act that is illegal. You are allowed to masturbate in the privacy of your bedroom. You may not masturbate on a bus because those who are offended by the sight of it cannot easily avoid it. That’s why it is illegal to express obscenities about Jesus on a billboard erected across the road from a church but not at a meeting of the Angry Atheists Society. The laws that prohibit offensive speech in such circumstances—laws against public nuisance, harassment, public indecency, etc.—are generally efficient. The cost they impose on the offenders is less than the benefits to the offended.

But they are unnecessary when the giving and taking of offense occur within a privately owned place. Suppose no law prohibited masturbating on a bus. It still wouldn’t be allowed on buses owned by a profit-seeker. Few people want to masturbate on buses and most people who ride on buses seek trips that are masturbation-free. A prohibition on masturbation will gain the owner more customers than it loses him. The prohibition is simply another feature of the product offered by the bus company. Nice leather seats, punctual departures, and no wankers (literally). There is no more reason to believe that the bus company’s passenger-conduct rules will be inefficient than that its other product features will be and, therefore, no more reason to legally stipulate them.

The same goes for the content-moderation policies of social media platforms. They are just another product feature offered by a profit-seeking firm. If they repel more customers than they attract (or, more accurately, if they repel more advertising revenue than they attract), they would be inefficient. But then, of course, the company would not adopt them.

Of course, the owner of a social media platform might not be a pure profit-maximiser. For example, he might forgo $10 million in advertising revenue for the sake of banning speakers he personally finds offensive. But the outcome is still efficient. Allowing the speech would have cost more by way of the owner’s unhappiness than the lost advertising would have been worth.  And such powerful feelings in the owner of a platform create an opportunity for competitors who do not share his feelings. They can offer a platform that does not ban the offensive speakers and, if enough people want to hear what they have to say, attract users and the advertising revenue that comes with them. 

If efficiency is your concern, there is no problem for the authorities to solve. Indeed, the idea that the authorities would do a better job of deciding content-moderation rules is not merely absurd, but alarming. Politicians and the bureaucrats who answer to them or are appointed by them would use the power not to promote efficiency, but to promote agendas congenial to them. Jurisprudence in liberal democracies—and, especially, in America—has been suspicious of governmental control of what may be said. Nothing about social media provides good reason to become any less suspicious.

The recent launch of the international Multilateral Pharmaceutical Merger Task Force (MPMTF) is just the latest example of burgeoning cooperative efforts by leading competition agencies to promote convergence in antitrust enforcement. (See my recent paper on the globalization of antitrust, which assesses multinational cooperation and convergence initiatives in greater detail.) In what is a first, the U.S. Federal Trade Commission (FTC), the U.S. Justice Department’s (DOJ) Antitrust Division, offices of state Attorneys General, the European Commission’s Competition Directorate, Canada’s Competition Bureau, and the U.K.’s Competition and Markets Authority (CMA) jointly created the MPMTF in March 2021 “to update their approach to analyzing the effects of pharmaceutical mergers.”

To help inform its analysis, in May 2021 the MPMTF requested public comments concerning the effects of pharmaceutical mergers. The MPMTF sought submissions regarding (among other issues) seven sets of questions:   

  1. What theories of harm should enforcement agencies consider when evaluating pharmaceutical mergers, including theories of harm beyond those currently considered?
  2. What is the full range of a pharmaceutical merger’s effects on innovation? What challenges arise when mergers involve proprietary drug discovery and manufacturing platforms?
  3. In pharmaceutical merger review, how should we consider the risks or effects of conduct such as price-setting practices, reverse payments, and other ways in which pharmaceutical companies respond to or rely on regulatory processes?
  4. How should we approach market definition in pharmaceutical mergers, and how is that implicated by new or evolving theories of harm?
  5. What evidence may be relevant or necessary to assess and, if applicable, challenge a pharmaceutical merger based on any new or expanded theories of harm?
  6. What types of remedies would work in the cases to which those theories are applied?
  7. What factors, such as the scope of assets and characteristics of divestiture buyers, influence the likelihood and success of pharmaceutical divestitures to resolve competitive concerns?

My research assistant Andrew Mercado and I recently submitted comments for the record addressing the questions posed by the MPMTF. We concluded:

Federal merger enforcement in general and FTC pharmaceutical merger enforcement in particular have been effective in promoting competition and consumer welfare. Proposed statutory amendments to strengthen merger enforcement not only are unnecessary, but also would, if enacted, tend to undermine welfare and would thus be poor public policy. A brief analysis of seven questions propounded by the Multilateral Pharmaceutical Merger Task Force suggests that: (a) significant changes in enforcement policies are not warranted; and (b) investigators should employ sound law and economics analysis, taking full account of merger-related efficiencies, when evaluating pharmaceutical mergers. 

While we leave it to interested readers to review our specific comments, this commentary highlights one key issue which we stressed—the importance of giving due weight to efficiencies (and, in particular, dynamic efficiencies) in evaluating pharma mergers. We also note an important critique by FTC Commissioner Christine Wilson of the treatment accorded merger-related efficiencies by U.S. antitrust enforcers.   

Discussion

Innovation in pharmaceuticals and vaccines has immensely significant economic and social consequences, as demonstrated most recently in the handling of the COVID-19 pandemic. As such, it is particularly important that public policy not stand in the way of realizing efficiencies that promote innovation in these markets. This observation applies directly, of course, to pharmaceutical antitrust enforcement, in general, and to pharma merger enforcement, in particular.

Regrettably, however, although merger-enforcement policy has been generally sound, it has somewhat undervalued merger-related efficiencies.

Although U.S. antitrust enforcers give lip service to their serious consideration of efficiencies in merger reviews, the reality appears to be quite different, as documented by Commissioner Wilson in a 2020 speech.

Wilson’s General Merger-Efficiencies Critique: According to Wilson, the combination of finding narrow markets and refusing to weigh out-of-market efficiencies has created major “legal and evidentiary hurdles a defendant must clear when seeking to prove offsetting procompetitive efficiencies.” What’s more, the “courts [have] largely continue[d] to follow the Agencies’ lead in minimizing the importance of efficiencies.” Wilson shows that “the Horizontal Merger Guidelines text and case law appear to set different standards for demonstrating harms and efficiencies,” and argues that this “asymmetric approach has the obvious potential consequence of preventing some procompetitive mergers that increase consumer welfare.” Wilson concludes on a more positive note that this problem can be addressed by having enforcers: (1) treat harms and efficiencies symmetrically; and (2) establish clear and reasonable expectations for what types of efficiency analysis will and will not pass muster.

While our filing with the MPMTF did not discuss Wilson’s general treatment of merger efficiencies, one would hope that the task force will appropriately weigh it in its deliberations. Our filing instead briefly addressed two “informational efficiencies” that may arise in the context of pharmaceutical mergers. These include:

More Efficient Resource Reallocation: The theory of the firm teaches that mergers may be motivated by the underutilization or misallocation of assets, or the opportunity to create welfare-enhancing synergies. In the pharmaceutical industry, these synergies may come from joining complementary research and development programs, combining diverse and specialized expertise that may be leveraged for better, faster drug development and more innovation.

Enhanced R&D: Currently, much of the R&D for large pharmaceutical companies is achieved through partnerships or investment in small biotechnology and research firms specializing in a single type of therapy. Whereas large pharmaceutical companies have expertise in marketing, navigating regulation, and undertaking trials of new drugs, small, research-focused firms can achieve greater advancements in medicine with smaller budgets. Furthermore, changes within firms brought about by a merger may increase innovation.

With increases in intellectual property and proprietary data that come from the merging of two companies, smaller research firms that work with the merged entity may have access to greater pools of information, enhancing the potential for innovation without increasing spending. This change not only raises the efficiency of the research being conducted in these small firms, but also increases the probability of a breakthrough without an increase in risk.

Conclusion

U.S. pharmaceutical merger enforcement has been fairly effective in forestalling anticompetitive combinations while allowing consumer welfare-enhancing transactions to go forward. Policy in this area should remain generally the same. Enforcers should continue to base enforcement decisions on sound economic theory fully supported by case-specific facts. Enforcement agencies could benefit, however, from placing a greater emphasis on efficiencies analysis. In particular, they should treat harms and efficiencies symmetrically (as recommended by Commissioner Wilson), and fully take into account likely resource-reallocation and innovation-related efficiencies.

Questions concerning the role that economic theory should play in policy decisions are nothing new. Milton Friedman famously drew a distinction between “positive” and “normative” economics, notably arguing that theoretical models were valuable despite their unrealistic assumptions. Kenneth Arrow and Gerard Debreu’s highly theoretical work on General Equilibrium Theory is widely acknowledged as one of the most important achievements of modern economics.

But for all their intellectual value and academic merit, the use of models to inform policy decisions is not uncontroversial. There is indeed a long and unfortunate history of influential economic models turning out to be poor depictions (and predictors) of real-world outcomes.

This raises a key question: should policymakers use economic models to inform their decisions and, if so, how? This post uses the economics of externalities to illustrate both the virtues and pitfalls of economic modeling. Throughout economic history, externalities have routinely been cited to support claims of market failure and calls for government intervention. However, as explained below, these fears have frequently failed to withstand empirical scrutiny.

Today, similar models are touted to support government intervention in digital industries. Externalities are notably said to prevent consumers from switching between platforms, allegedly leading to unassailable barriers to entry and deficient venture-capital investment. Unfortunately, as explained below, the models that underpin these fears are highly abstracted and far removed from underlying market realities.

Ultimately, this post argues that, while models provide a powerful way of thinking about the world, naïvely transposing them to real-world settings is misguided. This is not to say that models are useless—quite the contrary. Indeed, “falsified” models can shed powerful light on economic behavior that would otherwise prove hard to understand.

Bees

Fears surrounding economic externalities are as old as modern economics. For example, in the 1950s, economists routinely cited bee pollination as a source of externalities and, ultimately, market failure.

The basic argument was straightforward: Bees and orchards provide each other with positive externalities. Bees cross-pollinate flowers and orchards contain vast amounts of nectar upon which bees feed, thus improving honey yields. Accordingly, several famous economists argued that there was a market failure; bees fly where they please and farmers cannot prevent bees from feeding on their blossoming flowers—allegedly causing underinvestment in both. This led James Meade to conclude:

[T]he apple-farmer provides to the beekeeper some of his factors free of charge. The apple-farmer is paid less than the value of his marginal social net product, and the beekeeper receives more than the value of his marginal social net product.

A finding echoed by Francis Bator:

If, then, apple producers are unable to protect their equity in apple-nectar and markets do not impute to apple blossoms their correct shadow value, profit-maximizing decisions will fail correctly to allocate resources at the margin. There will be failure “by enforcement.” This is what I would call an ownership externality. It is essentially Meade’s “unpaid factor” case.

It took more than 20 years and painstaking research by Steven Cheung to conclusively debunk these assertions. So how did economic agents overcome this “insurmountable” market failure?

The answer, it turns out, was extremely simple. While bees do fly where they please, the relative placement of beehives and orchards has a tremendous impact on both fruit and honey yields. This is partly because bees have a very limited mean foraging range (roughly 2-3km). This left economic agents with ample scope to prevent free-riding.

Using these natural sources of excludability, they built a web of complex agreements that internalize the symbiotic virtues of beehives and fruit orchards. To cite Steven Cheung’s research:

Pollination contracts usually include stipulations regarding the number and strength of the colonies, the rental fee per hive, the time of delivery and removal of hives, the protection of bees from pesticide sprays, and the strategic placing of hives. Apiary lease contracts differ from pollination contracts in two essential aspects. One is, predictably, that the amount of apiary rent seldom depends on the number of colonies, since the farmer is interested only in obtaining the rent per apiary offered by the highest bidder. Second, the amount of apiary rent is not necessarily fixed. Paid mostly in honey, it may vary according to either the current honey yield or the honey yield of the preceding year.

But what of neighboring orchards? Wouldn’t these entail a more complex externality (i.e., could one orchard free-ride on agreements concluded between other orchards and neighboring apiaries)? Apparently not:

Acknowledging the complication, beekeepers and farmers are quick to point out that a social rule, or custom of the orchards, takes the place of explicit contracting: during the pollination period the owner of an orchard either keeps bees himself or hires as many hives per area as are employed in neighboring orchards of the same type. One failing to comply would be rated as a “bad neighbor,” it is said, and could expect a number of inconveniences imposed on him by other orchard owners. This customary matching of hive densities involves the exchange of gifts of the same kind, which apparently entails lower transaction costs than would be incurred under explicit contracting, where farmers would have to negotiate and make money payments to one another for the bee spillover.

Not only did the bee/orchard externality model fail; it failed to account for extremely obvious counter-evidence. Even a rapid flip through the Yellow Pages (or, today, a search on Google) would have revealed a vibrant market for bee pollination. In short, the bee externalities, at least as presented in economic textbooks, were merely an economic “fable.” Unfortunately, they would not be the last.

The Lighthouse

Lighthouses provide another cautionary tale. Indeed, Henry Sidgwick, A.C. Pigou, John Stuart Mill, and Paul Samuelson all cited the externalities involved in the provision of lighthouse services as a source of market failure.

Here, too, the problem was allegedly straightforward. A lighthouse cannot prevent ships from free-riding on its services when they sail by it (i.e., it is mostly impossible to determine whether a ship has paid fees and to turn off the lighthouse if that is not the case). Hence there can be no efficient market for light dues (lighthouses were seen as a “public good”). As Paul Samuelson famously put it:

Take our earlier case of a lighthouse to warn against rocks. Its beam helps everyone in sight. A businessman could not build it for a profit, since he cannot claim a price from each user. This certainly is the kind of activity that governments would naturally undertake.

He added that:

[E]ven if the operators were able—say, by radar reconnaissance—to claim a toll from every nearby user, that fact would not necessarily make it socially optimal for this service to be provided like a private good at a market-determined individual price. Why not? Because it costs society zero extra cost to let one extra ship use the service; hence any ships discouraged from those waters by the requirement to pay a positive price will represent a social economic loss—even if the price charged to all is no more than enough to pay the long-run expenses of the lighthouse.

More than a century after it was first mentioned in economics textbooks, Ronald Coase finally laid the lighthouse myth to rest—rebutting Samuelson’s second claim in the process.

What piece of evidence had eluded economists for all those years? As Coase observed, contemporary economists had somehow overlooked the fact that large parts of the British lighthouse system were privately operated, and had been for centuries:

[T]he right to operate a lighthouse and to levy tolls was granted to individuals by Acts of Parliament. The tolls were collected at the ports by agents (who might act for several lighthouses), who might be private individuals but were commonly customs officials. The toll varied with the lighthouse and ships paid a toll, varying with the size of the vessel, for each lighthouse passed. It was normally a rate per ton (say 1/4d or 1/2d) for each voyage. Later, books were published setting out the lighthouses passed on different voyages and the charges that would be made.

In other words, lighthouses used a simple physical feature to create “excludability” and prevent free-riding. The main reason ships require lighthouses is to avoid hitting rocks when they make their way to a port. By tying port fees and light dues, lighthouse owners—aided by mild government-enforced property rights—could easily earn a return on their investments, thus disproving the lighthouse free-riding myth.

Ultimately, this meant that a large share of the British lighthouse system was privately operated throughout the 19th century, and this share would presumably have been more pronounced if government-run “Trinity House” lighthouses had not crowded out private investment:

The position in 1820 was that there were 24 lighthouses operated by Trinity House and 22 by private individuals or organizations. But many of the Trinity House lighthouses had not been built originally by them but had been acquired by purchase or as the result of the expiration of a lease.

Of course, this system was not perfect. Some ships (notably foreign ones that did not dock in the United Kingdom) might free-ride on this arrangement. It also entailed some level of market power. The ability to charge light dues meant that prices were higher than the “socially optimal” baseline of zero (the marginal cost of providing light is close to zero). Though it is worth noting that tying port fees and light dues might also have decreased double marginalization, to the benefit of sailors.

Samuelson was particularly wary of the market power that went hand in hand with the private provision of public goods, including lighthouses:

Being able to limit a public good’s consumption does not make it a true-blue private good. For what, after all, are the true marginal costs of having one extra family tune in on the program? They are literally zero. Why then prevent any family which would receive positive pleasure from tuning in on the program from doing so?

However, as Coase explained, light fees represented only a tiny fraction of a ship’s costs. In practice, they were thus unlikely to affect market output meaningfully:

[W]hat is the gain which Samuelson sees as coming from this change in the way in which the lighthouse service is financed? It is that some ships which are now discouraged from making a voyage to Britain because of the light dues would in future do so. As it happens, the form of the toll and the exemptions mean that for most ships the number of voyages will not be affected by the fact that light dues are paid. There may be some ships somewhere which are laid up or broken up because of the light dues, but the number cannot be great, if indeed there are any ships in this category.

Samuelson’s critique also falls prey to the Nirvana Fallacy pointed out by Harold Demsetz: markets might not be perfect, but neither is government intervention. Market power and imperfect appropriability are the two (paradoxical) pitfalls of the former; “white elephants,” underinvestment, and lack of competition (and the information it generates) tend to stem from the latter.

Which of these solutions is superior, in each case, is an empirical question that early economists had simply failed to consider—assuming instead that market failure was systematic in markets that present prima facie externalities. In other words, models were taken as gospel without any circumspection about their relevance to real-world settings.

The Tragedy of the Commons

Externalities were also said to undermine the efficient use of “common pool resources,” such as grazing lands, common irrigation systems, and fisheries—resources where one agent’s use diminishes that of others, and where exclusion is either difficult or impossible.

The most famous formulation of this problem is Garrett Hardin’s highly influential (over 47,000 citations) “tragedy of the commons.” Hardin cited the example of multiple herdsmen occupying the same grazing ground:

The rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another … But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.

In more technical terms, each economic agent purportedly exerts an unpriced negative externality on the others, thus leading to the premature depletion of common pool resources. Hardin extended this reasoning to other problems, such as pollution and allegations of global overpopulation.
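
To see this logic concretely, consider a minimal version of the standard commons model (a textbook formalization; Hardin’s essay itself was purely verbal). Suppose \(n\) herdsmen each graze \(g_i\) animals on a shared pasture, with total stock \(G = \sum_j g_j\), a per-animal value \(v(G)\) that falls as the pasture grows more crowded (\(v'(G) < 0\)), and a cost \(c\) per animal. Herdsman \(i\) maximizes

\[
\pi_i = g_i\,v(G) - c\,g_i \quad\Longrightarrow\quad v(G) + g_i\,v'(G) = c,
\]

whereas a planner maximizing total surplus \(G\,v(G) - c\,G\) would require

\[
v(G) + G\,v'(G) = c.
\]

Because \(v' < 0\) and \(g_i < G\), the private condition is satisfied at a larger total herd than the social one: each herdsman counts the crowding cost only on his own animals, leaving the \((G - g_i)\,v'(G)\) term (the externality) unpriced.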

Although Hardin hardly documented any real-world occurrences of this so-called tragedy, his policy prescriptions were unequivocal:

The most important aspect of necessity that we must now recognize, is the necessity of abandoning the commons in breeding. No technical solution can rescue us from the misery of overpopulation. Freedom to breed will bring ruin to all.

As with many other theoretical externalities, empirical scrutiny revealed that these fears were greatly overblown. In her Nobel-winning work, Elinor Ostrom showed that economic agents often found ways to markedly mitigate these potential externalities. For example, mountain villages often implement rules and norms that limit the use of grazing grounds and wooded areas. Likewise, landowners across the world often set up “irrigation communities” that prevent agents from overusing water.

Along similar lines, Julian Morris and I conjecture that informal arrangements and reputational effects might mitigate opportunistic behavior in the standard-essential patent industry.

These bottom-up solutions are certainly not perfect. Many commons institutions fail; Elinor Ostrom, for example, documents several problematic fisheries, groundwater basins, and forests, although it is worth noting that government intervention was sometimes behind these failures. To cite but one example:

Several scholars have documented what occurred when the Government of Nepal passed the “Private Forest Nationalization Act” […]. Whereas the law was officially proclaimed to “protect, manage and conserve the forest for the benefit of the entire country”, it actually disrupted previously established communal control over the local forests. Messerschmidt (1986, p.458) reports what happened immediately after the law came into effect:

Nepalese villagers began freeriding — systematically overexploiting their forest resources on a large scale.

In any case, the question is not so much whether private institutions fail, but whether they do so more often than government intervention, be it through regulation or formal property rights. In short, the “tragedy of the commons” is ultimately an empirical question: what works better in each case—government intervention, propertization, or emergent rules and norms?

More broadly, the key lesson is that it is wrong to blindly apply models while ignoring real-world outcomes. As Elinor Ostrom herself put it:

The intellectual trap in relying entirely on models to provide the foundation for policy analysis is that scholars then presume that they are omniscient observers able to comprehend the essentials of how complex, dynamic systems work by creating stylized descriptions of some aspects of those systems.

Dvorak Keyboards

In 1985, Paul David published an influential paper arguing that market failures undermined competition between the QWERTY and Dvorak keyboard layouts. This version of history then became a dominant narrative in the field of network economics, informing works by Joseph Farrell and Garth Saloner, as well as by Jean Tirole.

The basic claim was that QWERTY users’ reluctance to switch toward the putatively superior Dvorak layout exerted a negative externality on the rest of the ecosystem (and a positive externality on other QWERTY users), thus preventing the adoption of a more efficient standard. As Paul David put it:

Although the initial lead acquired by QWERTY through its association with the Remington was quantitatively very slender, when magnified by expectations it may well have been quite sufficient to guarantee that the industry eventually would lock in to a de facto QWERTY standard. […]

Competition in the absence of perfect futures markets drove the industry prematurely into standardization on the wrong system — where decentralized decision making subsequently has sufficed to hold it.

Unfortunately, many of the above papers paid little to no attention to actual market conditions in the typewriter and keyboard-layout industries. Years later, Stan Liebowitz and Stephen Margolis undertook a detailed analysis of the keyboard-layout market. They almost entirely rejected the notion that QWERTY prevailed despite being the inferior standard:

Yet there are many aspects of the QWERTY-versus-Dvorak fable that do not survive scrutiny. First, the claim that Dvorak is a better keyboard is supported only by evidence that is both scant and suspect. Second, studies in the ergonomics literature find no significant advantage for Dvorak that can be deemed scientifically reliable. Third, the competition among producers of typewriters, out of which the standard emerged, was far more vigorous than is commonly reported. Fourth, there were far more typing contests than just the single Cincinnati contest. These contests provided ample opportunity to demonstrate the superiority of alternative keyboard arrangements. That QWERTY survived significant challenges early in the history of typewriting demonstrates that it is at least among the reasonably fit, even if not the fittest that can be imagined.

In short, there was little to no evidence supporting the view that QWERTY inefficiently prevailed because of network effects. The falsification of this narrative also weakens broader claims that network effects systematically lead to either excess momentum or excess inertia in standardization. Indeed, it is tempting to characterize all network industries with heavily skewed market shares as resulting from market failure. Yet the QWERTY/Dvorak story suggests that such a conclusion would be premature.

Killzones, Zoom, and TikTok

If you are still reading at this point, you might think that contemporary scholars would know better than to base calls for policy intervention on theoretical externalities. Alas, nothing could be further from the truth.

For instance, a recent paper by Sai Kamepalli, Raghuram Rajan and Luigi Zingales conjectures that the interplay between mergers and network externalities discourages the adoption of superior independent platforms:

If techies expect two platforms to merge, they will be reluctant to pay the switching costs and adopt the new platform early on, unless the new platform significantly outperforms the incumbent one. After all, they know that if the entering platform’s technology is a net improvement over the existing technology, it will be adopted by the incumbent after merger, with new features melded with old features so that the techies’ adjustment costs are minimized. Thus, the prospect of a merger will dissuade many techies from trying the new technology.

Although this key behavioral assumption drives the results of the theoretical model, the paper presents no evidence that it holds in real-world settings. Admittedly, the paper does present evidence of reduced venture-capital investments after mergers involving large tech firms. But even on its own terms, this evidence simply does not support the authors’ behavioral assumption.

And this is no isolated example. Over the past couple of years, several scholars have called for more muscular antitrust intervention in networked industries. A common theme is that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in, thus raising barriers to entry for potential rivals (here, here, here).

But there are also countless counterexamples, where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.

Zoom is one of the most salient instances. As I have written previously:

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.

Along similar lines, Geoffrey Manne and Alec Stapp have put forward a multitude of other examples, including the demise of Yahoo, the disruption of early instant-messaging applications and websites, and MySpace’s rapid decline. In all these cases, outcomes do not match the predictions of theoretical models.

More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and its powerful algorithm are the most likely explanations for its success.

While these developments certainly do not disprove network effects theory, they eviscerate the common belief in antitrust circles that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. As in the previous examples, the question is ultimately one of comparing institutions: do markets lead to higher or lower error costs than government intervention? Yet this question is systematically omitted from most policy discussions.

In Conclusion

My argument is not that models are without value. To the contrary, framing problems in economic terms—and simplifying them in ways that make them cognizable—enables scholars and policymakers to better understand where market failures might arise, and how these problems can be anticipated and solved by private actors. In other words, models alone cannot tell us that markets will fail, but they can direct inquiries and help us to understand why firms behave the way they do, and why markets (including digital ones) are organized in a given way.

In that respect, both the theoretical and empirical research cited throughout this post offer valuable insights for today’s policymakers.

For a start, as Ronald Coase argued in what is perhaps his most famous work, externalities (and market failure more generally) are a function of transaction costs. When these are low (relative to the value of a good), market failures are unlikely. This is perhaps clearest in the “Fable of the Bees” example: given bees’ short foraging range, there were ultimately few real-world obstacles to writing contracts that internalized the mutual benefits of bees and orchards.

Perhaps more importantly, economic research sheds light on behavior that might otherwise be seen as anticompetitive. The rules and norms that bind farming/beekeeping communities, as well as users of common pool resources, could easily be analyzed as a cartel by naïve antitrust authorities. Yet externality theory suggests they play a key role in preventing market failure.

Along similar lines, mergers and acquisitions (as well as vertical integration, more generally) can reduce opportunism and other externalities that might otherwise undermine collaboration between firms (here, here and here). And much the same is true for certain types of unilateral behavior. Tying video games to consoles (and pricing the console below cost) can help entrants overcome network externalities that might otherwise shield incumbents. Likewise, Google tying its proprietary apps to the open-source Android operating system arguably enabled it to earn a return on its investments, thus overcoming the externality problem that plagues open-source software.

All of this raises a point that deserves far more attention than it currently receives in policy circles: authorities around the world are seeking to regulate the tech space. Draft legislation has notably been tabled in the United States, the European Union, and the United Kingdom. These draft bills would all make it harder for large tech firms to implement various economic hierarchies, including mergers and certain contractual arrangements.

This is highly paradoxical. If digital markets are indeed plagued by network externalities and high transaction costs, as critics allege, then preventing firms from adopting complex hierarchies—which have traditionally been seen as a way to solve externalities—is just as likely to exacerbate problems. In other words, like the economists of old cited above, today’s policymakers appear to be focusing too heavily on simple models that predict market failure, and far too little on the mechanisms that firms have put in place to thrive within this complex environment.

The bigger picture is that far more circumspection is required when using theoretical models in real-world policy settings. Indeed, as Harold Demsetz famously put it, the purpose of normative economics is not so much to identify market failures, but to help policymakers determine which of several alternative institutions will deliver the best outcomes for consumers:

This nirvana approach differs considerably from a comparative institution approach in which the relevant choice is between alternative real institutional arrangements. In practice, those who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient. Users of the comparative institution approach attempt to assess which alternative real institutional arrangement seems best able to cope with the economic problem […].

Democratic leadership of the House Judiciary Committee has leaked the approach it plans to take to revising U.S. antitrust law and enforcement, with a particular focus on digital platforms.

Broadly speaking, the bills would: raise fees for larger mergers and increase appropriations to the FTC and DOJ; require data portability and interoperability; declare that large platforms can’t own businesses that compete with other businesses that use the platform; effectively ban large platforms from making any acquisitions; and generally declare that large platforms cannot preference their own products or services. 

All of these are ideas that have been discussed before. They are very much in line with the EU’s approach to competition, which places more regulation-like burdens on big businesses, and which is introducing a Digital Markets Act that mirrors the Democrats’ proposals. Some Republicans are reportedly supportive of the proposals, which is surprising, since they would mean giving broad, discretionary powers to antitrust authorities that are controlled by Democrats who take an expansive view of antitrust enforcement as a way to achieve their other social and political goals. The proposals may also prove unpopular with consumers if, for example, they end up prohibiting popular features like the integration of Maps into relevant Google Search results.

The multi-bill approach here suggests that the committee is trying to throw as much at the wall as possible to see what sticks. It may reflect a lack of confidence among the proposers in their ability to get the proposals through wholesale, especially given that Amy Klobuchar’s CALERA bill in the Senate offers an alternative that, while still highly interventionist, does not impose ex ante regulation of the Internet the way these proposals do.

In general, the bills are misguided for three main reasons. 

One, they seek to make digital platforms into narrow conduits for other firms to operate on, ignoring the value platforms create by curating their own services, for example by imposing quality controls on entry (as Apple does on its App Store) or by integrating their services with related products (like, say, Google adding events from Gmail to users’ Google Calendars).

Two, they ignore the procompetitive effects of digital platforms extending into each other’s markets and competing with each other there, in ways that often lead to far more intense competition—and better outcomes for consumers—than if the only firms that could compete with the incumbent platform were small startups.

Three, they ignore the importance of incentives for innovation. Platforms invest in new and better products when they can make money from doing so, and limiting their ability to do that means weakened incentives to innovate. Startups and their founders and investors are driven, in part, by the prospect of being acquired, often by the platforms themselves. Making those acquisitions more difficult, or even impossible, means removing one of the key ways startup founders can exit their firms, and hence one of the key rewards and incentives for starting an innovative new business. 

For more, our “Joint Submission of Antitrust Economists, Legal Scholars, and Practitioners” set out why many of the House Democrats’ assumptions about the state of the economy and antitrust enforcement were mistaken. And my post, “Buck’s “Third Way”: A Different Road to the Same Destination”, argued that House Republicans like Ken Buck were misguided in believing they could support some of the proposals and avoid the massive regulatory oversight that they said they rejected.

Platform Anti-Monopoly Act 

The flagship bill, introduced by Antitrust Subcommittee Chairman David Cicilline (D-R.I.), establishes a definition of “covered platform” used by several of the other bills. The measures would apply to platforms that have at least 50 million U.S.-based monthly active users, that have a market capitalization of more than $600 billion, and that are deemed a “critical trading partner” with the ability to restrict or impede the access that a “dependent business” has to its users or customers.

Cicilline’s bill would bar these covered platforms from being able to promote their own products and services over the products and services of competitors who use the platform. It also defines a number of other practices that would be regarded as discriminatory, including: 

  • Restricting or impeding “dependent businesses” from being able to access the platform or its software on the same terms as the platform’s own lines of business;
  • Conditioning access or status on purchasing other products or services from the platform; 
  • Using user data to support the platform’s own products in ways not extended to competitors; 
  • Restricting the platform’s commercial users from using or accessing data generated on the platform by their own activities or customers;
  • Restricting platform users from uninstalling software pre-installed on the platform;
  • Restricting platform users from providing links to facilitate business off of the platform;
  • Preferencing the platform’s own products or services in search results or rankings;
  • Interfering with how a dependent business prices its products; 
  • Impeding a dependent business’ users from connecting to services or products that compete with those offered by the platform; and
  • Retaliating against users who raise concerns with law enforcement about potential violations of the act.

On a basic level, these would prohibit lots of behavior that is benign and that can improve the quality of digital services for users. Apple pre-installing a Weather app on the iPhone would, for example, run afoul of these rules, and the rules as proposed could prohibit iPhones from coming with pre-installed apps at all. Instead, users would have to manually download each app themselves, if indeed Apple were allowed to include the App Store itself pre-installed on the iPhone, given that it competes with other would-be app stores.

Apart from the obvious reduction in the quality of services and convenience for users that this would involve, this kind of conduct (known as “self-preferencing”) is usually procompetitive. For example, self-preferencing allows platforms to compete with one another by using their strength in one market to enter a different one; Google’s Shopping results in the Search page increase the competition that Amazon faces, because it presents consumers with a convenient alternative when they’re shopping online for products. Similarly, Amazon’s purchase of the video-game streaming service Twitch, and the self-preferencing it does to encourage Amazon customers to use Twitch and support content creators on that platform, strengthens the competition that rivals like YouTube face. 

It also helps innovation, because it gives firms a reason to invest in services that would otherwise be unprofitable for them. Google invests in Android, and gives much of it away for free, because it can bundle Google Search into the OS, and make money from that. If Google could not self-preference Google Search on Android, the open source business model simply wouldn’t work—it wouldn’t be able to make money from Android, and would have to charge for it in other ways that may be less profitable and hence give it less reason to invest in the operating system. 

This behavior can also increase innovation by the competitors of these companies, both by prompting them to improve their products (as, for example, Google Android did with Microsoft’s mobile operating system offerings) and by growing the size of the customer base for products of this kind. For example, video games published by console manufacturers (like Nintendo’s Zelda and Mario games) are often blockbusters that grow the overall size of the user base for the consoles, increasing demand for third-party titles as well.

For more, check out “Against the Vertical Discrimination Presumption” by Geoffrey Manne, and Dirk Auer’s piece “On the Origin of Platforms: An Evolutionary Perspective”.

Ending Platform Monopolies Act 

Sponsored by Rep. Pramila Jayapal (D-Wash.), this bill would make it illegal for covered platforms to control lines of business that pose “irreconcilable conflicts of interest,” enforced through civil litigation powers granted to the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ).

Specifically, the bill targets lines of business that create “a substantial incentive” for the platform to advantage its own products or services over those of competitors that use the platform, or to exclude or disadvantage competing businesses from using the platform. The FTC and DOJ could potentially order that platforms divest lines of business that violate the act.

This bill targets conduct similar to that covered by the previous bill, but involves the forced separation of different lines of business. It also appears to go even further, seemingly implying that companies like Google could not even develop services like Google Maps or Chrome, because their existence would create such “substantial incentives” to self-preference them over the products of their competitors.

Apart from the straightforward loss of innovation and product development this would involve, requiring every tech company to be narrowly focused on a single line of business would substantially entrench Big Tech incumbents, because it would make it impossible for them to extend into adjacent markets to compete with one another. For example, Apple could not develop a search engine to compete with Google under these rules, and Amazon would be forced to sell the video-streaming services that compete with Netflix and YouTube.

For more, check out Geoffrey Manne’s written testimony to the House Antitrust Subcommittee and “Platform Self-Preferencing Can Be Good for Consumers and Even Competitors” by Geoffrey and me. 

Platform Competition and Opportunity Act

Introduced by Rep. Hakeem Jeffries (D-N.Y.), this bill would bar covered platforms from making essentially any acquisitions at all. To be excluded from the ban on acquisitions, the platform would have to present “clear and convincing evidence” that the acquired business does not compete with the platform for any product or service, does not pose a potential competitive threat to the platform, and would not in any way enhance or help maintain the acquiring platform’s market position. 

The two main ways that founders and investors can make a return on a successful startup are to float the company in an IPO or to be acquired by another business. The latter, acquisition, is extremely important: between 2008 and 2019, 90 percent of U.S. startup exits happened through acquisition. In a recent survey, half of current startup executives said they aimed to be acquired. One study found that countries that made it easier for firms to be taken over saw a 40-50 percent increase in VC activity, and that U.S. states that made acquisitions harder saw a 27 percent decrease in VC investment deals.

So this proposal would probably reduce investment in U.S. startups, since it makes it more difficult for them to be acquired, and would therefore reduce innovation as a result. It would also reduce inter-platform competition by banning deals that allow firms to move into new markets, like the acquisition of Beats that helped Apple build a Spotify competitor, or the deals that helped Google, Microsoft, and Amazon build cloud-computing services that all compete with each other. It could also reduce the competition faced by older industries, by preventing tech companies from buying firms that would enable them to move into new markets—like Amazon’s acquisitions of health-care companies that it has used to build a health-care offering. Even Walmart’s acquisition of Jet.com, which it has used to build an Amazon competitor, could have been banned under this law if Walmart had had a higher market cap at the time.

For more, check out Dirk Auer’s piece “Facebook and the Pros and Cons of Ex Post Merger Reviews” and my piece “Cracking down on mergers would leave us all worse off”. 

ACCESS Act

The Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, sponsored by Rep. Mary Gay Scanlon (D-Pa.), would establish data portability and interoperability requirements for platforms. 

Under the terms of the legislation, covered platforms would be required to allow third parties to transfer user data to those users or, with the user’s consent, to a competing business. It also would require platforms to facilitate compatible and interoperable communications with competing businesses. The law would direct the FTC to establish technical committees to promulgate standards for portability and interoperability.

Data portability and interoperability involve trade-offs in terms of security and usability, and overseeing them can be extremely costly and difficult. In security terms, interoperability requirements prevent companies from using closed systems to protect users from hostile third parties. Mandatory openness means increasing—sometimes, substantially so—the risk of data breaches and leaks. In practice, that could mean users’ private messages or photos being leaked more frequently, or activity on a social-media page that one user considers to be “their” private data (but that “belongs” to another user under the terms of use) being exported and publicized as such.

It can also make digital services buggier and less reliable, by requiring that they be built in a more “open” way that may be more prone to unanticipated software mismatches. A good example is Windows vs. iOS: Windows is far more interoperable with third-party software than iOS is, but tends to be less stable as a result, and users often prefer the closed, stable system.

Interoperability requirements also entail ongoing regulatory oversight, to make sure data is being provided to third parties reliably. It’s difficult to build an app around another company’s data without assurance that the data will be available when users want it. For a requirement as broad as this bill’s, that could mean setting up quite a large new de facto regulator. 

In the UK, Open Banking (an interoperability requirement imposed on British retail banks) has suffered from significant service outages, and targets a level of uptime that many developers complain is too low for them to build products around. Nor has Open Banking yet led to any obvious competition benefits.

For more, check out Gus Hurwitz’s piece “Portable Social Media Aren’t Like Portable Phone Numbers” and my piece “Why Data Interoperability Is Harder Than It Looks: The Open Banking Experience”.

Merger Filing Fee Modernization Act

This bill, which mirrors language in the Endless Frontier Act recently passed by the U.S. Senate, would significantly raise filing fees for the largest mergers. Sponsored by Rep. Joe Neguse (D-Colo.), it would replace the current cap of $280,000 for mergers valued at more than $500 million with a new schedule: $2.25 million for mergers valued at more than $5 billion; $800,000 for those valued at between $2 billion and $5 billion; and $400,000 for those between $1 billion and $2 billion.

Smaller mergers would actually see their filing fees cut: from $280,000 to $250,000 for those valued at between $500 million and $1 billion; from $125,000 to $100,000 for those between $161.5 million and $500 million; and from $45,000 to $30,000 for those valued at less than $161.5 million.
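
For concreteness, the proposed schedule amounts to a simple tiered lookup. Here is a minimal sketch in Python (the function name and structure are mine; the thresholds and fees are those described above):

    def proposed_filing_fee(deal_value: float) -> int:
        """Return the proposed merger filing fee (in USD) for a given deal value."""
        # (value threshold, fee) pairs, checked from the highest tier down
        tiers = [
            (5_000_000_000, 2_250_000),  # more than $5 billion
            (2_000_000_000, 800_000),    # $2 billion to $5 billion
            (1_000_000_000, 400_000),    # $1 billion to $2 billion
            (500_000_000, 250_000),      # $500 million to $1 billion
            (161_500_000, 100_000),      # $161.5 million to $500 million
        ]
        for threshold, fee in tiers:
            if deal_value > threshold:
                return fee
        return 30_000                    # less than $161.5 million

    # For example, a $3 billion merger would pay $800,000:
    # proposed_filing_fee(3_000_000_000) -> 800000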

In addition, the bill would appropriate $418 million to the FTC and $252 million to the DOJ’s Antitrust Division for Fiscal Year 2022. Most people in the antitrust world are generally supportive of more funding for the FTC and DOJ, although whether this is actually a good thing depends on how the money is spent.

It’s hard to object if the money goes toward deepening the agencies’ capacities and knowledge: hiring and retaining higher-quality staff with salaries that are more competitive with those offered by the private sector, and funding greater efforts to study the effects of the antitrust laws and past cases on the economy. If it instead goes toward broadening the agencies’ activities, enabling them to pursue a more aggressive enforcement agenda and to enforce whatever of the above proposals make it into law, then it could be very harmful.

For more, check out my post “Buck’s “Third Way”: A Different Road to the Same Destination” and Thom Lambert’s post “Bad Blood at the FTC”.