The Federal Trade Commission (FTC) wants to review in advance all future acquisitions by Facebook parent Meta Platforms. According to a Sept. 2 Bloomberg report, in connection with its challenge to Meta’s acquisition of fitness-app maker Within Unlimited, the commission “has asked its in-house court to force both Meta and [Meta CEO Mark] Zuckerberg to seek approval from the FTC before engaging in any future deals.”
This latest FTC decision is inherently hyper-regulatory, anti-free market, and contrary to the rule of law. It also is profoundly anti-consumer.
Like other large digital-platform companies, Meta has conferred enormous benefits on consumers (net of payments to platforms) that are not reflected in gross domestic product statistics. In a December 2019 Harvard Business Review article, Erik Brynjolfsson and Avinash Collis reported research finding that Facebook:
…generates a median consumer surplus of about $500 per person annually in the United States, and at least that much for users in Europe. … [I]ncluding the consumer surplus value of just one digital good—Facebook—in GDP would have added an average of 0.11 percentage points a year to U.S. GDP growth from 2004 through 2017.
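The magnitudes here are easy to sanity-check. The following back-of-envelope sketch (in Python, with a purely hypothetical U.S. user count chosen for illustration; the article supplies only the $500 median surplus and the 0.11-percentage-point figure) shows how a per-user surplus scales to an aggregate number, and how a small boost to annual growth compounds over 2004–2017:

```python
# Back-of-envelope illustration; the user count is an assumption, not a
# figure from Brynjolfsson & Collis.
median_surplus_per_user = 500        # USD per year (from the article)
us_users = 240_000_000               # hypothetical U.S. user count

# Aggregate annual consumer surplus implied by the median estimate.
aggregate_surplus = median_surplus_per_user * us_users
print(f"Aggregate annual surplus: ${aggregate_surplus / 1e9:.0f}B")

# Compounding an extra 0.11 percentage points of growth over the
# 14 years from 2004 through 2017.
years = 14
cumulative_extra = (1 + 0.0011) ** years - 1
print(f"Cumulative extra output: {cumulative_extra:.2%}")
```

Even under these rough assumptions, the exercise shows why omitting free digital goods from GDP statistics materially understates measured growth.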
The acquisition of complementary digital assets—like the popular fitness app produced by Within—enables Meta to continually enhance the quality of its offerings to consumers and thereby expand consumer surplus. It reflects the benefits of economic specialization, as specialized assets are made available to enhance the quality of Meta’s offerings. Requiring Meta to develop complementary assets in-house, when that is less efficient than a targeted acquisition, denies these benefits.
Furthermore, in a recent editorial lambasting the FTC’s challenge to a Meta-Within merger as lacking a principled basis, the Wall Street Journal pointed out that the challenge also removes incentive for venture-capital investments in promising startups, a result at odds with free markets and innovation:
Venture capitalists often fund startups on the hope that they will be bought by larger companies. [FTC Chair Lina] Khan is setting down the marker that the FTC can block acquisitions merely to prevent big companies from getting bigger, even if they don’t reduce competition or harm consumers. This will chill investment and innovation, and it deserves a burial in court.
This is bad enough. But the commission’s proposal to require blanket preapprovals of all future Meta mergers (including tiny acquisitions well under regulatory pre-merger reporting thresholds) greatly compounds the harm from its latest ill-advised merger challenge. Indeed, it poses a blatant challenge to free-market principles and the rule of law, in at least three ways.
It substitutes heavy-handed ex ante regulatory approval for a reliance on competition, with antitrust stepping in only in those limited instances where the hard facts indicate a transaction will be anticompetitive. Indeed, in one key sense, it is worse than traditional economic regulation. Empowering FTC staff to carry out case-by-case reviews of all proposed acquisitions inevitably will generate arbitrary decision-making, perhaps based on a variety of factors unrelated to traditional consumer-welfare-based antitrust. FTC leadership has abandoned sole reliance on consumer welfare as the touchstone of antitrust analysis, paving the way for potentially abusive and arbitrary enforcement decisions. By contrast, statutorily based economic regulation, whatever its flaws, at least imposes specific standards that staff must apply when rendering regulatory determinations.
By abandoning sole reliance on consumer-welfare analysis, FTC reviews of proposed Meta acquisitions may be expected to undermine the major welfare benefits that Meta has previously bestowed upon consumers. Given the untrammeled nature of these reviews, Meta may be expected to grow more cautious in proposing transactions that could enhance its consumer offerings. What's more, the general anti-merger bias of current FTC leadership would undoubtedly prompt the commission to reject some, if not many, procompetitive transactions that would confer new benefits on consumers.
Instituting a system of case-by-case assessment and approval of transactions is antithetical to the normal American reliance on free markets, featuring limited government intervention in market transactions based on specific statutory guidance. The proposed review system for Meta lacks statutory warrant and (as noted above) could promote arbitrary decision-making. As such, it seriously flouts the rule of law and threatens substantial economic harm (sadly consistent with other ill-considered initiatives by FTC Chair Khan, see here and here).
In sum, internet-based industries, and the big digital platforms, have thrived under a system of American technological freedom characterized as “permissionless innovation.” Under this system, the American people—consumers and producers—have been the winners.
The FTC’s efforts to micromanage future business decision-making by Meta, prompted by the challenge to a routine merger, would seriously harm welfare. To the extent that the FTC views such novel interventionism as a bureaucratic template applicable to other disfavored large companies, the American public would be the big-time loser.
The wave of populist antitrust that has been embraced by regulators and legislators in the United States, United Kingdom, European Union, and other jurisdictions rests on the assumption that currently dominant platforms occupy entrenched positions that only government intervention can dislodge. Following this view, Facebook will forever dominate social networking, Amazon will forever dominate cloud computing, Uber and Lyft will forever dominate ridesharing, and Amazon and Netflix will forever dominate streaming. This assumption of platform invincibility is so well-established that some policymakers advocate significant interventions without making any meaningful inquiry into whether a seemingly dominant platform actually exercises market power.
Yet this assumption is not supported by historical patterns in platform markets. It is true that network effects drive platform markets toward “winner-take-most” outcomes. But the winner is often toppled quickly and without much warning. There is no shortage of examples.
In 2007, a columnist in The Guardian observed that “it may already be too late for competitors to dislodge MySpace” and quoted an economist as authority for the proposition that “MySpace is well on the way to becoming … a natural monopoly.” About one year later, Facebook had overtaken the MySpace “monopoly” in the social-networking market. Similarly, it was once thought that BlackBerry would forever dominate the mobile-communications-device market, eBay would always dominate the e-commerce market, and AOL would always dominate the internet-service-portal market (a market that no longer even exists). The list of digital dinosaurs could go on.
All those tech leaders were challenged by entrants and descended into irrelevance (or reduced relevance, in eBay’s case). This occurred through the force of competition, not government intervention.
Why This Time Is Probably Not Different
Given this long line of market precedents, current legislative and regulatory efforts to “restore” competition through extensive intervention in digital-platform markets require that we assume that “this time is different.” Just as that slogan has been repeatedly rebutted in the financial markets, so too is it likely to be rebutted in platform markets.
There is already supporting evidence.
In the cloud market, Amazon’s AWS now faces vigorous competition from Microsoft Azure and Google Cloud. In the streaming market, Amazon and Netflix face stiff competition from Disney+ and Apple TV+, just to name a few well-resourced rivals. In the social-networking market, Facebook now competes head-to-head with TikTok and seems to be losing. The market power once commonly attributed to leading food-delivery platforms such as Grubhub, UberEats, and DoorDash is implausible after persistent losses in most cases, and the continuous entry of new services into a rich variety of local and product-market niches.
Those who have advocated antitrust intervention on a fast-track schedule may remain unconvinced by these inconvenient facts. But the market is not.
Investors have already recognized Netflix’s vulnerability to competition, as reflected by a 35% fall in its stock price on April 20 and a decline of more than 60% over the past 12 months. Meta, Facebook’s parent, also experienced a reappraisal, falling more than 26% on Feb. 3 and more than 35% in the past 12 months. Uber, the pioneer of the ridesharing market, has declined by almost 50% over the past 12 months, while Lyft, its principal rival, has lost more than 60% of its value. These price freefalls suggest that antitrust populists may be pursuing solutions to a problem that market forces are already starting to address.
The Forgotten Curse of the Incumbent
For some commentators, the sharp downturn in the fortunes of the so-called “Big Tech” firms would not come as a surprise.
It has long been observed by some scholars and courts that a dominant firm “carries the seeds of its own destruction”—a phrase used by then-professor and later-Judge Richard Posner, writing in the University of Chicago Law Review in 1971. The reason: a dominant firm is liable to exhibit high prices, mediocre quality, or lackluster innovation, which then invites entry by more adept challengers. However, this view has been dismissed as outdated in digital-platform markets, where incumbents are purportedly protected by network effects and switching costs that make it difficult for entrants to attract users. Depending on the set of assumptions selected by an economic modeler, each contingency is equally plausible in theory.
The plunging values of leading platforms supply real-world evidence that favors the self-correction hypothesis. It is often overlooked that network effects can work in both directions, resulting in a precipitous fall from market leader to laggard. Once users start abandoning a dominant platform for a new competitor, network effects operating in reverse can cause a “run for the exits” that leaves the leader with little time to recover. Just ask Nokia, the world’s leading (and seemingly unbeatable) smartphone brand until the Apple iPhone came along.
Market self-correction inherently outperforms regulatory correction: it operates far more rapidly and relies on consumer preferences to reallocate market leadership—a result perfectly consistent with antitrust’s mission to preserve “competition on the merits.” In contrast, policymakers can misdiagnose the competitive effects of business practices; are susceptible to the influence of private interests (especially those that are unable to compete on the merits); and often mispredict the market’s future trajectory. For Exhibit A, see the protracted antitrust litigation brought by the U.S. Justice Department against IBM, which began in 1969 and ended with withdrawal of the suit in 1982. Given the launch of the Apple II in 1977, the IBM PC in 1981, and the entry of multiple “PC clones,” the forces of creative destruction swiftly displaced IBM from market leadership in the computing industry.
Regulators and legislators around the world have emphasized the urgency of taking dramatic action to correct claimed market failures in digital environments, casting aside prudential concerns over the consequences if any such failure proves to be illusory or temporary.
But the costs of regulatory failure can be significant and long-lasting. Markets must operate under unnecessary compliance burdens that are difficult to modify. Regulators’ enforcement resources are diverted, and businesses are barred from adopting practices that would benefit consumers. In particular, proposed breakup remedies advocated by some policymakers would undermine the scale economies that have enabled platforms to push down prices, an important consideration in a time of accelerating inflation.
The high concentration levels and certain business practices in digital-platform markets certainly raise important concerns as a matter of antitrust (as well as privacy, intellectual property, and other bodies of) law. These concerns merit scrutiny and may necessitate appropriately targeted interventions. Yet, any policy steps should be anchored in the factually grounded analysis that has characterized decades of regulatory and judicial action to implement the antitrust laws with appropriate care. Abandoning this nuanced framework for a blunt approach based on reflexive assumptions of market power is likely to undermine, rather than promote, the public interest in competitive markets.
Sens. Amy Klobuchar (D-Minn.) and Chuck Grassley (R-Iowa)—cosponsors of the American Innovation and Choice Online Act (AICOA), which seeks to “rein in” tech companies like Apple, Google, Meta, and Amazon—contend that “everyone acknowledges the problems posed by dominant online platforms.”
In their framing, it is simply an acknowledged fact that U.S. antitrust law has not kept pace with developments in the digital sector, allowing a handful of Big Tech firms to exploit consumers and foreclose competitors from the market. To address the issue, the senators’ bill would bar “covered platforms” from engaging in a raft of conduct, including self-preferencing, tying, and limiting interoperability with competitors’ products.
That’s what makes the open letter to Congress published late last month by the usually staid American Bar Association’s (ABA) Antitrust Law Section so eye-opening. The letter is nothing short of a searing critique of the legislation, which the section finds to be poorly written, vague, and departing from established antitrust-law principles.
The ABA, of course, has a reputation as an independent, highly professional, and heterogeneous group. The antitrust section’s membership includes not only in-house corporate counsel, but lawyers from nonprofits, consulting firms, federal and state agencies, judges, and legal academics. Given this context, the comments must be read as a high-level judgment that recent legislative and regulatory efforts to “discipline” tech fall outside the legal mainstream and would come at the cost of established antitrust principles, legal precedent, transparency, sound economic analysis, and ultimately consumer welfare.
The Antitrust Section’s Comments
As the ABA Antitrust Law Section observes:
The Section has long supported the evolution of antitrust law to keep pace with evolving circumstances, economic theory, and empirical evidence. Here, however, the Section is concerned that the Bill, as written, departs in some respects from accepted principles of competition law and in so doing risks causing unpredicted and unintended consequences.
Broadly speaking, the section’s criticisms fall into two interrelated categories. The first relates to deviations from antitrust orthodoxy and the principles that guide enforcement. The second is a critique of the AICOA’s overly broad language and ambiguous terminology.
Departing from established antitrust-law principles
Substantively, the overarching concern expressed by the ABA Antitrust Law Section is that AICOA departs from the traditional role of antitrust law, which is to protect the competitive process rather than to favor some competitors at the expense of others. Indeed, the section’s open letter observes that, of the 10 categories of prohibited conduct spelled out in the legislation, only three require a “material harm to competition.”
Take, for instance, the prohibition on “discriminatory” conduct. As it stands, the bill’s language does not require a showing of harm to the competitive process; it instead appears to enshrine a freestanding prohibition of discrimination. The bill also targets tying practices already prohibited by U.S. antitrust law, while similarly eschewing the traditionally required showings of market power and harm to the competitive process. The same can be said, mutatis mutandis, for “self-preferencing” and the “unfair” treatment of competitors.
The problem, the section’s letter to Congress argues, is not only that this widens the teleological chasm between AICOA and the overarching goals and principles of antitrust law, but that it can also easily lead to harmful unintended consequences. For instance, as the ABA Antitrust Law Section previously observed in comments to the Australian Competition and Consumer Commission, a prohibition of pricing discrimination can limit the extent of discounting generally. Similarly, self-preferencing conduct on a platform can be welfare-enhancing, while forced interoperability—which is also contemplated by AICOA—can increase prices for consumers and dampen incentives to innovate. Furthermore, some of these blanket prohibitions are arguably at loggerheads with established antitrust doctrine, such as Trinko, which established that even monopolists are generally free to decide with whom they will deal.
Arguably, the reason why the Klobuchar-Grassley bill can so seamlessly exclude or redraw such a central element of antitrust law as competitive harm is because it deliberately chooses to ignore another, preceding one. Namely, the bill omits market power as a requirement for a finding of infringement or for the legislation’s equally crucial designation as a “covered platform.” It instead prescribes size metrics—number of users, market capitalization—to define which platforms are subject to intervention. Such definitions cast an overly wide net that can potentially capture consumer-facing conduct that doesn’t have the potential to harm competition at all.
It is precisely for this reason that existing antitrust laws are tethered to market power—i.e., because it long has been recognized that only companies with market power can harm competition. As John B. Kirkwood of Seattle University School of Law has written:
Market power’s pivotal role is clear. … This concept is central to antitrust because it distinguishes firms that can harm competition and consumers from those that cannot.
In response to the above, the ABA Antitrust Law Section (reasonably) urges Congress explicitly to require an effects-based showing of harm to the competitive process as a prerequisite for all 10 of the infringements contemplated in the AICOA. This also means disclaiming generalized prohibitions of “discrimination” and of “unfairness” and replacing blanket prohibitions (such as the one for self-preferencing) with measured case-by-case analysis.
Opaque language for opaque ideas
Another underlying issue is that the Klobuchar-Grassley bill is shot through with indeterminate language and fuzzy concepts that have no clear limiting principles. For instance, in order either to establish liability or to mount a successful defense to an alleged violation, the bill relies heavily on inherently amorphous terms such as “fairness,” “preferencing,” and “materiality,” or the “intrinsic” value of a product. But as the ABA Antitrust Law Section letter rightly observes, these concepts are not defined in the bill, nor by existing antitrust case law. As such, they inject variability and indeterminacy into how the legislation would be administered.
Moreover, it is also unclear how some incommensurable concepts will be weighed against each other. For example, how would concerns about safety and security be weighed against prohibitions on self-preferencing or requirements for interoperability? What is a “core function,” and when would it be deemed sufficiently “enhanced” or “maintained”—the showings the bill requires to exempt certain otherwise prohibited conduct? The lack of linguistic and conceptual clarity not only undermines legal certainty, but also invites judicial second-guessing of business decisions, something against which the U.S. Supreme Court has long warned.
Finally, the bill’s choice of language and recent amendments to its terminology seem to confirm the dynamic discussed in the previous section. Most notably, the latest version of AICOA replaces earlier language invoking “harm to the competitive process” with “material harm to competition.” As the ABA Antitrust Law Section observes, this “suggests a shift away from protecting the competitive process towards protecting individual competitors.” Indeed, “material harm to competition” deviates from established categories such as “undue restraint of trade” or “substantial lessening of competition,” which have a clear focus on the competitive process. As a result, it is not unreasonable to expect that the new terminology might be interpreted as meaning that the actionable standard is material harm to competitors.
In its letter, the antitrust section urges Congress not only to define more clearly the novel terminology used in the bill, but also to do so in a manner consistent with existing antitrust law. Indeed:
The Section further recommends that these definitions direct attention to analysis consistent with antitrust principles: effects-based inquiries concerned with harm to the competitive process, not merely harm to particular competitors
The AICOA is a poorly written, misguided, and rushed piece of regulation that contravenes both basic antitrust-law principles and mainstream economic insights in the pursuit of a pre-established populist political goal: punishing the success of tech companies. If left uncorrected by Congress, these mistakes could have potentially far-reaching consequences for innovation in digital markets and for consumer welfare. They could also set antitrust law on a regressive course back toward a policy of picking winners and losers.
The following post was authored by counsel with White & Case LLP, who represented the International Center for Law & Economics (ICLE) in an amicus brief filed on behalf of itself and 12 distinguished law & economics scholars with the U.S. Court of Appeals for the D.C. Circuit in support of affirming U.S. District Court Judge James Boasberg’s dismissal of various States Attorneys General’s antitrust case brought against Facebook (now, Meta Platforms).
The States brought an antitrust complaint against Facebook alleging that various conduct violated Section 2 of the Sherman Act. The ICLE brief addresses the States’ allegations that Facebook refused to provide access to an input, a set of application-programming interfaces that developers use in order to access Facebook’s network of social-media users (Facebook’s Platform), in order to prevent those third parties from using that access to export Facebook data to competitors or to compete directly with Facebook.
Judge Boasberg dismissed the States’ case without leave to amend, relying on recent Supreme Court precedent on refusals to deal, including Trinko and Linkline. The Supreme Court strongly disfavors forced sharing, as shown by its decisions recognizing very few exceptions to the ability of firms to deal with whom they choose. Most notably, Aspen Skiing Co. v. Aspen Highlands Skiing Corp. is a 1985 decision recognizing an exception to the general rule that firms may deal with whom they want—an exception that Trinko limited, though did not expressly overturn, in 2004. The States appealed to the D.C. Circuit on several grounds, including by relying on Aspen Skiing and advocating a broader view of refusals to deal than current jurisprudence dictates.
ICLE’s brief addresses whether the District Court was correct to dismiss the States’ allegations that Facebook’s Platform policies violated Section 2 of the Sherman Act in light of the voluminous body of precedent and scholarship concerning refusals to deal. ICLE’s brief argues that Judge Boasberg’s opinion is consistent with economic and legal principles, allowing firms to choose with whom they deal. Furthermore, the States’ allegations did not make out a claim under Aspen Skiing, which sets forth extremely narrow circumstances that may constitute an improper refusal to deal. Finally, ICLE takes issue with the States’ attempt to create an amorphous legal standard for refusals to deal or otherwise shoehorn their allegations into a “conditional dealing” framework.
Economic Actors Should Be Able to Choose Their Business Partners
ICLE’s basic premise is that firms in a free-market system should be able to choose their business partners. Forcing firms to enter into certain business relationships can have the effect of stifling innovation, because the firm getting the benefit of the forced dealing then lacks the incentive to create its own inputs. On the other side of the forced dealing, the owner of the input would have reduced incentives to continue to innovate, invest, or create intellectual property. Forced dealing, therefore, has an adverse effect on the fundamental nature of competition. As the Supreme Court stated in Trinko, this compelled sharing creates “tension with the underlying purpose of antitrust law, since it may lessen the incentive for the monopolist, the rival, or both to invest in those economically beneficial facilities.”
Courts Are Ill-Equipped to Regulate the Kind of Forced Sharing Advocated by the States
ICLE also notes the inherent difficulties of a court’s assessing forced access and the substantial risk of error that could create harm to competition. This risk, ICLE notes, is not merely theoretical; it would require the court to scrutinize intricate details of a dynamic industry and determine which decisions are lawful and which are not. Take the facts of New York v. Facebook: more than 10 million apps and websites had access to Platform during the relevant period, and the States took issue with only seven instances in which Facebook had allegedly improperly prevented access to Platform. Assessing whether conduct would create efficiency in one circumstance versus another is challenging at best and always risky. As Frank Easterbrook wrote: “Anyone who thinks that judges would be good at detecting the few situations in which cooperation would do more good than harm has not studied the history of antitrust.”
Even assuming a court has rightly identified a potentially anticompetitive refusal to deal, it would then be put to the task of remedying it. But imposing a remedy, and in effect assuming the role of a regulator, is similarly complicated. This is particularly true in dynamic, quickly evolving industries, such as social media. This concern is highlighted by the broad injunction the States seek in this case: to “enjoin and restrain [Facebook] from continuing to engage in any anticompetitive conduct and from adopting in the future any practice, plan, program, or device having a similar purpose or effect to the anticompetitive actions set forth above.” Such a remedy would impose conditions on Facebook’s dealings with competitors for years to come—regardless of how the industry evolves.
Courts Should Not Expand Refusal-to-Deal Analysis Beyond the Narrow Circumstances of Aspen Skiing
In light of the principles above, the Supreme Court, as stated in Trinko, “ha[s] been very cautious in recognizing [refusal-to-deal] exceptions, because of the uncertain virtue of forced sharing and the difficulty of identifying and remedying anticompetitive conduct by a single firm.” Various scholars (e.g., Carlton, Meese, Lopatka, Epstein) have analyzed Aspen Skiing consistently with Trinko as, at most, “at or near the boundary of § 2 liability.”
So is a refusal-to-deal claim ever viable? ICLE argues that refusal-to-deal claims have been rare (rightly so) and, at most, should go forward only under the circumstances delineated in Aspen Skiing. ICLE sets forth the framework from the 10th U.S. Circuit Court of Appeals’ decision in Novell, which makes clear that “the monopolist’s conduct must be irrational but for its anticompetitive effect.”
First, “there must be a preexisting voluntary and presumably profitable course of dealing between the monopolist and rival.”
Second, “the monopolist’s discontinuation of the preexisting course of dealing must suggest a willingness to forsake short-term profits to achieve an anti-competitive end.”
Finally, even if these two factors are present, the court recognized that “firms routinely sacrifice short-term profits for lots of legitimate reasons that enhance consumer welfare.”
The States seek to broaden Aspen Skiing in order to sinisterize Facebook’s Platform policies, but the facts do not fit. The States do not plead an about-face with respect to Facebook’s Platform policies; the States do not allege that Facebook’s changes to its policies were irrational (particularly in light of the dynamic industry in which Facebook operates); and the States do not allege that Facebook engaged in less efficient behavior with the goal of hurting rivals. Indeed, Facebook changed its policies to retain users—which is essential to its business model (and therefore, rational).
The States try to evade these requirements by arguing for a looser refusal-to-deal standard (and by trying to shoehorn the conduct as “conditional dealing”)—but as ICLE explains, allowing such a claim to go forward would fly in the face of the economic and policy goals upheld by the current jurisprudence.
The District Court was correct to dismiss the States’ allegations concerning Facebook’s Platform policies. Allowing a claim against Facebook to progress under the circumstances alleged in the States’ complaint would violate the principle that a firm, even one that is a monopolist, should not be held liable for refusing to deal with a certain business partner. The District Court’s decision is in line with key economic principles concerning refusals to deal and consistent with the Supreme Court’s decision in Aspen Skiing. Aspen Skiing is properly read to severely limit the circumstances giving rise to a refusal-to-deal claim, or else risk adverse effects such as reduced incentive to innovate.
Amici Scholars Signing on to the Brief
(The ICLE brief presents the views of the individual signers listed below. Institutions are listed for identification purposes only.)
Henry Butler Henry G. Manne Chair in Law and Economics and Executive Director of the Law & Economics Center, Scalia Law School
Daniel Lyons Professor of Law, Boston College Law School
Richard A. Epstein Laurence A. Tisch Professor of Law at NYU School of Law, the Peter and Kirsten Bedford Senior Lecturer at the Hoover Institution, and the James Parker Hall Distinguished Service Professor Emeritus, University of Chicago
Geoffrey A. Manne President and Founder, International Center for Law & Economics, Distinguished Fellow Northwestern University Center on Law, Business & Economics
Thomas Hazlett H.H. Macaulay Endowed Professor of Economics and Director of the Information Economy Project, Clemson University
Alan J. Meese Ball Professor of Law, Co-Director, Center for the Study of Law and Markets, William & Mary Law School
Justin (Gus) Hurwitz Professor of Law and Menard Director of the Nebraska Governance and Technology Center, University of Nebraska College of Law
Paul H. Rubin Samuel Candler Dobbs Professor of Economics Emeritus, Emory University
Jonathan Klick Charles A. Heimbold, Jr. Professor of Law, University of Pennsylvania Carey School of Law; Erasmus Chair of Empirical Legal Studies, Erasmus University Rotterdam
Michael Sykuta Associate Professor of Economics and Executive Director of Financial Research Institute, University of Missouri Division of Applied Social Sciences
Thomas A. Lambert Wall Chair in Corporate Law and Governance, University of Missouri Law School
John Yun Associate Professor of Law and Deputy Executive Director of the Global Antitrust Institute, Scalia Law School
A raft of progressive scholars in recent years have argued that antitrust law remains blind to the emergence of so-called “attention markets,” in which firms compete by converting user attention into advertising revenue. This blindness, the scholars argue, has caused antitrust enforcers to clear harmful mergers in these industries.
It certainly appears the argument is gaining increased attention, for lack of a better word, with sympathetic policymakers. In a recent call for comments regarding their joint merger guidelines, the U.S. Justice Department (DOJ) and Federal Trade Commission (FTC) ask:
How should the guidelines analyze mergers involving competition for attention? How should relevant markets be defined? What types of harms should the guidelines consider?
Unfortunately, the recent scholarly inquiries into attention markets remain inadequate for policymaking purposes. For example, while many progressives focus specifically on antitrust authorities’ decisions to clear Facebook’s 2012 acquisition of Instagram and 2014 purchase of WhatsApp, they largely tend to ignore the competitive constraints Facebook now faces from TikTok (here and here).
When firms that compete for attention seek to merge, authorities need to infer whether the deal will lead to an “attention monopoly” (if the merging firms are the only, or primary, market competitors for some consumers’ attention) or whether other “attention goods” sufficiently constrain the merged entity. Put another way, the challenge is not just in determining which firms compete for attention, but in evaluating how strongly each constrains the others.
As this piece explains, recent attention-market scholarship fails to offer objective, let alone quantifiable, criteria that might enable authorities to identify firms that are unique competitors for user attention. These limitations should counsel policymakers to proceed with increased rigor when they analyze anticompetitive effects.
The Shaky Foundations of Attention Markets Theory
Advocates for more vigorous antitrust intervention have raised (at least) three normative arguments that pertain to attention markets and merger enforcement.
First, because they compete for attention, firms may be more competitively related than they seem at first sight. It is sometimes said that these firms are nascent competitors.
Second, the scholars argue that not all firms competing for attention should automatically be included in the same relevant market.
Finally, scholars argue that enforcers should adopt policy tools to measure market power in these attention markets—e.g., by applying a SSNIC test (“small but significant non-transitory increase in cost”), rather than a SSNIP test (“small but significant non-transitory increase in price”).
There are some contradictions among these three claims. On the one hand, proponents advocate a broad notion of competition for attention, which would ensure that firms are seen as competitively related and thus boost the prospects that antitrust interventions targeting them will be successful. When the shoe is on the other foot, however, proponents fail to follow their own logic to its natural conclusion: they underplay the competitive constraints necessarily imposed by the wide range of other targets for consumer attention. In other words, progressive scholars are keen to ensure the concept is not mobilized to draw broader market definitions than is currently the case:
This “massive market” narrative rests on an obvious fallacy. Proponents argue that the relevant market includes “all substitutable sources of attention depletion,” so the market is “enormous.”
Faced with this apparent contradiction, scholars retort that the circle can be squared by deploying new analytical tools that measure competition for attention, such as the so-called SSNIC test. But do these tools actually resolve the contradiction? It would appear, instead, that they merely enable enforcers to selectively mobilize the attention-market concept in ways that fit their preferences. Consider the following description of the SSNIC test, by John Newman:
But if the focus is on the zero-price barter exchange, the SSNIP test requires modification. In such cases, the “SSNIC” (Small but Significant and Non-transitory Increase in Cost) test can replace the SSNIP. Instead of asking whether a hypothetical monopolist would increase prices, the analyst should ask whether the monopolist would likely increase attention costs. The relevant cost increases can take the form of more time or space being devoted to advertisements, or the imposition of more distracting advertisements. Alternatively, one might ask whether the hypothetical monopolist would likely impose an “SSNDQ” (Small but Significant and Non-Transitory Decrease in Quality). The latter framing should generally be avoided, however, for reasons discussed below in the context of anticompetitive effects. Regardless of framing, however, the core question is what would happen if the ratio between desired content to advertising load were to shift.
The A-SSNIP would posit a hypothetical monopolist who adds a 5-second advertisement before the mobile map, and leaves it there for a year. If consumers accepted the delay, instead of switching to streaming video or other attentional options, then the market is correctly defined and calculation of market shares would be in order.
The key problem is this: consumer switching among platforms is consistent both with competition and with monopoly power. In fact, consumers are more likely to switch to other goods when they are faced with a monopoly. Perhaps more importantly, consumers can and do switch to a whole range of idiosyncratic goods. Absent some quantifiable metric, it is simply impossible to tell which of these alternatives are significant competitors.
None of this is new, of course. Antitrust scholars have spent decades wrestling with similar issues in connection with the price-related SSNIP test. The upshot of those debates is that the SSNIP test does not measure whether price increases cause users to switch. Instead, it examines whether firms can profitably raise prices above the competitive baseline. Properly understood, this nuance renders proposed SSNIC and SSNDQ tests (“small but significant non-transitory decrease in quality”) unworkable.
First and foremost, proponents wrongly presume to know how firms would choose to exercise their market power, rendering the resulting tests unfit for policymaking purposes. This mistake largely stems from the conflation of price levels and price structures in two-sided markets. In a two-sided market, the price level refers to the cumulative price charged to both sides of a platform. Conversely, the price structure refers to the allocation of prices among users on both sides of a platform (i.e., how much users on each side contribute to the costs of the platform). This is important because, as Jean-Charles Rochet and Jean Tirole show in the work that contributed to Tirole’s Nobel prize, changes to either the price level or the price structure affect economic output in two-sided markets.
This has powerful ramifications for antitrust policy in attention markets. To be analytically useful, SSNIC and SSNDQ tests would have to alter the price level while holding the price structure equal. This is the opposite of what attention-market theory advocates are calling for. Indeed, increasing ad loads or decreasing the quality of services provided by a platform, while holding ad prices constant, evidently alters platforms’ chosen price structure.
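The distinction between price level and price structure can be made concrete with a deliberately stylized model. Everything below—the linear demand curves, the cross-side coefficients, the prices—is a made-up illustrative assumption, not an estimate for any real platform. The point is only that two price structures with the same price level can generate different participation (output), so a test that perturbs the structure while ignoring the level is not measuring what a SSNIP measures.

```python
def participation(p_users, p_ads):
    """Equilibrium participation on a toy two-sided platform.

    Hypothetical linear demands with cross-side effects: users dislike
    ad exposure, while advertisers value a larger user base.
        n_users = 100 - 10*p_users - 0.2*n_ads
        n_ads   =  50 -  5*p_ads   + 0.5*n_users
    Solving the two equations simultaneously yields the closed form below.
    """
    n_users = (90 - 10 * p_users + p_ads) / 1.1
    n_ads = 50 - 5 * p_ads + 0.5 * n_users
    return n_users, n_ads

# Two price structures with the SAME price level (p_users + p_ads == 6):
balanced = participation(3, 3)  # both sides pay
skewed = participation(0, 6)    # "free" to users; advertisers pay everything
```

In this toy economy, the “free-to-users” structure yields substantially higher total participation than the balanced one at an identical price level, which is exactly why altering ad loads or quality on one side cannot be treated as a structure-neutral price perturbation.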
This matters. Even if the proposed tests were properly implemented (which would be difficult: it is unclear what a 5% quality degradation would look like), the tests would likely lead to false negatives, as they force firms to depart from their chosen (and, thus, presumably profit-maximizing) price structure/price level combinations.
Consider the following illustration: to a first approximation, increasing the quantity of ads served on YouTube would presumably decrease Google’s revenues, as expanding output in the ad market would push ad prices down (note that the test becomes even more absurd if ad revenues are held constant). In short, scholars fail to recognize that the consumer side of these markets is intrinsically related to the ad side. Each side affects the other in ways that prevent policymakers from using single-sided ad-load increases or quality decreases as an independent variable.
This leads to a second, more fundamental, flaw. To be analytically useful, these increased ad loads and quality deteriorations would have to be applied from the competitive baseline. Unfortunately, it is not obvious what this baseline looks like in two-sided markets.
Economic theory tells us that, in regular markets, goods are sold at marginal cost under perfect competition. However, there is no such shortcut in two-sided markets. As David Evans and Richard Schmalensee aptly summarize:
An increase in marginal cost on one side does not necessarily result in an increase in price on that side relative to price on the other. More generally, the relationship between price and cost is complex, and the simple formulas that have been derived for single-sided markets do not apply.
In other words, while economic theory suggests perfect competition among multi-sided platforms should result in zero economic profits, it does not say what the allocation of prices will look like in this scenario. There is thus no clearly defined competitive baseline upon which to apply increased ad loads or quality degradations. And this makes the SSNIC and SSNDQ tests unsuitable.
In short, the theoretical foundations necessary to apply the equivalent of a SSNIP test on the “free” side of two-sided platforms are largely absent (or exceedingly hard to apply in practice). Calls to implement SSNIC and SSNDQ tests thus greatly overestimate the current state of the art, as well as decision-makers’ ability to solve intractable economic conundrums. The upshot is that, while proposals to apply the SSNIP test to attention markets may have the trappings of economic rigor, the resemblance is superficial. As things stand, these tests fail to ascertain whether given firms are in competition, and in what market.
The Bait and Switch: Qualitative Indicia
These problems with the new quantitative metrics likely explain why proponents of tougher enforcement in attention markets often fall back upon qualitative indicia to resolve market-definition issues. As John Newman writes:
Courts, including the U.S. Supreme Court, have long employed practical indicia as a flexible, workable means of defining relevant markets. This approach considers real-world factors: products’ functional characteristics, the presence or absence of substantial price differences between products, whether companies strategically consider and respond to each other’s competitive conduct, and evidence that industry participants or analysts themselves identify a grouping of activity as a discrete sphere of competition. …The SSNIC test may sometimes be massaged enough to work in attention markets, but practical indicia will often—perhaps usually—be the preferable method.
Unfortunately, far from resolving the problems associated with measuring market power in digital markets (and of defining relevant markets in antitrust proceedings), this proposed solution would merely focus investigations on subjective and discretionary factors.
This can be easily understood by looking at the FTC’s Facebook complaint regarding its purchases of WhatsApp and Instagram. The complaint argues that Facebook—a “social networking service,” in the eyes of the FTC—was not interchangeable with either mobile-messaging services or online-video services. To support this conclusion, it cites a series of superficial differences. For instance, the FTC argues that online-video services “are not used primarily to communicate with friends, family, and other personal connections,” while mobile-messaging services “do not feature a shared social space in which users can interact, and do not rely upon a social graph that supports users in making connections and sharing experiences with friends and family.”
This is a poor way to delineate relevant markets. It wrongly portrays competitive constraints as a binary question, rather than a matter of degree. Pointing to the functional differences that exist among rival services mostly fails to resolve this question of degree. It also likely explains why advocates of tougher enforcement have often decried the use of qualitative indicia when the shoe is on the other foot—e.g., when authorities concluded that Facebook did not, in fact, compete with Instagram because their services were functionally different.
A second, and related, problem with the use of qualitative indicia is that they are, almost by definition, arbitrary. Take two services that may or may not be competitors, such as Instagram and TikTok. The two share some similarities, as well as many differences. For instance, while both services enable users to share and engage with video content, they differ significantly in the way this content is displayed. Unfortunately, absent quantitative evidence, it is simply impossible to tell whether, and to what extent, the similarities outweigh the differences.
There is significant risk that qualitative indicia may lead to arbitrary enforcement, where markets are artificially narrowed by pointing to superficial differences among firms, and where competitive constraints are overemphasized by pointing to consumer switching.
The Way Forward
The difficulties discussed above should serve as a good reminder that market definition is but a means to an end.
As William Landes, Richard Posner, and Louis Kaplow have all observed (here and here), market definition is merely a proxy for market power, which in turn enables policymakers to infer whether consumer harm (the underlying question to be answered) is likely in a given case.
Given the difficulties inherent in properly defining markets, policymakers should redouble their efforts to precisely measure both potential barriers to entry (the obstacles that may lead to market power) and anticompetitive effects (the potentially undesirable effects of market power), under a case-by-case analysis that looks at both sides of a platform.
Unfortunately, this is not how the FTC has proceeded in recent cases. The FTC’s Facebook complaint, to cite but one example, merely assumes the existence of network effects (a potential barrier to entry) with no effort to quantify their magnitude. Likewise, the agency’s assessment of consumer harm is just two pages long and includes superficial conclusions that appear plucked from thin air:
The benefits to users of additional competition include some or all of the following: additional innovation … ; quality improvements … ; and/or consumer choice … . In addition, by monopolizing the U.S. market for personal social networking, Facebook also harmed, and continues to harm, competition for the sale of advertising in the United States.
Not one of these assertions is based on anything that could remotely be construed as empirical or even anecdotal evidence. Instead, the FTC’s claims are presented as self-evident. Given the difficulties surrounding market definition in digital markets, this superficial analysis of anticompetitive harm is simply untenable.
In short, discussions around attention markets emphasize the important role of case-by-case analysis underpinned by the consumer welfare standard. Indeed, the fact that some of antitrust enforcement’s usual benchmarks are unreliable in digital markets reinforces the conclusion that an empirically grounded analysis of barriers to entry and actual anticompetitive effects must remain the cornerstones of sound antitrust policy. Or, put differently, uncertainty surrounding certain aspects of a case is no excuse for arbitrary speculation. Instead, authorities must meet such uncertainty with an even more vigilant commitment to thoroughness.
During the exceptional rise in stock-market valuations from March 2020 to January 2022, equity investors and antitrust regulators implicitly agreed on one point: that so-called “Big Tech” firms enjoyed unbeatable competitive advantages as gatekeepers with largely unmitigated power over the digital ecosystem.
Investors bid up the value of tech stocks to exceptional levels, anticipating no competitive threat to incumbent platforms. Antitrust enforcers and some legislators appear to share the same underlying assumption. In their case, it has spurred advocacy of dramatic remedies—including breaking up the Big Tech platforms—as necessary interventions to restore competition.
Other voices in the antitrust community have been more circumspect. A key reason is the theory of contestable markets, developed in the 1980s by the late William Baumol and other economists, which holds that even extremely large market shares are at best a potential indicator of market power. To illustrate, consider the extreme case of a market occupied by a single firm. Intuitively, the firm would appear to have unqualified pricing power. Not so fast, say contestable market theorists. Suppose entry costs into the market are low and consumers can easily move to other providers. This means that the apparent monopolist will act as if the market is populated by other competitors. The takeaway: market share alone cannot demonstrate market power without evidence of sufficiently strong barriers to market entry.
While regulators and some legislators have overlooked this inconvenient principle, it appears the market has not. To illustrate, look no further than the Feb. 3 $230 billion crash in the market value of Meta Platforms—parent company of Facebook, Instagram, and WhatsApp, among other services.
In its antitrust suit against Meta, the Federal Trade Commission (FTC) has argued that Meta’s Facebook service enjoys a social-networking monopoly, a contention that the judge in the case initially rejected in June 2021 as so lacking in factual support that the suit was provisionally dismissed. The judge’s ruling (which he withdrew last month, allowing the suit to go forward after the FTC submitted a revised complaint) has been portrayed as evidence for the view that existing antitrust law sets overly demanding evidentiary standards that unfairly shelter corporate defendants.
Yet, the record-setting single-day loss in Meta’s value suggests the evidentiary standard is set just about right and the judge’s skepticism was fully warranted. Consider one of the principal reasons behind Meta’s plunge in value: its service had suffered substantial losses of users to TikTok, a formidable rival in a social-networking market in which the FTC claims that Facebook faces no serious competition. The market begs to differ. In light of the obvious competitive threat posed by TikTok and other services, investors reassessed Facebook’s staying power, which was then reflected in its owner Meta’s downgraded stock price.
Just as the investment bubble that had supported the stock market’s case for Meta has popped, so too must the regulatory bubble that had supported the FTC’s antitrust case against it. Investors’ reevaluation rebuts the FTC’s strained market definition that had implausibly excluded TikTok as a competitor.
Even more fundamentally, the market’s assessment shows that Facebook’s users face nominal switching costs—in which case, its leadership position is contestable and the Facebook “monopoly” is not much of a monopoly. While this conclusion might seem surprising, Facebook’s vulnerability is hardly exceptional: Nokia, Blackberry, AOL, Yahoo, Netscape, and PalmPilot illustrate how often seemingly unbeatable tech leaders have been toppled with remarkable speed.
The unraveling of the FTC’s case against what would appear to be an obviously dominant platform should be a wake-up call for those policymakers who have embraced populist antitrust’s view that existing evidentiary requirements, which minimize the risk of “false positive” findings of anticompetitive conduct, should be set aside as an inconvenient obstacle to regulatory and judicial intervention.
None of this should be interpreted to deny that concentration levels in certain digital markets raise significant antitrust concerns that merit close scrutiny. In particular, regulators have overlooked how some leading platforms have devalued intellectual-property rights in a manner that distorts technology and content markets by advantaging firms that operate integrated product and service ecosystems while disadvantaging firms that specialize in supplying the technological and creative inputs on which those ecosystems rely.
The fundamental point is that potential risks to competition posed by any leading platform’s business practices can be assessed through rigorous fact-based application of the existing toolkit of antitrust analysis. This is critical to evaluate whether a given firm likely occupies a transitory, rather than durable, leadership position. The plunge in Meta’s stock in response to a revealed competitive threat illustrates the perils of discarding that surgical toolkit in favor of a blunt “big is bad” principle.
Contrary to what has become an increasingly common narrative in policy discussions and political commentary, the existing framework of antitrust analysis was not designed by scholars strategically acting to protect “big business.” Rather, this framework was designed and refined by scholars dedicated to rationalizing, through the rigorous application of economic principles, an incoherent body of case law that had often harmed consumers by shielding incumbents against threats posed by more efficient rivals. The legal shortcuts being pursued by antitrust populists to detour around appropriately demanding evidentiary requirements are writing a “back to the future” script that threatens to return antitrust law to that unfortunate predicament.
Antitrust policymakers around the world have taken a page out of the Silicon Valley playbook and decided to “move fast and break things.” While the slogan is certainly catchy, applying it to the policymaking world is unfortunate and, ultimately, threatens to harm consumers.
Several antitrust authorities in recent months have announced their intention to block (or, at least, challenge) a spate of mergers that, under normal circumstances, would warrant only limited scrutiny and face little prospect of outright prohibition. This is notably the case of several vertical mergers, as well as mergers between firms that are only potential competitors (sometimes framed as “killer acquisitions”). These include Facebook’s acquisition of Giphy (U.K.); Nvidia’s ARM Ltd. deal (U.S., EU, and U.K.); and Illumina’s purchase of GRAIL (EU). It is also the case for horizontal mergers in non-concentrated markets, such as WarnerMedia’s proposed merger with Discovery, which has faced significant political backlash.
Some of these deals fail even to implicate “traditional” merger-notification thresholds. Facebook’s purchase of Giphy was only notifiable because of the U.K. Competition and Markets Authority’s broad interpretation of its “share of supply test” (which eschews traditional revenue thresholds). Likewise, the European Commission relied on a highly controversial interpretation of the so-called “Article 22 referral” procedure in order to review Illumina’s GRAIL purchase.
Some have praised these interventions, claiming antitrust authorities should take their chances and prosecute high-profile deals. It certainly appears that authorities are pressing their luck because they face few penalties for wrongful prosecutions. Overly aggressive merger enforcement might even reinforce their bargaining position in subsequent cases. In other words, enforcers risk imposing social costs on firms and consumers because their incentives to prosecute mergers are not aligned with those of society as a whole.
None of this should come as a surprise to anyone who has been following this space. As my ICLE colleagues and I have been arguing for quite a while, weakening the guardrails that surround merger-review proceedings opens the door to arbitrary interventions that are difficult (though certainly not impossible) to remediate before courts.
A Simplified Model of Legal Disputes
The negotiations that surround merger-review proceedings involve firms and authorities bargaining in the shadow of potential litigation. Whether and which concessions are made will depend chiefly on what the parties believe will be the outcome of litigation. If firms think courts will safeguard their merger, they will offer authorities few potential remedies. Conversely, if authorities believe courts will support their decision to block a merger, they are unlikely to accept concessions that stop short of the parties withdrawing their deal.
This simplified model suggests that neither enforcers nor merging parties are in a position to “exploit” the merger-review process, so long as courts review decisions effectively. Under this model, overly aggressive enforcement would merely lead to defeat in court (and, expecting this, merging parties would offer few concessions to authorities).
Put differently, court proceedings are both a dispute-resolution mechanism and a source of rulemaking. The result is that only marginal cases should lead to actual disputes. Most harmful mergers will be deterred, and clearly beneficial ones will be cleared rapidly. So long as courts apply the consumer welfare standard consistently, firms’ merger decisions—along with any rulings or remedies—all should primarily serve consumers’ interests.
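The bargaining dynamic above can be captured in a back-of-the-envelope expected-value sketch. Every number here—the win probability, the deal value, the legal costs, and the assumption that a deal’s commercial value decays linearly to zero over a “viability horizon”—is purely illustrative, not drawn from any real case.

```python
def expected_litigation_value(p_win, deal_value, legal_costs,
                              delay_years, viability_horizon):
    """Expected payoff (to the merging firms) of challenging a prohibition.

    Illustrative assumption: the deal's commercial value decays linearly
    with court delay, reaching zero after `viability_horizon` years.
    Values are in arbitrary units (think millions of dollars).
    """
    surviving_value = deal_value * max(0.0, 1 - delay_years / viability_horizon)
    return p_win * surviving_value - legal_costs

# A firm with a strong case (70% chance of winning) and a $1B deal that
# stays viable for at most two years:
ev_fast = expected_litigation_value(0.7, 1_000, legal_costs=50,
                                    delay_years=0, viability_horizon=2)
ev_slow = expected_litigation_value(0.7, 1_000, legal_costs=50,
                                    delay_years=2, viability_horizon=2)
```

Under these assumptions, litigation is clearly worthwhile with an immediate ruling but a losing proposition after a two-year delay, even though the firm’s odds in court never change—so the rational firm abandons the deal or accepts remedies it would otherwise refuse.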
At least, that is the theory. But there are factors that can serve to undermine this efficient outcome. In the field of merger control, this is notably the case with court delays that prevent parties from effectively challenging merger decisions.
While delays between when a legal claim is filed and a judgment is rendered aren’t always detrimental (as Richard Posner observes, speed can be costly), it is essential that these delays be accounted for in any subsequent damages and penalties. Parties that prevail in court might otherwise only obtain reparations that are below the market rate, reducing the incentive to seek judicial review in the first place.
The problem is particularly acute when it comes to merger reviews. Merger challenges might lead the parties to abandon a deal because they estimate the transaction will no longer be commercially viable by the time courts have decided the matter. This is a problem, insofar as neither U.S. nor EU antitrust law generally requires authorities to compensate parties for wrongful merger decisions. For example, courts in the EU have declined to fully compensate aggrieved companies (e.g., the CFI in Schneider) and have set an exceedingly high bar for such claims to succeed at all.
In short, parties have little incentive to challenge merger decisions if the only positive outcome is for their deals to be posthumously sanctified. This diminished incentive to litigate may generate too few cases to produce potentially helpful precedent for future merging firms. Ultimately, the balance of bargaining power is tilted in favor of competition authorities.
Some Data on Mergers
While not necessarily dispositive, there is qualitative evidence to suggest that parties often drop their deals when authorities either block them (as in the EU) or challenge them in court (in the United States).
U.S. merging parties nearly always either reach a settlement or scrap their deal when their merger is challenged. There were 43 transactions challenged by either the U.S. Justice Department (15) or the Federal Trade Commission (28) in 2020. Of these, 15 were abandoned and almost all the remaining cases led to settlements.
The EU picture is similar. The European Commission blocks, on average, about one merger every year (30 over the last 31 years). Most in-depth investigations are settled in exchange for remedies offered by the merging firms (141 out of 239). While the EU does not publish detailed statistics concerning abandoned mergers, it is rare for firms to appeal merger-prohibition decisions. The European Court of Justice’s database lists only six such appeals over a similar timespan. The vast majority of blocked mergers are scrapped, with the parties declining to appeal.
This proclivity to abandon mergers is surprising, given firms’ high success rate in court. Of the six merger-annulment appeals in the ECJ’s database (CK Hutchison Holdings Ltd.’s acquisition of Telefónica Europe Plc; Ryanair’s acquisition of a controlling stake in Aer Lingus; a proposed merger between Deutsche Börse and NYSE Euronext; Tetra Laval’s takeover of Sidel Group; a merger between Schneider Electric SA and Legrand SA; and Airtours’ acquisition of First Choice), merging firms prevailed in four. While precise numbers are harder to come by in the United States, it is also reportedly rare for U.S. antitrust enforcers to win merger-challenge cases.
One explanation is that only marginal cases ever make it to court. In other words, firms with weak cases are, all else being equal, less likely to litigate. However, that is unlikely to explain all abandoned deals.
There are documented cases in which it was clearly delays, rather than self-selection, that caused firms to scrap planned mergers. In the EU’s Airtours proceedings, the merging parties dropped their transaction even though they went on to prevail in court (and First Choice, the target firm, was acquired by another rival). This is inconsistent with the notion that proposed mergers are abandoned only when the parties have a weak case to challenge (the Commission’s decision was widely seen as controversial).
Antitrust policymakers also generally acknowledge that mergers are often time-sensitive. That’s why merger rules on both sides of the Atlantic tend to impose strict timelines within which antitrust authorities must review deals.
In the end, if self-selection based on case strength were the only criterion merging firms used in deciding whether to appeal a merger challenge, one would not expect an equilibrium in which firms prevail in more than two-thirds of cases. If firms anticipated that a successful court case would preserve a multibillion-dollar merger, the relatively small burden of legal fees should not dissuade them from litigating, even if their chance of success were tiny. We would expect to see more firms losing in court.
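This self-selection argument can be checked with a toy simulation (all parameters are made-up assumptions): if firms litigate whenever the expected value of doing so is positive, the ratio of litigation costs to deal value determines how strongly cases self-select, and hence the win rate we should observe.

```python
import random

def observed_win_rate(cost_to_value_ratio, n_cases=100_000, seed=42):
    """Simulate which challenged mergers get litigated and how often firms win.

    Each potential case has a win probability p drawn uniformly at random.
    A rational firm litigates only when p * deal_value > litigation_cost,
    i.e., when p exceeds the cost-to-value ratio. Returns the win rate
    among the cases that are actually litigated.
    """
    rng = random.Random(seed)
    wins = trials = 0
    for _ in range(n_cases):
        p = rng.random()                 # case strength
        if p > cost_to_value_ratio:      # firm chooses to litigate
            trials += 1
            wins += rng.random() < p     # court outcome
    return wins / trials

# Legal fees that are trivial next to a multibillion-dollar deal:
# nearly everyone litigates, so win rates should sit near 50%.
rate_cheap = observed_win_rate(cost_to_value_ratio=0.01)

# An effective cost that is a large fraction of the deal's value
# (e.g., delay that kills the deal): only strong cases are brought.
rate_costly = observed_win_rate(cost_to_value_ratio=0.6)
```

In this sketch, a sustained win rate above two-thirds only emerges when the effective cost of litigating is a large share of the deal’s value—consistent with court delay, rather than legal fees, doing the screening.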
The upshot is that antitrust challenges and prohibition decisions likely cause at least some firms to abandon their deals because court proceedings are not seen as an effective remedy. This perception, in turn, reinforces authorities’ bargaining position and thus encourages firms to offer excessive remedies in hopes of staving off lengthy litigation.
A general rule of policymaking is that rules should seek to ensure that agents internalize both the positive and negative effects of their decisions. This, in turn, should ensure that they behave efficiently.
In the field of merger control, those incentives are misaligned. Given the prevailing political climate on both sides of the Atlantic, challenging large corporate acquisitions likely generates important political capital for antitrust authorities. But wrongful merger prohibitions are unlikely to elicit the kinds of judicial rebukes that would compel authorities to proceed more carefully.
Put differently, in the field of antitrust law, court proceedings ought to serve as a guardrail to ensure that enforcement decisions ultimately benefit consumers. When that shield is removed, it is no longer a given that authorities—who, in theory, act as agents of society—will act in the best interests of that society, rather than maximize their own preferences.
Ideally, we should ensure that antitrust authorities bear the social costs of faulty decisions, by compensating, at least, the direct victims of their actions (i.e., the merging firms). However, this would likely require new legislation to that effect, as there currently are too many obstacles to such cases. It is thus unlikely to represent a short-term solution.
In the meantime, regulatory restraint appears to be the only realistic solution. Or, one might say, authorities should “move carefully and avoid breaking stuff.”
What is meant by the term “interoperability” varies widely. It can refer to relatively narrow interventions in which user data from one service is made directly portable to other services, rather than the user having to download and later re-upload it. At the other end of the spectrum, it could mean regulations requiring that virtually any vertical integration be unwound. (Should a Tesla’s engine be “interoperable” with the chassis of a Land Rover?) And in between are various proposals for specific applications of interoperability: one company’s product working with another company’s.
Why Isn’t Everything Interoperable?
The world is filled with examples of interoperability that arose through the (often voluntary) adoption of standards. Credit card companies oversee massive interoperable payments networks; screwdrivers are interoperable with screws made by other manufacturers, although different standards exist; many U.S. colleges accept credits earned at other accredited institutions. The containerization revolution in shipping is an example of interoperability leading to enormous efficiency gains, with a government subsidy to encourage the adoption of a single standard.
And interoperability can emerge over time. Microsoft Word used to be maddeningly non-interoperable with other word processors. Once OpenOffice entered the market, Microsoft patched its product to support OpenOffice files; Word documents now work slightly better with products like Google Docs, as well.
But there are also lots of things that could be interoperable but aren’t, like the Tesla motors that can’t easily be removed and added to other vehicles. The charging cases for Apple’s AirPods and Sony’s wireless earbuds could, in principle, be shaped to be interoperable. Medical records could, in principle, be standardized and made interoperable among healthcare providers, and it’s easy to imagine some of the benefits that could come from being able to plug your medical history into apps like MyFitnessPal and Apple Health. Keurig pods could, in principle, be interoperable with Nespresso machines. Your front door keys could, in principle, be made interoperable with my front door lock.
The reason not everything is interoperable is that interoperability comes with costs as well as benefits. It may be worth letting different earbuds have different designs because, while it means we sacrifice easy interoperability, we gain the ability for better designs to be brought to market and for consumers to have a choice among different kinds. We may find that, while digital health records are wonderful in theory, the compliance costs of a standardized format outweigh those benefits in practice.
Manufacturers may choose to sell an expensive device with a relatively cheap upfront price tag, relying on consumer “lock in” for a stream of supplies and updates to finance the “full” price over time, provided the consumer likes it enough to keep using it.
Interoperability can remove a layer of security. I don’t want my bank account to be interoperable with any payments app, because it increases the risk of getting scammed. What I like about my front door lock is precisely that it isn’t interoperable with anyone else’s key. Lots of people complain about popular Twitter accounts being obnoxious, rabble-rousing, and stupid; it’s not difficult to imagine the benefits of a new, similar service that wanted everyone to start from the same level and so did not allow users to carry their old Twitter following with them.
There thus may be particular costs that prevent interoperability from being worth the tradeoff, such as that:
It might be too costly to implement and/or maintain.
It might prescribe a certain product design and prevent experimentation and innovation.
It might add too much complexity and/or confusion for users, who may prefer not to have certain choices.
It might increase the risk of something not working, or of security breaches.
It might prevent certain pricing models that increase output.
It might compromise some element of the product or service that benefits specifically from not being interoperable.
In a market that is functioning reasonably well, we should be able to assume that competition and consumer choice will discover the desirable degree of interoperability among different products. If there are benefits to making your product interoperable with others that outweigh the costs of doing so, that should give you an advantage over competitors and allow you to win their customers away. If the costs outweigh the benefits, the opposite will happen: consumers will choose products that are not interoperable with each other.
In short, we cannot infer from the absence of interoperability that something is wrong, since we frequently observe that the costs of interoperability outweigh the benefits.
Of course, markets do not always lead to optimal outcomes. In cases where a market is “failing”—e.g., because competition is obstructed, or because there are important externalities that are not accounted for by the market’s prices—certain goods may be under-provided. In the case of interoperability, this can happen if firms struggle to coordinate upon a single standard, or because firms’ incentives to establish a standard are not aligned with the social optimum (i.e., interoperability might be optimal and fail to emerge, or vice versa).
But the analysis cannot stop there: just because a market might not be functioning well and does not currently provide some form of interoperability, we cannot assume that, if it were functioning well, it would provide interoperability.
Interoperability for Digital Platforms
Since we know that many clearly functional markets and products do not provide all forms of interoperability that we could imagine them providing, it is perfectly possible that many badly functioning markets and products would still not provide interoperability, even if they did not suffer from whatever has obstructed competition or effective coordination in that market. In these cases, imposing interoperability would destroy value.
It would therefore be a mistake to assume that more interoperability in digital markets would be better, even if you believe that those digital markets suffer from too little competition. Let’s say, for the sake of argument, that Facebook/Meta has market power that allows it to keep its subsidiary WhatsApp from being interoperable with other competing services. Even then, we still would not know if WhatsApp users would want that interoperability, given the trade-offs.
A look at smaller competitors like Telegram and Signal, which we have no reason to believe have market power, demonstrates that they also are not interoperable with other messaging services. Signal is run by a nonprofit, and thus has little incentive to obstruct users for the sake of market power. Why does it not provide interoperability? I don’t know, but I would speculate that the security risks and technical costs of doing so outweigh the expected benefit to Signal’s users. If that is true, it seems strange to assume away the potential costs of making WhatsApp interoperable, especially if those costs may relate to things like security or product design.
Interoperability and Contact-Tracing Apps
A full consideration of the trade-offs is also necessary to evaluate the lawsuit that Coronavirus Reporter filed against Apple. Coronavirus Reporter was a COVID-19 contact-tracing app that Apple rejected from the App Store in March 2020. Its makers are now suing Apple for, they say, stifling competition in the contact-tracing market. Apple’s defense is that it only allowed COVID-19 apps from “recognised entities such as government organisations, health-focused NGOs, companies deeply credentialed in health issues, and medical or educational institutions.” In effect, by barring the app from the App Store and offering no other way to install it, Apple denied Coronavirus Reporter interoperability with the iPhone. Coronavirus Reporter argues Apple should be punished for doing so.
No doubt, Apple’s decision did reduce competition among COVID-19 contact tracing apps. But increasing competition among COVID-19 contact-tracing apps via mandatory interoperability might have costs in other parts of the market. It might, for instance, confuse users who would like a very straightforward way to download their country’s official contact-tracing app. Or it might require access to certain data that users might not want to share, preferring to let an intermediary like Apple decide for them. Narrowing choice like this can be valuable, since it means individual users don’t have to research every single possible option every time they buy or use some product. If you don’t believe me, turn off your spam filter for a few days and see how you feel.
In this case, the potential costs of the access that Coronavirus Reporter wants are obvious: while it may have had the best contact-tracing service in the world, sorting it from other, less reliable or less scrupulous apps may have been difficult, and the risk to users may have outweighed the benefits. As Apple and Facebook/Meta constantly point out, the security risks involved in making their services more interoperable are not trivial.
It isn’t competition among COVID-19 apps that is important, per se. As ever, competition is a means to an end, and maximizing it in one context—via, say, mandatory interoperability—cannot be judged without knowing the trade-offs that maximization requires. Even if we thought of Apple as a monopolist over iPhone users—ignoring the fact that Apple’s iPhones obviously are substitutable with Android devices to a significant degree—it wouldn’t follow that the more interoperability, the better.
A ‘Super Tool’ for Digital Market Intervention?
The Coronavirus Reporter example may feel like an “easy” case for opponents of mandatory interoperability. Of course we don’t want anything calling itself a COVID-19 app to have totally open access to people’s iPhones! But what’s vexing about mandatory interoperability is that it’s very hard to sort the sensible applications from the silly ones, and most proposals don’t even try. The leading U.S. House proposal for mandatory interoperability, the ACCESS Act, would require that platforms “maintain a set of transparent, third-party-accessible interfaces (including application programming interfaces) to facilitate and maintain interoperability with a competing business or a potential competing business,” based on APIs designed by the Federal Trade Commission.
The only nod to the costs of this requirement comes in provisions that further require platforms to set “reasonably necessary” security standards, and a provision allowing the removal of third-party apps that don’t “reasonably secure” user data. No other costs of mandatory interoperability are acknowledged at all.
The same goes for the even more substantive proposals for mandatory interoperability. Released in July 2021, “Equitable Interoperability: The ‘Super Tool’ of Digital Platform Governance” is co-authored by some of the most esteemed competition economists in the business. While it details obscure points about matters like how chat groups might work across interoperable chat services, it is virtually silent on any of the costs or trade-offs of its proposals. Indeed, the first “risk” the report identifies is that regulators might be too slow to impose interoperability in certain cases! It reads as though interoperability has been asked to name its biggest weakness in a job interview.
Where the report does acknowledge trade-offs—for example, interoperability making it harder for a service to monetize its user base, who can just bypass ads on the service by using a third-party app that blocks them—it just says that the overseeing “technical committee or regulator may wish to create conduct rules” to decide.
Ditto for the objection that mandatory interoperability might limit differentiation among competitors: imposing the old micro-USB standard on Apple, for example, might have stopped us from getting the Lightning port. Again, they punt: “We recommend that the regulator or the technical committee consult regularly with market participants and allow the regulated interface to evolve in response to market needs.”
But if we could entrust this degree of product design to regulators, weighing the costs of a feature against its benefits, we wouldn’t need markets or competition at all. And the report just assumes away many other obvious costs: “the working hypothesis we use in this paper is that the governance issues are more of a challenge than the technical issues.” Despite its illustrious panel of co-authors, the report fails to grapple with the most basic counterargument possible: its proposals have costs as well as benefits, and it’s not straightforward to decide which is bigger than which.
Strangely, the report includes a section that “looks ahead” to “Google’s Dominance Over the Internet of Things.” This, the report says, stems from the company’s “market power in device OS’s [that] allows Google to set licensing conditions that position Google to maintain its monopoly and extract rents from these industries in future.” The report claims this inevitability can only be avoided by imposing interoperability requirements.
Much of the case for interoperability interventions rests on the presumption that the benefits will be substantial. It’s hard to know how powerful network effects really are in preventing new competitors from entering digital markets, and none of the more substantial reports cited by the “Super Tool” report really try.
In reality, the cost of switching among services or products is never zero. Simply pointing out that particular costs—such as network effect-created switching costs—happen to exist doesn’t tell us much. In practice, many users are happy to multi-home across different services. I use at least eight different messaging apps every day (Signal, WhatsApp, Twitter DMs, Slack, Discord, Instagram DMs, Google Chat, and iMessage/SMS). I don’t find it particularly costly to switch among them, and have been happy to adopt new services that seemed to offer something new. Discord has built a thriving 150-million-user business, despite these switching costs. What if people don’t actually care if their Instagram DMs are interoperable with Slack?
None of this is to argue that interoperability cannot be useful. But it is often overhyped, and it is difficult to do in practice (because of those annoying trade-offs). After nearly five years, Open Banking in the UK—cited by the “Super Tool” report as an example of what it wants for other markets—still isn’t finished in terms of functionality. It has required an enormous amount of time and investment by all parties involved and has yet to deliver obvious benefits in terms of consumer outcomes, let alone greater competition among the current accounts that have been made interoperable with other services. (My analysis of the lessons of Open Banking for other services is here.) Phone-number portability, also cited by the “Super Tool” report, is another example of how hard even simple interventions can be to get right.
The world is filled with cases where we could imagine some benefits from interoperability but choose not to have them, because the costs are greater still. None of this is to say that interoperability mandates can never work, but their benefits can be oversold, especially when their costs are ignored. Many of mandatory interoperability’s more enthusiastic advocates should remember that such trade-offs exist—even for policies they really, really like.
In a recent op-ed, Robert Bork Jr. laments the Biden administration’s drive to jettison the Consumer Welfare Standard that has formed nearly half a century of antitrust jurisprudence. The move can be seen in the near-revolution at the Federal Trade Commission, in the president’s executive order on competition enforcement, and in several of the major antitrust bills currently before Congress.
Bork is correct that the bill would cover more than 80 companies, and the actual number is likely to be far higher. While the Klobuchar bill does not explicitly outlaw such mergers, under certain circumstances it shifts the burden of proof to the merging parties, who must demonstrate that the benefits of the transaction outweigh the potential risks. Under current law, the burden is on the government to demonstrate that the potential costs outweigh the potential benefits.
One of the measure’s specific triggers for this burden-shifting is if the acquiring party has a market capitalization, assets, or annual net revenue of more than $100 billion and seeks a merger or acquisition valued at $50 million or more. About 120 or more U.S. companies satisfy at least one of these conditions. The end of this post provides a list of publicly traded companies, according to Zacks’ stock screener, that would likely be subject to the shift in burden of proof.
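To make the mechanics concrete, the burden-shifting trigger described above reduces to a simple two-part test. The sketch below is my own illustration of that logic, not statutory text; the function name and the example figures are hypothetical:

```python
# Illustrative sketch (not statutory text) of the Klobuchar bill's
# burden-shifting trigger as described above. All amounts are in U.S. dollars.

ACQUIRER_THRESHOLD = 100_000_000_000  # more than $100 billion
DEAL_THRESHOLD = 50_000_000           # deal valued at $50 million or more

def burden_shifts(market_cap: float, assets: float,
                  net_revenue: float, deal_value: float) -> bool:
    """Return True if the burden of proof would shift to the merging parties:
    the acquirer exceeds $100B in market cap, assets, OR annual net revenue,
    AND the transaction is valued at $50M or more."""
    big_acquirer = max(market_cap, assets, net_revenue) > ACQUIRER_THRESHOLD
    return big_acquirer and deal_value >= DEAL_THRESHOLD

# A $60M acquisition by a hypothetical firm with a $150B market cap
# would trigger the shift...
print(burden_shifts(150e9, 20e9, 30e9, 60e6))  # True
# ...while the identical deal by a $40B firm would not.
print(burden_shifts(40e9, 20e9, 30e9, 60e6))   # False
```

Note that any one of the three financial measures suffices to make a firm a covered acquirer, which is why the list of affected companies below mixes market-cap, asset, and sales screens.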
If the goal is to go after Big Tech, the Klobuchar bill hits the mark. All of the FAANG companies—Facebook, Amazon, Apple, Netflix, and Alphabet (Google’s parent company)—satisfy one or more of the criteria. So do Microsoft and PayPal.
But even some smaller tech firms would be subject to the shift in burden of proof. Zoom and Square have market caps that would trigger the shift under Klobuchar’s bill, and Snap is hovering around $100 billion in market cap. Twitter and eBay, however, are well under any of the thresholds. Privately held Advance Publications, owner of Reddit, would likewise fall short of the triggers.
Snapchat has a little more than 300 million monthly active users. Twitter and Reddit each have about 330 million monthly active users. Nevertheless, under the Klobuchar bill, Snapchat is presumed to have more market power than either Twitter or Reddit, simply because the market assigns a higher valuation to Snap.
But this bill is about more than Big Tech. Tesla, which sold its first car only 13 years ago, is now considered big enough that it will face the same antitrust scrutiny as the Big 3 automakers. Walmart, Costco, and Kroger would be subject to the shifted burden of proof, while Safeway and Publix would escape such scrutiny. An acquisition by U.S.-based Nike would be put under the microscope, but a similar acquisition by Germany’s Adidas would not fall under the Klobuchar bill’s thresholds.
Tesla accounts for less than 2% of the vehicles sold in the United States. I have no idea what Walmart, Costco, Kroger, or Nike’s market share is, or even what comprises “the” market these companies compete in. What we do know is that the U.S. Department of Justice and Federal Trade Commission excel at narrowly crafting market definitions so that just about any company can be defined as dominant.
So much of the recent interest in antitrust has focused on Big Tech. But even the biggest of Big Tech firms operate in dynamic and competitive markets. None of my four children use Facebook or Twitter. My wife and I don’t use Snapchat. We all use Netflix, but we also use Hulu, Disney+, HBO Max, YouTube, and Amazon Prime Video. None of these services have a monopoly on our eyeballs, our attention, or our pocketbooks.
The antitrust bills currently working their way through Congress abandon the long-standing balancing of pro- versus anti-competitive effects of mergers in favor of a “big is bad” approach. While the Klobuchar bill appears to provide clear guidance on the thresholds triggering a shift in the burden of proof, the arbitrary nature of those thresholds will result in arbitrary application of the burden of proof. If the bill passes, we will soon face a case in which two firms that differ only in market cap, assets, or sales are subject to very different antitrust scrutiny, resulting in regulatory chaos.
Publicly traded companies with more than $100 billion in market capitalization
Deere & Co.
Eli Lilly and Co.
Philip Morris International
Procter & Gamble
Advanced Micro Devices
General Electric Co.
Johnson & Johnson
Bank of America
The Coca-Cola Co.
The Estée Lauder Cos.
The Home Depot
The Walt Disney Co.
Bristol Myers Squibb
Thermo Fisher Scientific
Merck & Co.
Union Pacific Corp.
Charles Schwab Corp.
United Parcel Service
Zoom Video Communications
Publicly traded companies with more than $100 billion in current assets
American International Group
Citizens Financial Group
PNC Financial Services
Regions Financial Corp.
Fifth Third Bank
State Street Corp.
First Republic Bank
Ford Motor Co.
Publicly traded companies with more than $100 billion in sales
[This post is authored by Nicolas Petit, the Joint Chair in Competition Law at the Department of Law at the European University Institute in Fiesole, Italy, and at the EUI’s Robert Schuman Centre for Advanced Studies. He is also an invited professor at the College of Europe in Bruges.]
A lot of water has gone under the bridge since my book was published last year. To close this symposium, I thought I would discuss the new phase of antitrust statutorification taking place before our eyes. In the United States, Congress is working on five antitrust bills that propose to subject platforms to stringent obligations, including a ban on mergers and acquisitions, required data portability and interoperability, and line-of-business restrictions. In the European Union (EU), lawmakers are examining the proposed Digital Markets Act (“DMA”) that sets out a complicated regulatory system for digital “gatekeepers,” with per se behavioral limitations of their freedom over contractual terms, technological design, monetization, and ecosystem leadership.
Proponents of legislative reform on both sides of the Atlantic appear to share the common view that ongoing antitrust adjudication efforts are both instrumental and irrelevant. They are instrumental because government (or plaintiff) losses build the evidence needed to support the view that antitrust doctrine is exceedingly conservative, and that legal reform is needed. Two weeks ago, antitrust reform activists ran to Twitter to point out that the U.S. District Court dismissal of the Federal Trade Commission’s (FTC) complaint against Facebook was one more piece of evidence supporting the view that the antitrust pendulum needed to swing. They are instrumental because, again, government (or plaintiff) wins will support scaling antitrust enforcement in the marginal case through the adoption of governmental regulation. In the EU, antitrust cases follow one another almost as the night the day, lending credence to the view that regulation will bring much-needed coordination and economies of scale.
But both instrumentalities are, at the end of the line, irrelevant, because they lead to the same conclusion: legislative reform is long overdue. With this in mind, the logic of lawmakers is that they need not await the courts, and they can advance with haste and confidence toward the promulgation of new antitrust statutes.
The antitrust reform process that is unfolding raises questions. The issue is not legal reform in itself. There is no suggestion here that statutory reform is necessarily inferior, and no correlative reification of the judge-made-law method. Legislative intervention can occur for good reason, as when it breaks judicial inertia caused by ideological logjam.
The issue is rather one of precipitation. There is a lot of learning in the cases. The point, simply put, is that a supplementary court-legislative dialogue would yield additional information—or what Guido Calabresi has called “starting points” for regulation—that premature legislative intervention is sweeping under the rug. This issue is important because specification errors (see Doug Melamed’s symposium piece on this) in statutory legislation are not uncommon. Feedback from court cases creates a factual record that will often be missing when lawmakers act too precipitously.
Moreover, a court-legislative iteration is useful when the issues in discussion are cross-cutting. The digital economy brings an abundance of them. As tech analyst Ben Evans has observed, data-sharing obligations raise tradeoffs between contestability and privacy. Chapter VI of my book shows that breakups of social networks or search engines might promote rivalry and, at the same time, increase the leverage of advertisers to extract more user data and conduct more targeted advertising. In such cases, Calabresi said, judges who know the legal topography are well-placed to elicit the preferences of society. He added that they are better placed than government agencies’ officials or delegated experts, who often attend to the immediate problem without the big picture in mind (all the more so when officials are denied opportunities to engage with civil society and the press, as per the policy announced by the new FTC leadership).
Of course, there are three objections to this. The first consists of arguing that statutes are needed now because courts are too slow to deal with problems. The argument is not dissimilar to Frank Easterbrook’s concerns about irreversible harms to the economy, though with a tweak. Where Easterbrook’s concern was one of ossification of Type I errors due to stare decisis, the concern here is one of entrenchment of durable monopoly power in the digital sector due to Type II errors. The concern, however, fails the test of evidence. The available data in both the United States and Europe shows unprecedented vitality in the digital sector. Venture capital funding cruises at historical heights, fueling new firm entry, business creation, and economic dynamism in the U.S. and EU digital sectors, topping all other industries. Unless we require higher levels of entry from digital markets than from other industries—or discount the social value of entry in the digital sector—this should give us reason to push pause on lawmaking efforts.
The second objection is that an incremental process of updating the law through the courts creates intolerable uncertainty. But this objection, too, is unconvincing at best. One may ask which brings more uncertainty: an abrupt legislative change of the law after decades of legal stability, or an experimental process of judicial renovation.
Besides, ad hoc statutes, such as the ones under discussion, are likely to confront the problem of their own legal obsolescence quickly and dramatically. Detailed and technical statutes specify rights, requirements, and procedures that often do not stand the test of time. For example, the DMA likely captures Windows as a core platform service subject to gatekeeping. But is Microsoft’s market power over Windows still relevant today, and isn’t it constrained in effect by existing antitrust rules? In antitrust, vagueness in critical statutory terms allows room for change. The best way to give meaning to buzzwords like “smart” or “future-proof” regulation consists of building in first principles, not in creating discretionary opportunities for permanent adaptation of the law. In reality, it is hard to see how the methods of future-proof regulation currently discussed in the EU create less uncertainty than a court process.
The third objection is that we do not need more information, because we now benefit from economic knowledge showing that existing antitrust laws are too permissive of anticompetitive business conduct. But is the economic literature actually supportive of stricter rules against defendants than the rule-of-reason framework that applies in many unilateral conduct cases and in merger law? The answer is surely no. The theoretical economic literature has traveled a long way in the past 50 years. Of particular interest are works on network externalities, switching costs, and multi-sided markets. But the progress achieved in the economic understanding of markets is more descriptive than normative.
Take the celebrated multi-sided market theory. The main contribution of the theory is its advice to decision-makers to take the periscope out, so as to consider all possible welfare tradeoffs, not to be more or less defendant-friendly. Payment cards provide a good example. Economic research suggests that any antitrust or regulatory intervention on prices affects tradeoffs between, and payoffs to, cardholders and merchants, cardholders and cash users, cardholders and banks, and banks and card systems. Equally numerous tradeoffs arise in many sectors of the digital economy, like ridesharing, targeted advertisement, or social networks. Multi-sided market theory renders these tradeoffs visible. But it does not come with a clear recipe for how to solve them. For that, one needs to follow first principles. A system of measurement that is flexible and welfare-based helps, as Kelly Fayne observed in her critical symposium piece on the book.
Another example might be worth considering. The theory of increasing returns suggests that markets subject to network effects tend to converge around the selection of a single technology standard, and it is not a given that the selected technology is the best one. One policy implication is that social planners might be justified in keeping a second option on the table. As I discuss in Chapter V of my book, the theory may support an M&A ban against platforms in tipped markets, on the conjecture that the assets of fringe firms might be efficiently repositioned to offer product differentiation to consumers. But the theory of increasing returns does not say under what conditions we can know that the selected technology is suboptimal. Moreover, if the selected technology is the optimal one, or if the suboptimal technology quickly obsolesces, are policy efforts at all needed?
Last, as Bo Heiden’s thought-provoking symposium piece argues, it is not a given that antitrust enforcement of rivalry in markets is the best way to keep an alternative technology alive, let alone to supply the innovation needed to deliver economic prosperity. Government procurement, science and technology policy, and intellectual-property policy might be equally effective (note that the fathers of the theory, like Brian Arthur or Paul David, have been very silent on antitrust reform).
There are, of course, exceptions to the limited normative content of modern economic theory. In some areas, economic theory is more predictive of consumer harms, like in relation to algorithmic collusion, interlocking directorates, or “killer” acquisitions. But the applications are discrete and industry-specific. All are insufficient to declare that the antitrust apparatus is dated and that it requires a full overhaul. When modern economic research turns normative, it is often way more subtle in its implications than some wild policy claims derived from it. For example, the emerging studies that claim to identify broad patterns of rising market power in the economy in no way lead to an implication that there are no pro-competitive mergers.
Similarly, the empirical picture of digital markets is incomplete. The past few years have seen a proliferation of qualitative research reports on industry structure in the digital sectors. Most suggest that industry concentration has risen, particularly in the digital sector. As with any research exercise, these reports’ findings deserve to be subject to critical examination before they can be deemed supportive of a claim of “sufficient experience.” Moreover, there is no reason to subject these reports to a lower standard of accountability on grounds that they have often been drafted by experts upon demand from antitrust agencies. After all, we academics are ethically obliged to be at least equally exacting with policy-based research as we are with science-based research.
Now, with healthy skepticism at the back of one’s mind, one can see immediately that the findings of expert reports to date have tended to downplay behavioral observations that counterbalance findings of monopoly power—such as intense business anxiety, technological innovation, and demand-expansion investments in digital markets. This was, I believe, the main takeaway from Chapter IV of my book. And less than six months ago, The Economist ran its lead story on the new marketplace reality of “Tech’s Big Dust-Up.”
Similarly, the expert reports did not seriously entertain the possibility of competition for the purchase of regulation. As in the classic George Stigler paper, in which the railroad industry fought motor-trucking competition with state regulation, the businesses that stand to lose most from the digital transformation may be rationally jockeying to convince lawmakers that not all business models are equal, and to steer regulation toward specific business models. Again, though this issue is difficult to assess, there are signs that a coalition of large news corporations and the publishing oligopoly stands behind many antitrust initiatives against digital firms.
As should be clear from these few lines, my cautionary note against antitrust statutorification may be more relevant to the U.S. market. In the EU, sunk investments have been made, expectations have been created, and regulation has become inevitable. The United States, however, still has a chance to get this right, and court cases are the way to go. Contrary to what the popular coverage suggests, the recent district court dismissal of the FTC’s case far from ruled out the applicability of U.S. antitrust law to Facebook’s alleged killer acquisitions. On the contrary, the ruling contains an invitation to rework a rushed complaint. Perhaps, as Shane Greenstein observed in his retrospective analysis of the U.S. Microsoft case, we would all benefit from studying more carefully the learning that lies in the cases, rather than hastening to produce instant antitrust analysis that fits within Twitter’s 280 characters.
Advocates of legislative action to “reform” antitrust law have already pointed to the U.S. District Court for the District of Columbia’s dismissal of the state attorneys general’s case and the “conditional” dismissal of the Federal Trade Commission’s case against Facebook as evidence that federal antitrust case law is lax and demands correction. In fact, the court’s decisions support the opposite implication.
The Risks of Antitrust by Anecdote
The failure of a well-resourced federal regulator, joined by more than 45 state attorney-general offices, to avoid dismissal at an early stage of the litigation testifies to the dangers of a conclusory approach to antitrust enforcement: one that seeks to unravel acquisitions consummated almost a decade ago without even demonstrating the factual predicates needed to support such far-reaching interventions. The dangers to the rule of law are self-evident. Whatever one’s views on the appropriate direction of antitrust law, this shortcut approach would substitute prosecutorial fiat, ideological predilection, and popular sentiment for decades of case law and agency guidelines grounded in the rigorous consideration of evidence of competitive harm.
The paucity of empirical support for the exceptional remedial action sought by the FTC is notable. As the district court observed, there was little systematic effort made to define the economically relevant market or provide objective evidence of market power, beyond the assertion that Facebook has a market share of “in excess of 60%.” Remarkably, the denominator behind that 60%-plus assertion is not precisely defined, since the FTC’s brief does not supply any clear metric by which to measure market share. As the court pointed out, this is a nontrivial task in multi-sided environments in which one side of the potentially relevant market delivers services to users at no charge.
While the point may seem uncontroversial, it is worth revisiting why insisting on a rigorous demonstration of market power is critical to preserving a coherent body of law that allows the market to reasonably anticipate the likelihood of antitrust intervention. At least since the late 1970s, courts have recognized that “big is not always bad” and that size can often yield cost savings that ultimately redound to consumers’ benefit. That is, firm size and consumer welfare do not stand in inherent opposition. If courts were to abandon safeguards against suits that fail to adequately define the relevant market and plausibly show market power, antitrust litigation could easily become a tool to punish successful firms that prevail over competitors simply by being more efficient. In other words, antitrust law could become a tool to preserve competitor welfare at the expense of consumer welfare.
The Specter of No-Fault Antitrust Liability
The absence of any specific demonstration of market power suggests either deficient lawyering or an inability to gather supporting evidence. Giving the FTC litigation team the benefit of the doubt, the latter seems the stronger possibility. If so, the complaint implies an effort to persuade courts to adopt a de facto rule of per se illegality for any firm that achieves a certain market share. (The same concept lies behind legislative proposals to bar acquisitions by firms that cross a certain revenue or market-capitalization threshold.) Effectively, any firm that reached a certain size would operate under the presumption that it holds market power and has secured or maintained that power through anticompetitive practices, rather than business prowess. This would convert leading digital platforms into quasi-public utilities subject to continuous regulatory intervention, an approach that runs counter to antitrust law’s mission to preserve, rather than displace, private ordering by market forces.
Even at the high-water mark of post-World War II antitrust zealotry (a period that ultimately ended in economic malaise), proposals to adopt a rule of no-fault liability for alleged monopolization were rejected, and for good reason. Any such rule would likely injure consumers by denying them the cost savings that arise in the “sweet spot” scenario, in which the scale and scope economies of large firms combine with sufficiently competitive conditions to yield lower prices and greater convenience. It would also eliminate incumbents’ incentives to work harder to offer consumers those lower prices and conveniences, since any market share preserved or acquired as a result would simply invite antitrust scrutiny as a reward.
Remembering Why Market Power Matters
To be clear, this is not to say that “Big Tech” does not deserve close antitrust scrutiny, does not wield market power in certain segments, or has not potentially engaged in anticompetitive practices. The fundamental point is that assertions of market power and anticompetitive conduct must be demonstrated, rather than being assumed or “proved” based largely on suggestive anecdotes.
Perhaps market power will be sufficiently demonstrated in Facebook’s case if the FTC elects to accept the court’s invitation to refile with a plausible definition of the relevant market and an indication of market power at this stage of the litigation. If that threshold is satisfied, thorough consideration of the allegedly anticompetitive effects of Facebook’s WhatsApp and Instagram acquisitions may be merited. However, given the policy interest in preserving the market’s confidence in the merger-review process under the Hart-Scott-Rodino Act, the government’s burden of proof should be appropriately enhanced to reflect the significant time that has elapsed since the regulatory decisions not to intervene in those transactions.
It would once have seemed mundane to reiterate that market power must be reasonably demonstrated to support a monopolization claim that could lead to a major divestiture remedy. Given the populist thinking that now drives much of the legislative and regulatory discussion of antitrust policy, it is imperative to restate the rationale behind this elementary principle.
This principle reflects the fact that, outside collusion scenarios, antitrust law is typically engaged in a complex exercise to balance the advantages of scale against the risks of anticompetitive conduct. At its best, antitrust law weighs competing facts in a good faith effort to assess the net competitive harm posed by a particular practice. While this exercise can be challenging in digital markets that naturally converge upon a handful of leading platforms or multi-dimensional markets that can have offsetting pro- and anti-competitive effects, these are not reasons to treat such an exercise as an anachronistic nuisance. Antitrust cases are inherently challenging and proposed reforms to make them easier to win are likely to endanger, rather than preserve, competitive markets.
In his recent concurrence in Biden v. Knight, Justice Clarence Thomas sketched a roadmap for how to regulate social-media platforms. The animating factor for Thomas, much like for other conservatives, appears to be a sense that Big Tech has exhibited anti-conservative bias in its moderation decisions, most prominently by excluding former President Donald Trump from Twitter and Facebook. The opinion has predictably been greeted warmly by conservative champions of social-media regulation, who believe it shows how states and the federal government can proceed on this front.
Conservatives’ main argument has been that Big Tech needs to be reined in because it is restricting the speech of private individuals. While conservatives traditionally have defended the state-action doctrine and the right to editorial discretion, they now readily find exceptions to both in order to justify regulating social-media companies. But those two First Amendment doctrines have long enshrined an important general principle: private actors can set the rules for speech on their own property. I intend to analyze this principle from a law & economics perspective and show how it benefits society.
Who Balances the Benefits and Costs of Speech?
Like virtually any other human activity, speech carries both benefits and costs, and the value of any given speech is ultimately a matter of subjective individual preference. The First Amendment protects speech from governmental regulation, with only limited exceptions, but that does not mean all speech is acceptable or must be tolerated. Under the state-action doctrine, the First Amendment prevents only the government from restricting speech.
Some purported defenders of the principle of free speech no longer appear to see a distinction between restraints on speech imposed by the government and those imposed by private actors. But this is surely mistaken, as no one truly believes all speech protected by the First Amendment should be without consequence. In truth, most regulation of speech has always come by informal means—social mores enforced by dirty looks or responsive speech from others.
Moreover, property rights have long played a crucial role in determining speech rules within any given space. If a man were to come into my house and start calling my wife racial epithets, I would not only ask that person to leave but would exercise my right as a property owner to eject the trespasser—if necessary, calling the police to assist me. I similarly could not expect to go to a restaurant and yell at the top of my lungs about political issues and expect them—even as “common carriers” or places of public accommodation—to allow me to continue.
As Thomas Sowell observed:
The fact that different costs and benefits must be balanced does not in itself imply who must balance them―or even that there must be a single balance for all, or a unitary viewpoint (one “we”) from which the issue is categorically resolved.
Knowledge and Decisions, p. 240
When it comes to speech, the balance that must be struck is between one individual’s desire for an audience and that prospective audience’s willingness to play the role. Asking government to use regulation to make categorical decisions for all of society is substituting centralized evaluation of the costs and benefits of access to communications for the individual decisions of many actors. Rather than incremental decisions regarding how and under what terms individuals may relate to one another—which can evolve over time in response to changes in what individuals find acceptable—government by its nature can only hand down categorical guidelines: “you must allow x, y, and z speech.”
This is particularly relevant in the sphere of social media. Social-media companies are multi-sided platforms. They are profit-seeking, to be sure, but the way they generate profits is by acting as intermediaries between users and advertisers. If they fail to serve their users well, those users could abandon the platform. Without users, advertisers would have no interest in buying ads. And without advertisers, there is no profit to be made. Social-media companies thus need to maximize the value of their platform by setting rules that keep users engaged.
In the cases of Facebook, Twitter, and YouTube, the platforms have set content-moderation standards that restrict many kinds of speech that are generally viewed negatively by users, even if the First Amendment would foreclose the government from regulating those same types of content. This is a good thing. Social-media companies balance the speech interests of different kinds of users to maximize the value of the platform and, in turn, to maximize benefits to all.
Herein lies the fundamental difference between private action and state action: one is voluntary, and the other based on coercion. If Facebook or Twitter suspends a user for violating community rules, it represents termination of a previously voluntary association. If the government kicks someone out of a public forum for expressing legal speech, that is coercion. The state-action doctrine recognizes this fundamental difference and creates a bright-line rule that courts may police when it comes to speech claims. As Sowell put it:
The courts’ role as watchdogs patrolling the boundaries of governmental power is essential in order that others may be secure and free on the other side of those boundaries. But what makes watchdogs valuable is precisely their ability to distinguish those people who are to be kept at bay and those who are to be left alone. A watchdog who could not make that distinction would not be a watchdog at all, but simply a general menace.
Knowledge and Decisions, p. 244
Markets Produce the Best Moderation Policies
The First Amendment also protects the right of editorial discretion: publishers, platforms, and other speakers cannot be compelled by the government to carry or transmit speech. Even a newspaper with near-monopoly power cannot be forced by a right-of-reply statute to carry responses from political candidates to editorials it has published. In other words, not only is private regulation of speech not state action; in many cases, private regulation is itself protected by the First Amendment.
There is no reason to think that social-media companies today are in a different position than was the newspaper in Miami Herald v. Tornillo. These companies must determine what, how, and where content is presented on their platforms. And while the right of editorial discretion protects the moderation decisions of social-media companies, its benefits accrue to society at large.
Social-media companies’ abilities to differentiate themselves based on functionality and moderation policies are important aspects of competition among them. How each platform is used may differ depending on those factors. In fact, many consumers use multiple social-media platforms throughout the day for different purposes. Market competition, not government power, has enabled internet users (including conservatives!) to have more avenues than ever to get their message out.
Many conservatives remain unpersuaded by the power of markets in this case. They see multiple platforms adopting very similar content-moderation policies on certain hot-button issues, and thus allege widespread anti-conservative bias and collusion. Neither of those claims has much factual support. More importantly, the similarity of content-moderation standards may simply reflect common responses to similar demand structures, not some nefarious and conspiratorial plot.
In other words, if social-media users demand less of the kinds of content commonly considered to be hate speech, or less misinformation on certain important issues, platforms will do their best to weed those things out. Platforms won’t always get these determinations right, but it is by no means clear that forcing them to carry all “legal” speech—which would include not just misinformation and hate speech, but pornographic material, as well—would better serve social-media users. There are always alternative means to debate contestable issues of the day, even if it may be more costly to access them.
Indeed, that content-moderation policies make it more difficult to communicate some messages is precisely the point of having them. There is a subset of protected speech to which many users do not wish to be subject. Moreover, there is no inherent right to have an audience on a social-media platform.
Much of the First Amendment’s economic value lies in how it defines roles in the market for speech. As a general matter, it is not the government’s place to determine what speech should be allowed in private spaces. Instead, the private ordering of speech emerges through the application of social mores and property rights. This benefits society, as it allows individuals to form voluntary relationships built on marginal decisions about what speech is acceptable when and where, rather than on centralized decisions made by a governing few that are difficult to change over time.