The wave of populist antitrust that has been embraced by regulators and legislators in the United States, United Kingdom, European Union, and other jurisdictions rests on the assumption that currently dominant platforms occupy entrenched positions that only government intervention can dislodge. Following this view, Facebook will forever dominate social networking, Amazon will forever dominate cloud computing, Uber and Lyft will forever dominate ridesharing, and Amazon and Netflix will forever dominate streaming. This assumption of platform invincibility is so well-established that some policymakers advocate significant interventions without making any meaningful inquiry into whether a seemingly dominant platform actually exercises market power.

Yet this assumption is not supported by historical patterns in platform markets. It is true that network effects drive platform markets toward “winner-take-most” outcomes. But the winner is often toppled quickly and without much warning. There is no shortage of examples.

In 2007, a columnist in The Guardian observed that “it may already be too late for competitors to dislodge MySpace” and quoted an economist as authority for the proposition that “MySpace is well on the way to becoming … a natural monopoly.” About one year later, Facebook had overtaken the MySpace “monopoly” in the social-networking market. Similarly, it was once thought that BlackBerry would forever dominate the mobile-communications device market, eBay would always dominate the e-commerce market, and AOL would always dominate the internet-service-portal market (a market that no longer even exists). The list of digital dinosaurs could go on.

All those tech leaders were challenged by entrants and descended into irrelevance (or reduced relevance, in eBay’s case). This occurred through the force of competition, not government intervention.

Why This Time Is Probably Not Different

Given this long line of market precedents, current legislative and regulatory efforts to “restore” competition through extensive intervention in digital-platform markets require that we assume that “this time is different.” Just as that slogan has been repeatedly rebutted in the financial markets, so too is it likely to be rebutted in platform markets. 

There is already supporting evidence. 

In the cloud market, Amazon’s AWS now faces vigorous competition from Microsoft Azure and Google Cloud. In the streaming market, Amazon and Netflix face stiff competition from Disney+ and Apple TV+, to name just a few well-resourced rivals. In the social-networking market, Facebook now competes head-to-head with TikTok and seems to be losing. The market power once commonly attributed to leading food-delivery platforms such as Grubhub, UberEats, and DoorDash is implausible in light of their persistent losses in most cases and the continuous entry of new services into a rich variety of local and product-market niches.

Those who have advocated antitrust intervention on a fast-track schedule may remain unconvinced by these inconvenient facts. But the market is not. 

Investors have already recognized Netflix’s vulnerability to competition, as reflected by a 35% fall in its stock price on April 20 and a decline of more than 60% over the past 12 months. Meta, Facebook’s parent, also experienced a reappraisal, falling more than 26% on Feb. 3 and more than 35% in the past 12 months. Uber, the pioneer of the ridesharing market, has declined by almost 50% over the past 12 months, while Lyft, its principal rival, has lost more than 60% of its value. These price freefalls suggest that antitrust populists may be pursuing solutions to a problem that market forces are already starting to address.

The Forgotten Curse of the Incumbent

For some commentators, the sharp downturn in the fortunes of the so-called “Big Tech” firms comes as no surprise.

It has long been observed by some scholars and courts that a dominant firm “carries the seeds of its own destruction”—a phrase used by then-professor and later-Judge Richard Posner, writing in the University of Chicago Law Review in 1971. The reason: a dominant firm is liable to exhibit high prices, mediocre quality, or lackluster innovation, which then invites entry by more adept challengers. However, this view has been dismissed as outdated in digital-platform markets, where incumbents are purportedly protected by network effects and switching costs that make it difficult for entrants to attract users. Depending on the assumptions an economic modeler selects, either outcome is plausible in theory.

The plunging values of leading platforms supply real-world evidence that favors the self-correction hypothesis. It is often overlooked that network effects can work in both directions, resulting in a precipitous fall from market leader to laggard. Once users start abandoning a dominant platform for a new competitor, network effects operating in reverse can cause a “run for the exits” that leaves the leader with little time to recover. Just ask Nokia, the world’s leading (and seemingly unbeatable) smartphone brand until the Apple iPhone came along.

Why Market Self-Correction Outperforms Regulatory Correction

Market self-correction inherently outperforms regulatory correction: it operates far more rapidly and relies on consumer preferences to reallocate market leadership—a result perfectly consistent with antitrust’s mission to preserve “competition on the merits.” In contrast, policymakers can misdiagnose the competitive effects of business practices; are susceptible to the influence of private interests (especially those that are unable to compete on the merits); and often mispredict the market’s future trajectory. For Exhibit A, see the U.S. Justice Department’s protracted antitrust litigation against IBM, which began with a complaint filed in 1969 and ended with withdrawal of the suit in 1982. Given the launch of the Apple II in 1977, the IBM PC in 1981, and the entry of multiple “PC clones,” the forces of creative destruction swiftly displaced IBM from market leadership in the computing industry.

Regulators and legislators around the world have emphasized the urgency of taking dramatic action to correct claimed market failures in digital environments, casting aside prudential concerns over the consequences if any such failure proves to be illusory or temporary. 

But the costs of regulatory failure can be significant and long-lasting. Markets must operate under unnecessary compliance burdens that are difficult to modify. Regulators’ enforcement resources are diverted, and businesses are barred from adopting practices that would benefit consumers. In particular, proposed breakup remedies advocated by some policymakers would undermine the scale economies that have enabled platforms to push down prices, an important consideration in a time of accelerating inflation.

Conclusion

The high concentration levels and certain business practices in digital-platform markets certainly raise important concerns as a matter of antitrust (as well as privacy, intellectual property, and other bodies of) law. These concerns merit scrutiny and may necessitate appropriately targeted interventions. Yet, any policy steps should be anchored in the factually grounded analysis that has characterized decades of regulatory and judicial action to implement the antitrust laws with appropriate care. Abandoning this nuanced framework for a blunt approach based on reflexive assumptions of market power is likely to undermine, rather than promote, the public interest in competitive markets.

Sens. Amy Klobuchar (D-Minn.) and Chuck Grassley (R-Iowa)—cosponsors of the American Innovation and Choice Online Act (AICOA), which seeks to “rein in” tech companies like Apple, Google, Meta, and Amazon—contend that “everyone acknowledges the problems posed by dominant online platforms.”

In their framing, it is simply an acknowledged fact that U.S. antitrust law has not kept pace with developments in the digital sector, allowing a handful of Big Tech firms to exploit consumers and foreclose competitors from the market. To address the issue, the senators’ bill would bar “covered platforms” from engaging in a raft of conduct, including self-preferencing, tying, and limiting interoperability with competitors’ products.

That’s what makes the open letter to Congress published late last month by the usually staid American Bar Association’s (ABA) Antitrust Law Section so eye-opening. The letter is nothing short of a searing critique of the legislation, which the section finds to be poorly written, vague, and departing from established antitrust-law principles.

The ABA, of course, has a reputation as an independent, highly professional, and heterogeneous group. The antitrust section’s membership includes not only in-house corporate counsel, but lawyers from nonprofits, consulting firms, federal and state agencies, judges, and legal academics. Given this context, the comments must be read as a high-level judgment that recent legislative and regulatory efforts to “discipline” tech fall outside the legal mainstream and would come at the cost of established antitrust principles, legal precedent, transparency, sound economic analysis, and ultimately consumer welfare.

The Antitrust Section’s Comments

As the ABA Antitrust Law Section observes:

The Section has long supported the evolution of antitrust law to keep pace with evolving circumstances, economic theory, and empirical evidence. Here, however, the Section is concerned that the Bill, as written, departs in some respects from accepted principles of competition law and in so doing risks causing unpredicted and unintended consequences.

Broadly speaking, the section’s criticisms fall into two interrelated categories. The first relates to deviations from antitrust orthodoxy and the principles that guide enforcement. The second is a critique of the AICOA’s overly broad language and ambiguous terminology.

Departing from established antitrust-law principles

Substantively, the overarching concern expressed by the ABA Antitrust Law Section is that AICOA departs from the traditional role of antitrust law, which is to protect the competitive process rather than to favor some competitors at the expense of others. Indeed, the section’s open letter observes that, out of the 10 categories of prohibited conduct spelled out in the legislation, only three require a “material harm to competition.”

Take, for instance, the prohibition on “discriminatory” conduct. As it stands, the bill’s language does not require a showing of harm to the competitive process. It instead appears to enshrine a freestanding prohibition of discrimination. The bill also targets tying practices that are already prohibited by U.S. antitrust law, while similarly eschewing the traditionally required showings of market power and harm to the competitive process. The same can be said, mutatis mutandis, for “self-preferencing” and the “unfair” treatment of competitors.

The problem, the section’s letter to Congress argues, is not only that this increases the teleological chasm between AICOA and the overarching goals and principles of antitrust law, but that it can also easily lead to harmful unintended consequences. For instance, as the ABA Antitrust Law Section previously observed in comments to the Australian Competition and Consumer Commission, a prohibition of pricing discrimination can limit the extent of discounting generally. Similarly, self-preferencing conduct on a platform can be welfare-enhancing, while forced interoperability—which is also contemplated by AICOA—can increase prices for consumers and dampen incentives to innovate. Furthermore, some of these blanket prohibitions are arguably at loggerheads with established antitrust doctrine, such as Trinko, which established that even monopolists are generally free to decide with whom they will deal.

Arguably, the reason why the Klobuchar-Grassley bill can so seamlessly exclude or redraw such a central element of antitrust law as competitive harm is because it deliberately chooses to ignore another, preceding one. Namely, the bill omits market power as a requirement for a finding of infringement or for the legislation’s equally crucial designation as a “covered platform.” It instead prescribes size metrics—number of users, market capitalization—to define which platforms are subject to intervention. Such definitions cast an overly wide net that can potentially capture consumer-facing conduct that doesn’t have the potential to harm competition at all.

It is precisely for this reason that existing antitrust laws are tethered to market power—i.e., because it long has been recognized that only companies with market power can harm competition. As John B. Kirkwood of Seattle University School of Law has written:

Market power’s pivotal role is clear… This concept is central to antitrust because it distinguishes firms that can harm competition and consumers from those that cannot.

In response to the above, the ABA Antitrust Law Section (reasonably) urges Congress explicitly to require an effects-based showing of harm to the competitive process as a prerequisite for all 10 of the infringements contemplated in the AICOA. This also means disclaiming generalized prohibitions of “discrimination” and of “unfairness” and replacing blanket prohibitions (such as the one for self-preferencing) with measured case-by-case analysis.

Opaque language for opaque ideas

Another underlying issue is that the Klobuchar-Grassley bill is shot through with indeterminate language and fuzzy concepts that have no clear limiting principles. For instance, in order either to establish liability or to mount a successful defense to an alleged violation, the bill relies heavily on inherently amorphous terms such as “fairness,” “preferencing,” and “materiality,” or the “intrinsic” value of a product. But as the ABA Antitrust Law Section letter rightly observes, these concepts are not defined in the bill, nor by existing antitrust case law. As such, they inject variability and indeterminacy into how the legislation would be administered.

Moreover, it is also unclear how some incommensurable concepts will be weighed against each other. For example, how would concerns about safety and security be weighed against prohibitions on self-preferencing or requirements for interoperability? What is a “core function” and when would the law determine it has been sufficiently “enhanced” or “maintained”—requirements the law sets out to exempt certain otherwise prohibited behavior? The lack of linguistic and conceptual clarity not only undermines legal certainty, but also invites judicial second-guessing of business decisions, something against which the U.S. Supreme Court has long warned.

Finally, the bill’s choice of language and recent amendments to its terminology seem to confirm the dynamic discussed in the previous section. Most notably, the latest version of AICOA replaces earlier language invoking “harm to the competitive process” with “material harm to competition.” As the ABA Antitrust Law Section observes, this “suggests a shift away from protecting the competitive process towards protecting individual competitors.” Indeed, “material harm to competition” deviates from established categories such as “undue restraint of trade” or “substantial lessening of competition,” which have a clear focus on the competitive process. As a result, it is not unreasonable to expect that the new terminology might be interpreted as meaning that the actionable standard is material harm to competitors.

In its letter, the antitrust section urges Congress not only to define more clearly the novel terminology used in the bill, but also to do so in a manner consistent with existing antitrust law. Indeed:

The Section further recommends that these definitions direct attention to analysis consistent with antitrust principles: effects-based inquiries concerned with harm to the competitive process, not merely harm to particular competitors.

Conclusion

The AICOA is a poorly written, misguided, and rushed piece of regulation that contravenes both basic antitrust-law principles and mainstream economic insights in the pursuit of a pre-established populist political goal: punishing the success of tech companies. If left uncorrected by Congress, these mistakes could have potentially far-reaching consequences for innovation in digital markets and for consumer welfare. They could also set antitrust law on a regressive course back toward a policy of picking winners and losers.

The following post was authored by counsel with White & Case LLP, who represented the International Center for Law & Economics (ICLE) in an amicus brief filed on behalf of itself and 12 distinguished law & economics scholars with the U.S. Court of Appeals for the D.C. Circuit in support of affirming U.S. District Court Judge James Boasberg’s dismissal of various States Attorneys General’s antitrust case brought against Facebook (now, Meta Platforms).

Introduction

The States brought an antitrust complaint against Facebook alleging that various conduct violated Section 2 of the Sherman Act. The ICLE brief addresses the States’ allegations that Facebook refused to provide access to an input: a set of application-programming interfaces (APIs) that developers use to access Facebook’s network of social-media users (Facebook’s Platform). According to the States, Facebook denied this access in order to prevent third parties from using it to export Facebook data to competitors or to compete directly with Facebook.

Judge Boasberg dismissed the States’ case without leave to amend, relying on recent Supreme Court precedent on refusals to deal, including Trinko and Linkline. The Supreme Court strongly disfavors forced sharing, as shown by its decisions recognizing very few exceptions to the ability of firms to deal with whom they choose. Most notably, Aspen Skiing Co. v. Aspen Highlands Skiing is a 1985 decision recognizing an exception to the general rule that firms may deal with whom they want; that exception was limited, though not expressly overturned, by Trinko in 2004. The States appealed to the D.C. Circuit on several grounds, including by relying on Aspen Skiing and advocating for a broader view of refusals to deal than current jurisprudence dictates.

ICLE’s brief addresses whether the District Court was correct to dismiss the States’ allegations that Facebook’s Platform policies violated Section 2 of the Sherman Act in light of the voluminous body of precedent and scholarship concerning refusals to deal. ICLE’s brief argues that Judge Boasberg’s opinion is consistent with economic and legal principles, allowing firms to choose with whom they deal. Furthermore, the States’ allegations did not make out a claim under Aspen Skiing, which sets forth extremely narrow circumstances that may constitute an improper refusal to deal.  Finally, ICLE takes issue with the States’ attempt to create an amorphous legal standard for refusals to deal or otherwise shoehorn their allegations into a “conditional dealing” framework.

Economic Actors Should Be Able to Choose Their Business Partners

ICLE’s basic premise is that firms in a free-market system should be able to choose their business partners. Forcing firms to enter into certain business relationships can have the effect of stifling innovation, because the firm getting the benefit of the forced dealing then lacks incentive to create its own inputs. On the other side of the forced dealing, the owner would have reduced incentives to continue to innovate, invest, or create intellectual property. Forced dealing, therefore, has an adverse effect on the fundamental nature of competition. As the Supreme Court stated in Trinko, this compelled sharing creates “tension with the underlying purpose of antitrust law, since it may lessen the incentive for the monopolist, the rival, or both to invest in those economically beneficial facilities.”

Courts Are Ill-Equipped to Regulate the Kind of Forced Sharing Advocated by the States

ICLE also notes the inherent difficulty of a court’s assessing forced access and the substantial risk of error, which could itself harm competition. This risk, ICLE notes, is not merely theoretical: forced sharing would require the court to scrutinize intricate details of a dynamic industry and determine which decisions are lawful and which are not. Take the facts of New York v. Facebook: more than 10 million apps and websites had access to Platform during the relevant period, and the States took issue with only seven instances in which Facebook had allegedly improperly prevented access to Platform. Assessing whether conduct would create efficiency in one circumstance versus another is challenging at best and always risky. As Frank Easterbrook wrote: “Anyone who thinks that judges would be good at detecting the few situations in which cooperation would do more good than harm has not studied the history of antitrust.”

Even assuming a court has rightly identified a potentially anticompetitive refusal to deal, it would then be put to the task of remedying it. But imposing a remedy, and in effect assuming the role of a regulator, is similarly complicated. This is particularly true in dynamic, quickly evolving industries, such as social media. This concern is highlighted by the broad injunction the States seek in this case: to “enjoin[] and restrain [Facebook] from continuing to engage in any anticompetitive conduct and from adopting in the future any practice, plan, program, or device having a similar purpose or effect to the anticompetitive actions set forth above.”  Such a remedy would impose conditions on Facebook’s dealings with competitors for years to come—regardless of how the industry evolves.

Courts Should Not Expand Refusal-to-Deal Analysis Beyond the Narrow Circumstances of Aspen Skiing

In light of the principles above, the Supreme Court, as stated in Trinko, “ha[s] been very cautious in recognizing [refusal-to-deal] exceptions, because of the uncertain virtue of forced sharing and the difficulty of identifying and remedying anticompetitive conduct by a single firm.” Various scholars (e.g., Carlton, Meese, Lopatka, Epstein) have analyzed Aspen Skiing consistently with Trinko as, at most, “at or near the boundary of § 2 liability.”

So is a refusal-to-deal claim ever viable? ICLE argues that refusal-to-deal claims have been rare (rightly so) and, at most, should go forward only under the circumstances delineated in Aspen Skiing. ICLE sets forth the framework adopted by the U.S. Court of Appeals for the 10th Circuit in Novell, which makes clear that “the monopolist’s conduct must be irrational but for its anticompetitive effect.”

  • First, “there must be a preexisting voluntary and presumably profitable course of dealing between the monopolist and rival.”
  • Second, “the monopolist’s discontinuation of the preexisting course of dealing must suggest a willingness to forsake short-term profits to achieve an anti-competitive end.”
  • Finally, even if these two factors are present, the court recognized that “firms routinely sacrifice short-term profits for lots of legitimate reasons that enhance consumer welfare.”

The States seek to broaden Aspen Skiing in order to cast Facebook’s Platform policies in a sinister light, but the facts do not fit. The States do not plead an about-face with respect to Facebook’s Platform policies; the States do not allege that Facebook’s changes to its policies were irrational (particularly in light of the dynamic industry in which Facebook operates); and the States do not allege that Facebook engaged in less efficient behavior with the goal of hurting rivals. Indeed, Facebook changed its policies to retain users—which is essential to its business model (and therefore, rational).

The States try to evade these requirements by arguing for a looser refusal-to-deal standard (and by trying to shoehorn the conduct as “conditional dealing”)—but as ICLE explains, allowing such a claim to go forward would fly in the face of the economic and policy goals upheld by the current jurisprudence. 

Conclusion

The District Court was correct to dismiss the States’ allegations concerning Facebook’s Platform policies. Allowing a claim against Facebook to progress under the circumstances alleged in the States’ complaint would violate the principle that a firm, even one that is a monopolist, should not be held liable for refusing to deal with a certain business partner. The District Court’s decision is in line with key economic principles concerning refusals to deal and consistent with the Supreme Court’s decision in Aspen Skiing, which is properly read to severely limit the circumstances giving rise to a refusal-to-deal claim; a broader reading would risk adverse effects such as reduced incentives to innovate.

Amici Scholars Signing on to the Brief

(The ICLE brief presents the views of the individual signers listed below. Institutions are listed for identification purposes only.)

Henry Butler
Henry G. Manne Chair in Law and Economics and Executive Director of the Law & Economics Center, Scalia Law School
Daniel Lyons
Professor of Law, Boston College Law School
Richard A. Epstein
Laurence A. Tisch Professor of Law at NYU School of Law, the Peter and Kirsten Bedford Senior Fellow at the Hoover Institution, and the James Parker Hall Distinguished Service Professor Emeritus at the University of Chicago
Geoffrey A. Manne
President and Founder, International Center for Law & Economics; Distinguished Fellow, Northwestern University Center on Law, Business & Economics
Thomas Hazlett
H.H. Macaulay Endowed Professor of Economics and Director of the Information Economy Project, Clemson University
Alan J. Meese
Ball Professor of Law, Co-Director, Center for the Study of Law and Markets, William & Mary Law School
Justin (Gus) Hurwitz
Professor of Law and Menard Director of the Nebraska Governance and Technology Center, University of Nebraska College of Law
Paul H. Rubin
Samuel Candler Dobbs Professor of Economics Emeritus, Emory University
Jonathan Klick
Charles A. Heimbold, Jr. Professor of Law, University of Pennsylvania Carey School of Law; Erasmus Chair of Empirical Legal Studies, Erasmus University Rotterdam
Michael Sykuta
Associate Professor of Economics and Executive Director of Financial Research Institute, University of Missouri Division of Applied Social Sciences
Thomas A. Lambert
Wall Chair in Corporate Law and Governance, University of Missouri Law School
John Yun
Associate Professor of Law and Deputy Executive Director of the Global Antitrust Institute, Scalia Law School

The Federal Trade Commission (FTC) is at it again, threatening new sorts of regulatory interventions in the legitimate welfare-enhancing activities of businesses—this time in the realm of data collection by firms.

Discussion

In an April 11 speech at the International Association of Privacy Professionals’ Global Privacy Summit, FTC Chair Lina Khan set forth a litany of harms associated with companies’ data-acquisition practices. Certainly, fraud and deception with respect to the use of personal data have the potential to cause serious harm to consumers and are the legitimate target of FTC enforcement activity. At the same time, the FTC should take into account the substantial benefits that private-sector data collection may bestow on the public (see, for example, here, here, and here) in order to formulate economically beneficial law-enforcement protocols.

Chair Khan’s speech, however, paid virtually no attention to the beneficial side of data collection. To the contrary, after highlighting specific harmful data practices, Khan then waxed philosophical in condemning private data-collection activities (citations omitted):

Beyond these specific harms, the data practices of today’s surveillance economy can create and exacerbate deep asymmetries of information—exacerbating, in turn, imbalances of power. As numerous scholars have noted, businesses’ access to and control over such vast troves of granular data on individuals can give those firms enormous power to predict, influence, and control human behavior. In other words, what’s at stake with these business practices is not just one’s subjective preference for privacy, but—over the long term—one’s freedom, dignity, and equal participation in our economy and society.

Even if one accepts that private-sector data practices have such transcendent social implications, are the FTC’s philosopher kings ideally equipped to devise optimal policies that promote “freedom, dignity, and equal participation in our economy and society”? Color me skeptical. (Indeed, one could argue that the true transcendent threat to society from fast-growing data collection comes not from businesses but, rather, from the government, which unlike private businesses holds a legal monopoly on the right to use or authorize the use of force. This question is, however, beyond the scope of my comments.)

Chair Khan turned from these highfalutin musings to a more prosaic and practical description of her plans for “adapting the commission’s existing authority to address and rectify unlawful data practices.” She stressed “focusing on firms whose business practices cause widespread harm”; “assessing data practices through both a consumer protection and competition lens”; and “designing effective remedies that are informed by the business strategies that specific markets favor and reward.” These suggestions are not inherently problematic, but they need to be fleshed out in far greater detail. For example, there are potentially major consumer-protection risks posed by applying antitrust to “big data” problems (see here, here and here, for example).

Khan ended her presentation by inviting us “to consider how we might need to update our [FTC] approach further yet.” Her suggested “updates” raise significant problems.

First, she stated that the FTC “is considering initiating a rulemaking to address commercial surveillance and lax data security practices.” Even assuming such a rulemaking could withstand legal scrutiny (its best shot would be to frame it as a consumer protection rule, not a competition rule), it would pose additional serious concerns. One-size-fits-all rules prevent consideration of possible economic efficiencies associated with specific data-security and surveillance practices. Thus, some beneficial practices would be wrongly condemned. Such rules would also likely deter firms from experimenting and innovating in ways that could have led to improved practices. In both cases, consumer welfare would suffer.

Second, Khan asserted “the need to reassess the frameworks we presently use to assess unlawful conduct. Specifically, I am concerned that present market realities may render the ‘notice and consent’ paradigm outdated and insufficient.” Accordingly, she recommended that “we should approach data privacy and security protections by considering substantive limits rather than just procedural protections, which tend to create process requirements while sidestepping more fundamental questions about whether certain types of data collection should be permitted in the first place.”  

In support of this startling observation, Khan approvingly cites Daniel Solove’s article “The Myth of the Privacy Paradox,” which claims that “[t]he fact that people trade their privacy for products or services does not mean that these transactions are desirable in their current form. … [T]he mere fact that people make a tradeoff doesn’t mean that the tradeoff is fair, legitimate, or justifiable.”

Khan provides no economic justification for a data-collection ban. The implication that the FTC would consider banning certain types of otherwise legal data collection is at odds with free-market principles and would have disastrous economic consequences for both consumers and producers. It strikes at voluntary exchange, a basic principle of market economics that benefits transactors and enables markets to thrive.

Businesses monetize information provided by consumers to offer a host of goods and services that satisfy consumer interests. This is particularly true in the case of digital platforms. Preventing the voluntary transfer of data from consumers to producers based on arbitrary government concerns about “fairness” (for example) would strike at firms’ ability to monetize data and thereby generate additional consumer and producer surplus. The arbitrary destruction of such potential economic value by government fiat would be the essence of “unfairness.”

In particular, the consumer welfare benefits generated by digital platforms, which depend critically on large volumes of data, are enormous. As Erik Brynjolfsson of the Massachusetts Institute of Technology and his student Avinash Collis explained in a December 2019 article in the Harvard Business Review, such benefits far exceed those measured by conventional GDP. Online choice experiments based on digital-survey techniques enabled the authors “to estimate the consumer surplus for a great variety of goods, including free ones that are missing from GDP statistics.” Brynjolfsson and Collis found, for example, that U.S. consumers had derived $231 billion in value from Facebook since its inception in 2004. Furthermore:

[O]ur estimates indicate that the [Facebook] platform generates a median consumer surplus of about $500 per person annually in the United States, and at least that much for users in Europe. In contrast, average revenue per user is only around $140 per year in the United States and $44 per year in Europe. In other words, Facebook operates one of the most advanced advertising platforms, yet its ad revenues represent only a fraction of the total consumer surplus it generates. This reinforces research by NYU Stern School’s Michael Spence and Stanford’s Bruce Owen that shows that advertising revenues and consumer surplus are not always correlated: People can get a lot of value from content that doesn’t generate much advertising, such as Wikipedia or email. So it is a mistake to use advertising revenues as a substitute for consumer surplus…
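
Taking the quoted estimates at face value, the division of value between the platform and its users is simple arithmetic (a back-of-the-envelope illustration using only the figures above):

\[
\frac{\$500}{\$500 + \$140} \approx 78\%
\]

On these survey-based estimates, nearly four-fifths of the total per-user value created annually in the United States accrues to users as consumer surplus, rather than being captured by the platform as advertising revenue.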

In a similar vein, the authors found that various user-fee-based digital services yield consumer surplus five to ten times what users paid to access them. What’s more:

The effect of consumer surplus is even stronger when you look at categories of digital goods. We conducted studies to measure it for the most popular categories in the United States and found that search is the most valued category (with a median valuation of more than $17,000 a year), followed by email and maps. These categories do not have comparable off-line substitutes, and many people consider them essential for work and everyday life. When we asked participants how much they would need to be compensated to give up an entire category of digital goods, we found that the amount was higher than the sum of the value of individual applications in it. That makes sense, since goods within a category are often substitutes for one another.

In sum, the authors found:

To put the economic contributions of digital goods in perspective, we find that including the consumer surplus value of just one digital good—Facebook—in GDP would have added an average of 0.11 percentage points a year to U.S. GDP growth from 2004 through 2017. During this period, GDP rose by an average of 1.83% a year. Clearly, GDP has been substantially underestimated over that time.
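
Here, too, the implied understatement can be read directly off the quoted figures (illustrative arithmetic only):

\[
\frac{0.11}{1.83} \approx 6\%
\]

That is, counting the consumer surplus of a single free digital good would have raised measured annual GDP growth by roughly 6 percent of its reported value over 2004–2017.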

Although far from definitive, this research illustrates how a digital-services model, based on voluntary data transfer and accumulation, has brought about enormous economic welfare benefits. Accordingly, FTC efforts to tamper with such a success story on abstruse philosophical grounds not only would be unwarranted, but would be economically disastrous. 

Conclusion

The FTC clearly plans to focus on “abuses” in private-sector data collection and usage. In so doing, it should home in on those practices that impose clear harm to consumers, particularly in the areas of deception and fraud. It is not, however, the FTC’s role to restructure data-collection activities by regulatory fiat, through far-reaching inflexible rules and, worst of all, through efforts to ban collection of “inappropriate” information.

Such extreme actions would predictably impose substantial harm on consumers and producers. They would also slow innovation in platform practices and retard efficient welfare-generating business initiatives tied to the availability of broad collections of data. Eventually, the courts would likely strike down most harmful FTC data-related enforcement and regulatory initiatives, but substantial welfare losses (including harm due to a chilling effect on efficient business conduct) would be borne by firms and consumers in the interim. In short, the enforcement “updates” Khan recommends would reduce economic welfare—the opposite of what (one assumes) is intended.

For these reasons, the FTC should reject the chair’s overly expansive “updates.” It should instead make use of technologists, economists, and empirical research to unearth and combat economically harmful data practices. In doing so, the commission should pay attention to cost-benefit analysis and error-cost minimization. One can only hope that Khan’s fellow commissioners promptly endorse this eminently reasonable approach.   

A raft of progressive scholars in recent years have argued that antitrust law remains blind to the emergence of so-called “attention markets,” in which firms compete by converting user attention into advertising revenue. This blindness, the scholars argue, has caused antitrust enforcers to clear harmful mergers in these industries.

It certainly appears the argument is gaining increased attention, for lack of a better word, with sympathetic policymakers. In a recent call for comments regarding their joint merger guidelines, the U.S. Justice Department (DOJ) and Federal Trade Commission (FTC) ask:

How should the guidelines analyze mergers involving competition for attention? How should relevant markets be defined? What types of harms should the guidelines consider?

Unfortunately, the recent scholarly inquiries into attention markets remain inadequate for policymaking purposes. For example, while many progressives focus specifically on antitrust authorities’ decisions to clear Facebook’s 2012 acquisition of Instagram and 2014 purchase of WhatsApp, they largely tend to ignore the competitive constraints Facebook now faces from TikTok (here and here).

When firms that compete for attention seek to merge, authorities need to infer whether the deal will lead to an “attention monopoly” (if the merging firms are the only, or primary, market competitors for some consumers’ attention) or whether other “attention goods” sufficiently constrain the merged entity. Put another way, the challenge is not just in determining which firms compete for attention, but in evaluating how strongly each constrains the others.

As this piece explains, recent attention-market scholarship fails to offer objective, let alone quantifiable, criteria that might enable authorities to identify firms that are unique competitors for user attention. These limitations should counsel policymakers to proceed with increased rigor when they analyze anticompetitive effects.

The Shaky Foundations of Attention Markets Theory

Advocates for more vigorous antitrust intervention have raised (at least) three normative arguments that pertain to attention markets and merger enforcement.

  • First, because they compete for attention, firms may be more competitively related than they seem at first sight. It is sometimes said that these firms are nascent competitors.
  • Second, the scholars argue that all firms competing for attention should not automatically be included in the same relevant market.
  • Finally, scholars argue that enforcers should adopt policy tools to measure market power in these attention markets—e.g., by applying a SSNIC test (“small but significant non-transitory increase in cost”), rather than a SSNIP test (“small but significant non-transitory increase in price”).

There are some contradictions among these three claims. On the one hand, proponents advocate adopting a broad notion of competition for attention, which would ensure that firms are seen as competitively related and thus boost the prospects that antitrust interventions targeting them will be successful. When the shoe is on the other foot, however, proponents fail to follow the logic they have sketched out to its natural conclusion; that is to say, they underplay the competitive constraints that are necessarily imposed by wider-ranging targets for consumer attention. In other words, progressive scholars are keen to ensure the concept is not mobilized to draw broader market definitions than is currently the case:

This “massive market” narrative rests on an obvious fallacy. Proponents argue that the relevant market includes “all substitutable sources of attention depletion,” so the market is “enormous.”

Faced with this apparent contradiction, scholars retort that the circle can be squared by deploying new analytical tools that measure competition for attention, such as the so-called SSNIC test. But do these tools actually resolve the contradiction? It would appear, instead, that they merely enable enforcers to selectively mobilize the attention-market concept in ways that fit their preferences. Consider the following description of the SSNIC test, by John Newman:

But if the focus is on the zero-price barter exchange, the SSNIP test requires modification. In such cases, the “SSNIC” (Small but Significant and Non-transitory Increase in Cost) test can replace the SSNIP. Instead of asking whether a hypothetical monopolist would increase prices, the analyst should ask whether the monopolist would likely increase attention costs. The relevant cost increases can take the form of more time or space being devoted to advertisements, or the imposition of more distracting advertisements. Alternatively, one might ask whether the hypothetical monopolist would likely impose an “SSNDQ” (Small but Significant and Non-Transitory Decrease in Quality). The latter framing should generally be avoided, however, for reasons discussed below in the context of anticompetitive effects. Regardless of framing, however, the core question is what would happen if the ratio between desired content to advertising load were to shift.

Tim Wu makes roughly the same argument:

The A-SSNIP would posit a hypothetical monopolist who adds a 5-second advertisement before the mobile map, and leaves it there for a year. If consumers accepted the delay, instead of switching to streaming video or other attentional options, then the market is correctly defined and calculation of market shares would be in order.

The key problem is this: consumer switching among platforms is consistent both with competition and with monopoly power. In fact, consumers are more likely to switch to other goods when they are faced with a monopoly. Perhaps more importantly, consumers can and do switch to a whole range of idiosyncratic goods. Absent some quantifiable metric, it is simply impossible to tell which of these alternatives are significant competitors.

None of this is new, of course. Antitrust scholars have spent decades wrestling with similar issues in connection with the price-related SSNIP test. The upshot of those debates is that the SSNIP test does not measure whether price increases cause users to switch. Instead, it examines whether firms can profitably raise prices above the competitive baseline. Properly understood, this nuance renders proposed SSNIC and SSNDQ tests (“small but significant non-transitory decrease in quality”) unworkable.
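
To see what “profitably raise prices above the competitive baseline” means in practice, consider the critical-loss arithmetic that typically accompanies a SSNIP analysis (the margin figure below is an illustrative assumption, not drawn from any particular case):

\[
\text{Critical Loss} = \frac{X}{X + M}
\]

where X is the hypothetical price increase and M is the contribution margin, both expressed as percentages. With the standard 5% SSNIP and an assumed 40% margin, the critical loss is 5/(5+40), or roughly 11%: the price increase is profitable unless more than about 11% of sales would be lost. The test thus turns on profitability relative to a baseline, not on whether some users happen to switch.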

First and foremost, proponents wrongly presume to know how firms would choose to exercise their market power, rendering the resulting tests unfit for policymaking purposes. This mistake largely stems from the conflation of price levels and price structures in two-sided markets. In a two-sided market, the price level refers to the cumulative price charged to both sides of a platform. Conversely, the price structure refers to the allocation of prices among users on both sides of a platform (i.e., how much users on each side contribute to the costs of the platform). This is important because, as Jean-Charles Rochet and Jean Tirole show in their Nobel-winning work, changes to the price level and changes to the price structure each affect economic output in two-sided markets.
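
A stylized notation (assumed here for exposition, not drawn from Rochet and Tirole’s papers) makes the distinction concrete:

\[
\underbrace{P = p_U + p_A}_{\text{price level}} \qquad \text{versus} \qquad \underbrace{(p_U,\; p_A)}_{\text{price structure}}
\]

A platform charging users p_U = 0 and advertisers p_A = 10 has the same price level as one charging 5 to each side, but the two structures will generally induce different participation, and thus different output, on each side of the platform.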

This has powerful ramifications for antitrust policy in attention markets. To be analytically useful, SSNIC and SSNDQ tests would have to alter the price level while holding the price structure constant. This is the opposite of what attention-market theory advocates are calling for. Indeed, increasing ad loads or decreasing the quality of services provided by a platform, while holding ad prices constant, evidently alters the platform’s chosen price structure.

This matters. Even if the proposed tests were properly implemented (which would be difficult: it is unclear what a 5% quality degradation would look like), the tests would likely lead to false negatives, as they force firms to depart from their chosen (and, thus, presumably profit-maximizing) price structure/price level combinations.

Consider the following illustration: to a first approximation, increasing the quantity of ads served on YouTube would presumably decrease Google’s revenues, as doing so would simultaneously increase output in the ad market and thus depress ad prices (note that the test becomes even more absurd if ad revenues are held constant). In short, scholars fail to recognize that the consumer side of these markets is intrinsically related to the ad side. Each side affects the other in ways that prevent policymakers from using single-sided ad-load increases or quality decreases as an independent variable.
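
The point can be illustrated with textbook monopoly arithmetic (a stylized sketch that deliberately abstracts from the user side):

\[
R(q) = p(q)\,q \quad \Longrightarrow \quad \frac{dR}{dq} = p(q)\left(1 - \frac{1}{\varepsilon}\right)
\]

where q is the quantity of ads served, p(q) is the market-clearing ad price, and ε is the (absolute) price elasticity of advertiser demand. If advertiser demand is inelastic (ε < 1), serving more ads depresses ad prices enough that total ad revenue falls. Ad load is thus not an independent lever; moving it changes outcomes on the other side of the platform at the same time.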

This leads to a second, more fundamental, flaw. To be analytically useful, these increased ad loads and quality deteriorations would have to be applied from the competitive baseline. Unfortunately, it is not obvious what this baseline looks like in two-sided markets.

Economic theory tells us that, in regular markets, goods are sold at marginal cost under perfect competition. However, there is no such shortcut in two-sided markets. As David Evans and Richard Schmalensee aptly summarize:

An increase in marginal cost on one side does not necessarily result in an increase in price on that side relative to price on the other. More generally, the relationship between price and cost is complex, and the simple formulas that have been derived for single-sided markets do not apply.

In other words, while economic theory suggests perfect competition among multi-sided platforms should result in zero economic profits, it does not say what the allocation of prices will look like in this scenario. There is thus no clearly defined competitive baseline upon which to apply increased ad loads or quality degradations. And this makes the SSNIC and SSNDQ tests unsuitable.
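
A stylized zero-profit condition (notation assumed here for illustration) shows why the baseline is indeterminate:

\[
p_U q_U + p_A q_A = C(q_U, q_A)
\]

Any number of price pairs (p_U, p_A) can satisfy this condition, including pairs in which one side is subsidized (e.g., p_U ≤ 0), and theory alone does not identify which pair is the “competitive” one. There is accordingly no obvious starting point from which to measure a “small but significant” increase in ad load or decrease in quality.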

In short, the theoretical foundations necessary to apply the equivalent of a SSNIP test on the “free” side of two-sided platforms are largely absent (or exceedingly hard to apply in practice). Calls to implement SSNIC and SSNDQ tests thus greatly overestimate the current state of the art, as well as decision-makers’ ability to solve intractable economic conundrums. The upshot is that, while proposals to apply the SSNIP test to attention markets may have the trappings of economic rigor, the resemblance is superficial. As things stand, these tests fail to ascertain whether given firms are in competition, and in what market.

The Bait and Switch: Qualitative Indicia

These problems with the new quantitative metrics likely explain why proponents of tougher enforcement in attention markets often fall back upon qualitative indicia to resolve market-definition issues. As John Newman writes:

Courts, including the U.S. Supreme Court, have long employed practical indicia as a flexible, workable means of defining relevant markets. This approach considers real-world factors: products’ functional characteristics, the presence or absence of substantial price differences between products, whether companies strategically consider and respond to each other’s competitive conduct, and evidence that industry participants or analysts themselves identify a grouping of activity as a discrete sphere of competition. …The SSNIC test may sometimes be massaged enough to work in attention markets, but practical indicia will often—perhaps usually—be the preferable method.

Unfortunately, far from resolving the problems associated with measuring market power in digital markets (and of defining relevant markets in antitrust proceedings), this proposed solution would merely focus investigations on subjective and discretionary factors.

This can be easily understood by looking at the FTC’s Facebook complaint regarding its purchases of WhatsApp and Instagram. The complaint argues that Facebook—a “social networking service,” in the eyes of the FTC—was not interchangeable with either mobile-messaging services or online-video services. To support this conclusion, it cites a series of superficial differences. For instance, the FTC argues that online-video services “are not used primarily to communicate with friends, family, and other personal connections,” while mobile-messaging services “do not feature a shared social space in which users can interact, and do not rely upon a social graph that supports users in making connections and sharing experiences with friends and family.”

This is a poor way to delineate relevant markets. It wrongly portrays competitive constraints as a binary question, rather than a matter of degree. Pointing to the functional differences that exist among rival services mostly fails to resolve this question of degree. It also likely explains why advocates of tougher enforcement have often decried the use of qualitative indicia when the shoe is on the other foot—e.g., when authorities concluded that Facebook did not, in fact, compete with Instagram because their services were functionally different.

A second, and related, problem with the use of qualitative indicia is that they are, almost by definition, arbitrary. Take two services that may or may not be competitors, such as Instagram and TikTok. The two share some similarities, as well as many differences. For instance, while both services enable users to share and engage with video content, they differ significantly in the way this content is displayed. Unfortunately, absent quantitative evidence, it is simply impossible to tell whether, and to what extent, the similarities outweigh the differences. 

There is significant risk that qualitative indicia may lead to arbitrary enforcement, where markets are artificially narrowed by pointing to superficial differences among firms, and where competitive constraints are overemphasized by pointing to consumer switching. 

The Way Forward

The difficulties discussed above should serve as a good reminder that market definition is but a means to an end.

As William Landes, Richard Posner, and Louis Kaplow have all observed (here and here), market definition is merely a proxy for market power, which in turn enables policymakers to infer whether consumer harm (the underlying question to be answered) is likely in a given case.

Given the difficulties inherent in properly defining markets, policymakers should redouble their efforts to precisely measure both potential barriers to entry (the obstacles that may lead to market power) and anticompetitive effects (the potentially undesirable effects of market power), under a case-by-case analysis that looks at both sides of a platform.

Unfortunately, this is not how the FTC has proceeded in recent cases. The FTC’s Facebook complaint, to cite but one example, merely assumes the existence of network effects (a potential barrier to entry) with no effort to quantify their magnitude. Likewise, the agency’s assessment of consumer harm is just two pages long and includes superficial conclusions that appear plucked from thin air:

The benefits to users of additional competition include some or all of the following: additional innovation … ; quality improvements … ; and/or consumer choice … . In addition, by monopolizing the U.S. market for personal social networking, Facebook also harmed, and continues to harm, competition for the sale of advertising in the United States.

Not one of these assertions is based on anything that could remotely be construed as empirical or even anecdotal evidence. Instead, the FTC’s claims are presented as self-evident. Given the difficulties surrounding market definition in digital markets, this superficial analysis of anticompetitive harm is simply untenable.

In short, discussions around attention markets emphasize the important role of case-by-case analysis underpinned by the consumer welfare standard. Indeed, the fact that some of antitrust enforcement’s usual benchmarks are unreliable in digital markets reinforces the conclusion that an empirically grounded analysis of barriers to entry and actual anticompetitive effects must remain the cornerstones of sound antitrust policy. Or, put differently, uncertainty surrounding certain aspects of a case is no excuse for arbitrary speculation. Instead, authorities must meet such uncertainty with an even more vigilant commitment to thoroughness.

After years of debate and negotiations, European lawmakers have agreed upon what will most likely be the final iteration of the Digital Markets Act (“DMA”), following the March 24 final round of “trilogue” talks.

For the uninitiated, the DMA is one in a string of legislative proposals around the globe intended to “rein in” tech companies like Google, Amazon, Facebook, and Apple through mandated interoperability requirements and other regulatory tools, such as bans on self-preferencing. Other important bills from across the pond include the American Innovation and Choice Online Act, the ACCESS Act, and the Open App Markets Act.

In many ways, the final version of the DMA represents the worst possible outcome, given the items that were still up for debate. The Commission caved to some of the Parliament’s more excessive demands—such as sweeping interoperability provisions that would extend not only to “ancillary” services, such as payments, but also to messaging services’ basic functionalities. Other important developments include the addition of voice assistants and web browsers to the list of Core Platform Services (“CPS”), and symbolically higher “designation” thresholds that further ensure the act will apply overwhelmingly to just U.S. companies. On a brighter note, lawmakers agreed that companies could rebut their designation as “gatekeepers,” though it remains to be seen how feasible that will be in practice. 

We offer here an overview of the key provisions included in the final version of the DMA and a reminder of the shaky foundations it rests on.

Interoperability

Among the most important of the DMA’s new rules concerns mandatory interoperability among online platforms. In a nutshell, digital platforms that are designated as “gatekeepers” will be forced to make their services “interoperable” (i.e., compatible) with those of rivals. It is argued that this will make online markets more contestable and thus boost consumer choice. But as ICLE scholars have been explaining for some time, this is unlikely to be the case (here, here, and here). Interoperability is not the panacea EU legislators claim it to be. As former ICLE Director of Competition Policy Sam Bowman has written, there are many things that could be interoperable, but aren’t. The reason is that interoperability comes with costs as well as benefits. For instance, it may be worth letting different earbuds have different designs because, while it means we sacrifice easy interoperability, we gain the ability for better designs to be brought to the market and for consumers to be able to choose among them. Economists Michael L. Katz and Carl Shapiro concur:

Although compatibility has obvious benefits, obtaining and maintaining compatibility often involves a sacrifice in terms of product variety or restraints on innovation.

There are other potential downsides to interoperability. For instance, a given set of interoperable standards might be too costly to implement and/or maintain; it might preclude certain pricing models that increase output; or it might compromise some element of a product or service that offers benefits specifically because it is not interoperable (e.g., security features). Consumers may also genuinely prefer closed (i.e., non-interoperable) platforms. Indeed: “open” and “closed” are not synonyms for “good” and “bad.” Instead, as Boston University’s Andrei Hagiu has shown, there are fundamental welfare tradeoffs at play that belie simplistic characterizations of one being inherently superior to the other.

Further, as Sam Bowman observed, narrowing choice through a more curated experience can also be valuable for users, as it frees them from having to research every possible option every time they buy or use some product (if you’re unconvinced, try turning off your spam filter for a couple of days). Instead, the relevant choice consumers exercise might be in choosing among brands. In sum, where interoperability is a desirable feature, consumer preferences will tend to push for more of it. However, it is fundamentally misguided to treat mandatory interoperability as a cure-all elixir or a “super tool” of “digital platform governance.” In a free-market economy, it is not, nor should it be, up to courts and legislators to substitute their own judgment, based on diffuse notions of “fairness,” for businesses’ product-design decisions and consumers’ revealed preferences. After all, if we could entrust such decisions to regulators, we wouldn’t need markets or competition in the first place.

Of course, it was always clear that the DMA would contemplate some degree of mandatory interoperability; indeed, this was arguably the new law’s biggest selling point. What was up in the air until now was the scope of such obligations. The Commission had initially pushed for a comparatively restrained approach, requiring interoperability “only” in ancillary services, such as payment systems (“vertical interoperability”). By contrast, the European Parliament called for more expansive requirements that would also encompass social-media platforms and other messaging services (“horizontal interoperability”).

The problem with such far-reaching interoperability requirements is that they are fundamentally out of step with current privacy and security capabilities. As ICLE Senior Scholar Mikolaj Barczentewicz has repeatedly argued, the Parliament’s insistence on going significantly beyond the original DMA proposal and mandating interoperability of messaging services is overly broad and irresponsible. Indeed, as Mikolaj notes, the “likely result is less security and privacy, more expenses, and less innovation.” The DMA’s defenders would retort that the law allows gatekeepers to do what is “strictly necessary” (Council) or “indispensable” (Parliament) to protect safety and privacy (it is not yet clear which wording the final version has adopted). Either way, however, the standard may be too high, and companies may very well offer lower security to avoid liability for adopting measures that would be judged by the Commission and the courts as going beyond what is “strictly necessary” or “indispensable.” These safeguards will inevitably be all the more indeterminate (and thus ineffectual) if weighed against other vague concepts at the heart of the DMA, such as “fairness.”

Gatekeeper Thresholds and the Designation Process

Another important issue in the DMA’s construction concerns the designation of what the law deems “gatekeepers.” Indeed, the DMA will only apply to such market gatekeepers—so-designated because they meet certain requirements and thresholds. Unfortunately, the factors that the European Commission will consider in conducting this designation process—revenues, market capitalization, and user base—are poor proxies for firms’ actual competitive position. This is not surprising, however, as the procedure is mainly designed to ensure certain high-profile (and overwhelmingly American) platforms are caught by the DMA.

From this perspective, the last-minute increase in revenue and market-capitalization thresholds—from 6.5 billion euros to 7.5 billion euros, and from 65 billion euros to 75 billion euros, respectively—won’t change the scope of the companies covered by the DMA very much. But it will serve to confirm what we already suspected: that the DMA’s thresholds are mostly tailored to catch certain U.S. companies, deliberately leaving out EU and possibly Chinese competitors (see here and here). Indeed, what would have made a difference here would have been lowering the thresholds, but this was never really on the table. Ultimately, tilting the European Union’s playing field against its top trading partner, in terms of exports and trade balance, is economically, politically, and strategically unwise.

As a consolation of sorts, it seems that the Commission managed to squeeze in a rebuttal mechanism for designated gatekeepers. Imposing far-reaching obligations on companies with no (or very limited) recourse to escape the onerous requirements of the DMA would be contrary to the basic principles of procedural fairness. Still, it remains to be seen how this mechanism will be articulated and whether it will actually be viable in practice.

Double (and Triple?) Jeopardy

Two recent judgments from the European Court of Justice (ECJ)—Nordzucker and bpost—are likely to underscore the unintended effects of cumulative application of both the DMA and EU and/or national competition laws. The bpost decision is particularly relevant, because it lays down the conditions under which cases that evaluate the same persons and the same facts in two separate fields of law (sectoral regulation and competition law) do not violate the principle of ne bis in idem, also known as “double jeopardy.” As paragraph 51 of the judgment establishes:

  1. There must be precise rules to determine which acts or omissions are liable to be subject to duplicate proceedings;
  2. The two sets of proceedings must have been conducted in a sufficiently coordinated manner and within a similar timeframe; and
  3. The overall penalties must match the seriousness of the offense. 

It is doubtful whether the DMA fulfills these conditions. This is especially unfortunate considering the overlapping rules, features, and goals among the DMA and national-level competition laws, which are bound to lead to parallel procedures. In a word: expect double and triple jeopardy to be hotly litigated in the aftermath of the DMA.

Of course, other relevant questions have been settled which, for reasons of scope, we will have to leave for another time. These include the level of fines (up to 10% of worldwide revenue, or 20% in the case of repeat offenses); the definition and consequences of systemic noncompliance (it seems that the Parliament’s draconian push for a general ban on acquisitions in case of systemic noncompliance has been dropped); and the addition of more core platform services (web browsers and voice assistants).

The DMA’s Dubious Underlying Assumptions

The fuss and exhilaration surrounding the impending adoption of the EU’s most ambitious competition-related proposal in decades should not obscure some of the more dubious assumptions that underpin it. Three observations are in order:

  1. It is still unclear that intervention in digital markets is necessary, let alone urgent.
  2. Even if it were clear, there is scant evidence to suggest that tried and tested ex post instruments, such as those envisioned in EU competition law, are not up to the task.
  3. Even if the prior two points had been established beyond any reasonable doubt (which they haven’t), it is still far from clear that DMA-style ex ante regulation is the right tool to address potential harms to competition and to consumers that arise in digital markets.

It is unclear that intervention is necessary

Despite a mounting moral panic around and zealous political crusading against Big Tech (an epithet meant to conjure antipathy and distrust), it is still unclear that intervention in digital markets is necessary. Much of the behavior the DMA assumes to be anti-competitive has plausible pro-competitive justifications. Self-preferencing, for instance, is a normal part of how platforms operate, both to improve the value of their core products and to earn returns to reinvest in their development. As ICLE’s Dirk Auer points out, since platforms’ incentives are to maximize the value of their entire product ecosystem, those that preference their own products frequently end up increasing the total market’s value by growing the share of users of a particular product (the example of Facebook’s integration of Instagram is a case in point). Thus, while self-preferencing may, in some cases, be harmful, a blanket presumption of harm is thoroughly unwarranted.

Similarly, the argument that switching costs and data-related increasing returns to scale (in fact, data generally entails diminishing returns) have led to consumer lock-in and thereby raised entry barriers has also been exaggerated to epic proportions (pun intended). As we have discussed previously, there are plenty of counterexamples where firms have easily overcome seemingly “insurmountable” barriers to entry, switching costs, and network effects to disrupt incumbents. 

To pick a recent case: how many of us had heard of Zoom before the pandemic? Where was TikTok three years ago? (see here for a multitude of other classic examples, including Yahoo and Myspace).

Can you really say, with a straight face, that switching costs between messaging apps are prohibitive? I’m not even that active and I use at least seven such apps on a daily basis: Facebook Messenger, WhatsApp, Instagram, Twitter, Viber, Telegram, and Slack (it took me all of three minutes to download and start using Slack—my newest addition). In fact, chances are that, like me, you have always multihomed nonchalantly and had never even considered that switching costs were impossibly high (or that they were a thing) until the idea that you were “locked in” by Big Tech was drilled into your head by politicians and other busybodies looking for trophies to adorn their walls.

What about the “unprecedented,” quasi-fascistic levels of economic concentration? First, measures of market concentration are sometimes anchored in flawed methodology and market definitions (see, e.g., Epic’s insistence that Apple is a monopolist in the market for operating systems, conveniently ignoring that competition occurs at the smartphone level, where Apple has a worldwide market share of 15%—see pages 45-46 here). But even if such measurements were accurate, high levels of concentration don’t necessarily mean that firms do not face strong competition. In fact, as Nicolas Petit has shown, tech companies compete vigorously against each other across markets.

But perhaps the DMA’s raison d’être rests less on market failure than on legal or enforcement failure? This, too, is misguided.

EU competition law is already up to the task

As Giuseppe Colangelo has argued persuasively (here and here), it is not at all clear that ex post competition regulation is insufficient to tackle anti-competitive behavior in the digital sector:

Ongoing antitrust investigations demonstrate that standard competition law still provides a flexible framework to scrutinize several practices described as new and peculiar to app stores. 

The recent Google Shopping decision, in which the Commission found that Google had abused its dominant position by preferencing its own online-shopping service in Google Search results, is a case in point (the decision was confirmed by the General Court and is now pending review before the European Court of Justice). The “self-preferencing” category has since been applied by other EU competition authorities. The Italian competition authority, for instance, fined Amazon 1 billion euros for preferencing its own distribution service, Fulfilled by Amazon, on the Amazon marketplace (i.e., Amazon.it). Thus, Article 102, which includes prohibitions on “applying dissimilar conditions to similar transactions,” appears sufficiently flexible to cover self-preferencing, as well as other potentially anti-competitive offenses relevant to digital markets (e.g., essential facilities).

For better or for worse, EU competition law has historically been sufficiently pliable to serve a range of goals and values. It has also allowed for experimentation and incorporated novel theories of harm and economic insights. Here, the advantage of competition law is that it allows for a more refined, individualized approach that can avoid some of the pitfalls of applying a one-size-fits-all model across all digital platforms. Those pitfalls include: harming consumers, jeopardizing the business models of some of the most successful and pro-consumer companies in existence, and ignoring the differences among platforms, such as between Google and Apple’s app stores. I turn to these issues next.

Ex ante regulation probably isn’t the right tool

Even if it were clear that intervention is necessary and that existing competition law was insufficient, it is not clear that the DMA is the right regulatory tool to address any potential harms to competition and consumers that may arise in digital markets. Here, legislators need to be wary of unintended consequences, trade-offs, and regulatory fallibility. For one, it is possible that the DMA will essentially consolidate the power of tech platforms, turning them into de facto public utilities. This will not foster competition, but rather will make smaller competitors systematically dependent on so-called gatekeepers. Indeed, why become the next Google if you can just free ride off of the current Google? Why download an emerging messaging app if you can already interact with its users through your current one? In a way, then, the DMA may become a self-fulfilling prophecy.

Moreover, turning closed or semi-closed platforms such as iOS into open platforms more akin to Android blurs the distinctions among products and dampens interbrand competition. It is a supreme paradox that interoperability and sideloading requirements purportedly give users more choice by taking away the option of choosing a “walled garden” model. As discussed above, overriding the revealed preferences of millions of users is neither pro-competitive nor pro-consumer (but it probably favors some competitors at the expense of those two things).

Nor are many of the other obligations contemplated in the DMA necessarily beneficial to consumers. Do users really not want to have default apps come preloaded on their devices, preferring instead to download and install them manually? Ditto for operating systems. What is the point of an operating system if it doesn’t come with certain functionalities, such as a web browser? What else should we unbundle—the keyboard on iOS? The flashlight? Do consumers really want to choose from dozens of app stores when turning on their new phone for the first time? Do they really want to have their devices cluttered with pointless split-screens? Do users really want to find all their contacts (and be found by all their contacts) across all messaging services? (I switched to Viber because I emphatically didn’t.) Do they really want to have their privacy and security compromised because of interoperability requirements?

Then there is the question of regulatory fallibility. As Alden Abbott has written on the DMA and other ex ante regulatory proposals aimed at “reining in” tech companies:

Sorely missing from these regulatory proposals is any sense of the fallibility of regulation. Indeed, proponents of new regulatory proposals seem to implicitly assume that government regulation of platforms will enhance welfare, ignoring real-life regulatory costs and regulatory failures (see here, for example). 

This brings us back to the second point: without evidence that antitrust law is “not up to the task,” far-reaching and untested regulatory initiatives with potentially high error costs are put forth as superior to long-established, consumer-based antitrust enforcement. Yes, antitrust may have downsides (e.g., relative indeterminacy and slowness), but these pale in comparison to the DMA’s (e.g., large error costs resulting from high information requirements, rent-seeking, agency capture).

Conclusion

The DMA is an ambitious piece of regulation purportedly aimed at ensuring “fair and open digital markets.” This implies that markets are not currently fair and open, or that they risk becoming unfair and closed absent far-reaching regulatory intervention at the EU level. It also implies that DMA-style ex ante regulation is necessary to address these problems, and that its costs won’t outweigh its benefits. However, it is unclear to what extent such assumptions are borne out by the reality of markets. Are digital markets really closed? Are they really unfair? If so, is it really certain that regulation is necessary? Has antitrust truly proven insufficient? These are heroic assumptions that have never seriously been put to the test.

Considering such brittle empirical foundations, the DMA was always going to be a contentious piece of legislation. However, there was always the hope that EU legislators would show restraint in the face of little empirical evidence and high error costs. Today, these hopes have been dashed. With the adoption of the DMA, the Commission, Council, and the Parliament have arguably taken a bad piece of legislation and made it worse. The interoperability requirements in messaging services, which are bound to be a bane for user privacy and security, are a case in point.

After years of trying to anticipate the whims of EU legislators, we finally know where we’re going, but it’s still not entirely clear why we’re going there.

As the European Union’s Digital Markets Act (DMA) has entered the final stage of its approval process, one matter the inter-institutional negotiations appear likely to leave unresolved is how the DMA’s relationship with competition law affects the very rationale and legal basis for the intervention.

The DMA is explicitly grounded on the questionable assumption that competition law alone is insufficient to rein in digital gatekeepers. Accordingly, EU lawmakers have declared the proposal to be a necessary regulatory intervention that will complement antitrust rules by introducing a set of ex ante obligations.

To support this line of reasoning, the DMA’s drafters insist that it protects a different legal interest from antitrust. Indeed, the intervention is ostensibly grounded in Article 114 of the Treaty on the Functioning of the European Union (TFEU), rather than Article 103—the article that spells out the implementation of competition law. Pursuant to Article 114, the DMA opts for centralized enforcement at the EU level to ensure harmonized implementation of the new rules.

It has nonetheless been clear from the very beginning that the DMA lacks a distinct purpose. Indeed, the interests it nominally protects (the promotion of fairness and contestability) do not differ from the substance and scope of competition law. The European Parliament has even suggested that the law’s aims should also include fostering innovation and increasing consumer welfare, which also are within the purview of competition law. Moreover, the DMA’s obligations focus on practices that have already been the subject of past and ongoing antitrust investigations.

Where the DMA differs in substance from competition law is simply that it would free enforcers from the burden of standard antitrust analysis. The law is essentially a convenient shortcut that would dispense with the need to define relevant markets, prove dominance, and measure market effects (see here). It essentially dismisses economic analysis and the efficiency-oriented consumer welfare test in order to lower the legal standards and evidentiary burdens needed to bring an investigation.

Acknowledging the continuum between competition law and the DMA, the European Competition Network and some member states (self-appointed as “friends of an effective DMA”) have proposed empowering national competition authorities (NCAs) to enforce DMA obligations.

Against this background, my new ICLE working paper pursues a twofold goal. First, it aims to show how, because of its ambiguous relationship with competition law, the DMA falls short of its goal of preventing regulatory fragmentation. Second, despite my significant doubts about the DMA’s content and rationale, I argue that fully centralized enforcement at the EU level should be preserved and that frictions with competition law would be better confined by limiting the law’s application to a few large platforms that are demonstrably able to orchestrate an ecosystem.

Welcome to the (Regulatory) Jungle

The DMA will not replace competition rules. It will instead be implemented alongside them, creating several overlapping layers of regulation. Indeed, my paper broadly illustrates how the very same practices that are targeted by the DMA may also be investigated by NCAs under European and national-level competition laws, under national competition laws specific to digital markets, and under national rules on economic dependence.

While the DMA nominally prohibits EU member states from imposing additional obligations on gatekeepers, member states remain free to adapt their competition laws to digital markets in accordance with the leeway granted by Article 3(3) of the Modernization Regulation. Moreover, NCAs may be eager to exploit national rules on economic dependence to tackle perceived imbalances of bargaining power between online platforms and their business counterparties.

The risk of overlap with competition law is also fostered by the DMA’s designation process, which may further widen the law’s scope in the future in terms of what sorts of digital services and firms may fall under the law’s rubric. As more and more industries explore platform business models, the DMA would—without some further constraints on its scope—be expected to cover a growing number of firms, including those well outside Big Tech or even native tech companies.

As a result, the European regulatory landscape could become even more fragmented in the post-DMA world. The parallel application of the DMA and antitrust rules poses the risks of double jeopardy (see here) and of conflicting decisions.

A Fully Centralized and Ecosystem-Based Regulatory Regime

To counter the risk that digital-market activity will be subject to regulatory double jeopardy and conflicting decisions across EU jurisdictions, DMA enforcement should not only be fully centralized at the EU level, but that centralization should be strengthened. This could be accomplished by empowering the Commission with veto rights, as was requested by the European Parliament.

This veto power should certainly extend to national measures targeting gatekeepers that run counter to the DMA or to decisions adopted by the Commission under the DMA. But it should also include prohibiting national authorities from carrying out investigations on their own initiative without prior authorization by the Commission.

Moreover, it will likely be necessary to significantly redefine the DMA’s scope. Notably, EU leaders could mitigate the risk of fragmentation from the DMA’s frictions with competition law by circumscribing the law to ecosystem-related issues. This would effectively limit its application to a few large platforms that are demonstrably able to orchestrate an ecosystem. It also would reinstate the DMA’s original justification, which was to address the emergence of a few large platforms that are able to act as gatekeepers and enjoy an entrenched position as a result of conglomerate ecosystems.

Changes to the designation process should also be accompanied by confining the list of ex ante obligations the law imposes. These should reflect relevant differences in platforms’ business models and be tailored to the specific firm under scrutiny, rather than implementing a one-size-fits-all approach.

There are compelling arguments against the policy choice to regulate platforms and their ecosystems like utilities. The suggested adaptations would at least acknowledge the regulatory nature of the DMA, removing the suspicion that it is just an antitrust intervention dressed in regulatory clothing.

During the exceptional rise in stock-market valuations from March 2020 to January 2022, both equity investors and antitrust regulators implicitly agreed that so-called “Big Tech” firms enjoyed unbeatable competitive advantages as gatekeepers with largely unmitigated power over the digital ecosystem.

Investors bid up the value of tech stocks to exceptional levels, anticipating no competitive threat to incumbent platforms. Antitrust enforcers and some legislators have exhibited belief in the same underlying assumption. In their case, it has spurred advocacy of dramatic remedies—including breaking up the Big Tech platforms—as necessary interventions to restore competition. 

Other voices in the antitrust community have been more circumspect. A key reason is the theory of contestable markets, developed in the 1980s by the late William Baumol and other economists, which holds that even extremely large market shares are at best a potential indicator of market power. To illustrate, consider the extreme case of a market occupied by a single firm. Intuitively, the firm would appear to have unqualified pricing power. Not so fast, say contestable market theorists. Suppose entry costs into the market are low and consumers can easily move to other providers. This means that the apparent monopolist will act as if the market is populated by other competitors. The takeaway: market share alone cannot demonstrate market power without evidence of sufficiently strong barriers to market entry.
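To make the intuition concrete, contestability theory’s core result can be stated in one line. This is my own gloss on Baumol’s argument, not language drawn from any of the sources discussed here: with free entry and zero sunk costs, any price above average cost invites a “hit-and-run” entrant to undercut the incumbent, so even a sole incumbent is driven toward the zero-profit price.

\[
\pi = \left(p - AC(q)\right)q \le 0 \quad \Rightarrow \quad p^* = AC(q^*)
\]

Market power thus turns on whether entry barriers would let an incumbent sustain \(p > AC\), not on market share itself.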

While regulators and some legislators have overlooked this inconvenient principle, it appears the market has not. To illustrate, look no further than the Feb. 3 $230 billion crash in the market value of Meta Platforms—parent company of Facebook, Instagram, and WhatsApp, among other services.

In its antitrust suit against Meta, the Federal Trade Commission (FTC) has argued that Meta’s Facebook service enjoys a social-networking monopoly, a contention that the judge in the case initially rejected in June 2021 as so lacking in factual support that the suit was provisionally dismissed. The judge’s ruling (which he withdrew last month, allowing the suit to go forward after the FTC submitted a revised complaint) has been portrayed as evidence for the view that existing antitrust law sets overly demanding evidentiary standards that unfairly shelter corporate defendants. 

Yet, the record-setting single-day loss in Meta’s value suggests the evidentiary standard is set just about right and the judge’s skepticism was fully warranted. Consider one of the principal reasons behind Meta’s plunge in value: its service had suffered substantial losses of users to TikTok, a formidable rival in a social-networking market in which the FTC claims that Facebook faces no serious competition. The market begs to differ. In light of the obvious competitive threat posed by TikTok and other services, investors reassessed Facebook’s staying power, which was then reflected in its owner Meta’s downgraded stock price.

Just as the investment bubble that had supported the stock market’s case for Meta has popped, so too must the regulatory bubble that had supported the FTC’s antitrust case against it. Investors’ reevaluation rebuts the FTC’s strained market definition that had implausibly excluded TikTok as a competitor.

Even more fundamentally, the market’s assessment shows that Facebook’s users face nominal switching costs—in which case, its leadership position is contestable and the Facebook “monopoly” is not much of a monopoly. While this conclusion might seem surprising, Facebook’s vulnerability is hardly exceptional: Nokia, Blackberry, AOL, Yahoo, Netscape, and PalmPilot illustrate how often seemingly unbeatable tech leaders have been toppled with remarkable speed.

The unraveling of the FTC’s case against what would appear to be an obviously dominant platform should be a wake-up call for those policymakers who have embraced populist antitrust’s view that existing evidentiary requirements, which minimize the risk of “false positive” findings of anticompetitive conduct, should be set aside as an inconvenient obstacle to regulatory and judicial intervention. 

None of this should be interpreted to deny that concentration levels in certain digital markets raise significant antitrust concerns that merit close scrutiny. In particular, regulators have overlooked how some leading platforms have devalued intellectual-property rights in a manner that distorts technology and content markets by advantaging firms that operate integrated product and service ecosystems while disadvantaging firms that specialize in supplying the technological and creative inputs on which those ecosystems rely.  

The fundamental point is that potential risks to competition posed by any leading platform’s business practices can be assessed through rigorous fact-based application of the existing toolkit of antitrust analysis. This is critical to evaluate whether a given firm likely occupies a transitory, rather than durable, leadership position. The plunge in Meta’s stock in response to a revealed competitive threat illustrates the perils of discarding that surgical toolkit in favor of a blunt “big is bad” principle.

Contrary to what has become an increasingly common narrative in policy discussions and political commentary, the existing framework of antitrust analysis was not designed by scholars strategically acting to protect “big business.” Rather, this framework was designed and refined by scholars dedicated to rationalizing, through the rigorous application of economic principles, an incoherent body of case law that had often harmed consumers by shielding incumbents against threats posed by more efficient rivals. The legal shortcuts being pursued by antitrust populists to detour around appropriately demanding evidentiary requirements are writing a “back to the future” script that threatens to return antitrust law to that unfortunate predicament.

The Senate Judiciary Committee is set to debate S. 2992, the American Innovation and Choice Online Act (or AICOA) during a markup session Thursday. If passed into law, the bill would force online platforms to treat rivals’ services as they would their own, while ensuring their platforms interoperate seamlessly.

The bill marks the culmination of misguided efforts to bring Big Tech to heel, regardless of the negative costs imposed upon consumers in the process. ICLE scholars have written about these developments in detail since the bill was introduced in October.

Below are 10 significant misconceptions that underpin the legislation.

1. There Is No Evidence that Self-Preferencing Is Generally Harmful

Self-preferencing is a normal part of how platforms operate, both to improve the value of their core products and to earn returns so that they have reason to continue investing in their development.

Platforms’ incentives are to maximize the value of their entire product ecosystem, which includes both the core platform and the services attached to it. Platforms that preference their own products frequently end up increasing the total market’s value by growing the share of users of a particular product. Those that preference inferior products end up hurting their attractiveness to users of their “core” product, exposing themselves to competition from rivals.

As Geoff Manne concludes, the notion that it is harmful (notably to innovation) when platforms enter into competition with edge providers is entirely speculative. Indeed, a range of studies show that the opposite is likely true. Platform competition is more complicated than simple theories of vertical discrimination would have it, and there is certainly no basis for a presumption of harm.

Consider a few examples from the empirical literature:

  1. Li and Agarwal (2017) find that Facebook’s integration of Instagram led to a significant increase in user demand both for Instagram itself and for the entire category of photography apps. Instagram’s integration with Facebook increased consumer awareness of photography apps, which benefited independent developers, as well as Facebook.
  2. Foerderer, et al. (2018) find that Google’s 2015 entry into the market for photography apps on Android created additional user attention and demand for such apps generally.
  3. Cennamo, et al. (2018) find that video games offered by console firms often become blockbusters and expand the consoles’ installed base. As a result, these games increase the potential for all independent game developers to profit from their games, even in the face of competition from first-party games.
  4. Finally, while Zhu and Liu (2018) is often held up as demonstrating harm from Amazon’s competition with third-party sellers on its platform, its findings are actually far from clear-cut. As co-author Feng Zhu noted in the Journal of Economics & Management Strategy: “[I]f Amazon’s entries attract more consumers, the expanded customer base could incentivize more third-party sellers to join the platform. As a result, the long-term effects for consumers of Amazon’s entry are not clear.”

2. Interoperability Is Not Costless

There are many things that could be interoperable, but aren’t. The reason not everything is interoperable is that interoperability comes with costs, as well as benefits. It may be worth letting different earbuds have different designs because, while it means we sacrifice easy interoperability, we gain the ability for better designs to be brought to market and for consumers to have choice among different kinds.

As Sam Bowman has observed, there are often costs that prevent interoperability from being worth the tradeoff, such as that:

  1. It might be too costly to implement and/or maintain.
  2. It might prescribe a certain product design and prevent experimentation and innovation.
  3. It might add too much complexity and/or confusion for users, who may prefer not to have certain choices.
  4. It might increase the risk of something not working, or of security breaches.
  5. It might prevent certain pricing models that increase output.
  6. It might compromise some element of the product or service that benefits specifically from not being interoperable.

In a market that is functioning reasonably well, we should be able to assume that competition and consumer choice will discover the desirable degree of interoperability among different products. If there are benefits to making your product interoperable that outweigh the costs of doing so, that should give you an advantage over competitors and allow you to compete them away. If the costs outweigh the benefits, the opposite will happen: consumers will choose products that are not interoperable.

In short, we cannot infer from the mere absence of interoperability that something is wrong, since we frequently observe that the costs of interoperability outweigh the benefits.

3. Consumers Often Prefer Closed Ecosystems

Digital markets could have taken a vast number of shapes. So why have they gravitated toward the very characteristics that authorities condemn? For instance, if market tipping and consumer lock-in are so problematic, why is it that new corners of the digital economy continue to emerge via closed platforms, as opposed to collaborative ones?

Indeed, if recent commentary is to be believed, it is the latter that should succeed, because they purportedly produce greater gains from trade. And if consumers and platforms cannot realize these gains by themselves, then we should see intermediaries step into that breach. But this does not seem to be happening in the digital economy.

The naïve answer is to say that the absence of “open” systems is precisely the problem. What’s harder is to try to actually understand why. As I have written, there are many reasons that consumers might prefer “closed” systems, even when they have to pay a premium for them.

Take the example of app stores. Maintaining some control over the apps that can access the store notably enables platforms to easily weed out bad players. Similarly, controlling the hardware resources that each app can use may greatly improve device performance. In other words, centralized platforms can eliminate negative externalities that “bad” apps impose on rival apps and on consumers. This is especially true when consumers struggle to attribute dips in performance to an individual app, rather than the overall platform.

It is also conceivable that consumers prefer to make many of their decisions at the inter-platform level, rather than within each platform. In simple terms, users arguably make their most important decision when they choose between an Apple or Android smartphone (or a Mac and a PC, etc.). In doing so, they can select their preferred app suite with one simple decision.

They might thus purchase an iPhone because they like the secure App Store, or an Android smartphone because they like the Chrome Browser and Google Search. Forcing too many “within-platform” choices upon users may undermine a product’s attractiveness. Indeed, it is difficult to create a high-quality reputation if each user’s experience is fundamentally different. In short, contrary to what antitrust authorities seem to believe, closed platforms might be giving most users exactly what they desire.

Too often, it is simply assumed that consumers benefit from more openness, and that shared/open platforms are the natural order of things. What some refer to as “market failures” may in fact be features that explain the rapid emergence of the digital economy. Ronald Coase said it best when he quipped that economists always find a monopoly explanation for things that they simply fail to understand.

4. Data Portability Can Undermine Security and Privacy

As explained above, platforms that are more tightly controlled can be regulated by the platform owner to avoid some of the risks present in more open platforms. Apple’s App Store, for example, is a relatively closed and curated platform, which gives users assurance that apps will meet a certain standard of security and trustworthiness.

Along similar lines, there are privacy issues that arise from data portability. Even a relatively simple requirement to make photos available for download can implicate third-party interests. Making a user’s photos more broadly available may tread upon the privacy interests of friends whose faces appear in those photos. Importing those photos to a new service potentially subjects those individuals to increased and un-bargained-for security risks.

As Sam Bowman and Geoff Manne observe, this is exactly what happened with Facebook and its Social Graph API v1.0, ultimately culminating in the Cambridge Analytica scandal. Because v1.0 of Facebook’s Social Graph API permitted developers to access information about a user’s friends without consent, it enabled third-party access to data about exponentially more users. It appears that some 270,000 users granted data access to Cambridge Analytica, from which the company was able to obtain information on 50 million Facebook users.
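A rough back-of-the-envelope calculation, using only the two figures just cited (the ~185 average is implied by those figures, not reported in the source), shows how friend-level permissions amplify exposure:

\[
\frac{50{,}000{,}000 \ \text{affected users}}{270{,}000 \ \text{consenting users}} \approx 185 \ \text{friends' profiles exposed per consent}
\]

Each individual consent decision thus unlocked data on roughly two hundred people who never consented at all.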

In short, there is often no simple solution to implement interoperability and data portability. Any such program—whether legally mandated or voluntarily adopted—will need to grapple with these and other tradeoffs.

5. Network Effects Are Rarely Insurmountable

Several scholars in recent years have called for more muscular antitrust intervention in networked industries on grounds that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in and raise entry barriers for potential rivals (see here, here, and here). But there are countless counterexamples where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.

Zoom is one of the most salient instances. As I wrote in April 2019 (a year before the COVID-19 pandemic):

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.

Geoff Manne and Alec Stapp have put forward a multitude of other examples, including: the demise of Yahoo; the disruption of early instant-messaging applications and websites; and MySpace’s rapid decline. In all of these cases, outcomes did not match the predictions of theoretical models.

More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and powerful algorithm are the most likely explanations for its success.

While these developments certainly do not disprove network-effects theory, they eviscerate the belief, common in antitrust circles, that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. The question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet, this question is systematically omitted from most policy discussions.

6. Profits Facilitate New and Exciting Platforms

As I wrote in August 2020, the relatively closed model employed by several successful platforms (notably Apple’s App Store, Google’s Play Store, and the Amazon Retail Platform) allows previously unknown developers/retailers to rapidly expand because (i) users do not have to fear that their apps contain some form of malware and (ii) the platforms greatly reduce payment frictions, most notably security-related ones.

While these are, indeed, tremendous benefits, another important upside seems to have gone relatively unnoticed. The “closed” business model also gives firms significant incentives to develop new distribution mediums (smart TVs spring to mind) and to improve existing ones. In turn, this greatly expands the audience that software developers can reach. In short, developers get a smaller share of a much larger pie.

The economics of two-sided markets are enlightening here. For example, Apple and Google’s app stores are what Armstrong and Wright (here and here) refer to as “competitive bottlenecks.” That is, they compete aggressively (among themselves, and with other gaming platforms) to attract exclusive users. They can then charge developers a premium to access those users.

This dynamic gives firms significant incentive to continue to attract and retain new users. For instance, if Steve Jobs is to be believed, giving consumers better access to media such as eBooks, video, and games was one of the driving forces behind the launch of the iPad.

This model of innovation would be seriously undermined if developers and consumers could easily bypass platforms, as would likely be the case under the American Innovation and Choice Online Act.

7. Large Market Share Does Not Mean Anticompetitive Outcomes

Scholars routinely cite the putatively strong concentration of digital markets to argue that Big Tech firms do not face strong competition. But this is a non sequitur. Indeed, as economists like Joseph Bertrand and William Baumol have shown, what matters is not whether markets are concentrated, but whether they are contestable. If a superior rival could rapidly gain user traction, that alone will discipline incumbents’ behavior.

Markets where incumbents do not face significant entry from competitors are just as consistent with vigorous competition as they are with barriers to entry. Rivals could decline to enter either because incumbents have aggressively improved their product offerings or because they are shielded by barriers to entry (as critics suppose). The former is consistent with competition, the latter with monopoly slack.

Similarly, it would be wrong to presume, as many do, that concentration in online markets is necessarily driven by network effects and other scale-related economies. As ICLE scholars have argued elsewhere (here, here and here), these forces are not nearly as decisive as critics assume (and it is debatable whether they constitute barriers to entry).

Finally, and perhaps most importantly, many factors could explain the relatively concentrated market structures that we see in digital industries. The absence of switching costs and of capacity constraints are two such examples. These explanations, overlooked by many observers, suggest digital markets are more contestable than is commonly perceived.

Unfortunately, critics’ failure to meaningfully grapple with these issues serves to shape the “conventional wisdom” in tech-policy debates.

8. Vertical Integration Generally Benefits Consumers

Vertical behavior of digital firms—whether through mergers or through contract and unilateral action—frequently arouses the ire of critics of the current antitrust regime. Many such critics point to a few recent studies that cast doubt on the ubiquity of benefits from vertical integration. But the findings of these few studies are regularly overstated and, even if taken at face value, represent just a minuscule fraction of the collected evidence, which overwhelmingly supports vertical integration.

There is strong and longstanding empirical evidence that vertical integration is competitively benign. This includes widely acclaimed work by economists Francine Lafontaine (former director of the Federal Trade Commission’s Bureau of Economics under President Barack Obama) and Margaret Slade, whose meta-analysis led them to conclude:

[U]nder most circumstances, profit-maximizing vertical integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view. Although there are isolated studies that contradict this claim, the vast majority support it. Moreover, even in industries that are highly concentrated so that horizontal considerations assume substantial importance, the net effect of vertical integration appears to be positive in many instances. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked.

In short, there is a substantial body of both empirical and theoretical research showing that vertical integration (and the potential vertical discrimination and exclusion to which it might give rise) is generally beneficial to consumers. While it is possible that vertical mergers or discrimination could sometimes cause harm, the onus is on the critics to demonstrate empirically where this occurs. No legitimate interpretation of the available literature would offer a basis for imposing a presumption against such behavior.

9. There Is No Such Thing as Data Network Effects

Although data does not have the self-reinforcing characteristics of network effects, there is a sense that acquiring a certain amount of data and expertise is necessary to compete in data-heavy industries. It is (or should be) equally apparent, however, that this “learning by doing” advantage rapidly reaches a point of diminishing returns.
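As a stylized illustration of that point (my own sketch, not a model drawn from the literature discussed below): if the value a firm derives from \(n\) units of data grows roughly logarithmically, the marginal value of the next unit shrinks toward zero:

\[
V(n) = \alpha \log(1 + n) \quad \Rightarrow \quad V'(n) = \frac{\alpha}{1 + n} \to 0 \ \text{as} \ n \to \infty
\]

Early data is enormously valuable for learning by doing; the millionth-and-first observation adds very little.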

This is supported by significant empirical evidence. As was shown by the survey of the empirical literature that Geoff Manne and I performed (published in the George Mason Law Review), data generally entails diminishing marginal returns:

Critics who argue that firms such as Amazon, Google, and Facebook are successful because of their superior access to data might, in fact, have the causality in reverse. Arguably, it is because these firms have come up with successful industry-defining paradigms that they have amassed so much data, and not the other way around. Indeed, Facebook managed to build a highly successful platform despite a large data disadvantage when compared to rivals like MySpace.

Companies need to innovate to attract consumer data or else consumers will switch to competitors, including both new entrants and established incumbents. As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results. The continued explosion of new products, services, and apps is evidence that data is not a bottleneck to competition, but a spur to drive it.

10. Antitrust Enforcement Has Not Been Lax

The popular narrative has it that lax antitrust enforcement has led to record levels of concentration, strangling the economy, harming workers, and expanding dominant firms’ profit margins at the expense of consumers. But both beliefs—lax enforcement and increased anticompetitive concentration—wither under more than cursory scrutiny.

As Geoff Manne observed in his April 2020 testimony to the House Judiciary Committee:

The number of Sherman Act cases brought by the federal antitrust agencies, meanwhile, has been relatively stable in recent years, but several recent blockbuster cases have been brought by the agencies and private litigants, and there has been no shortage of federal and state investigations. The vast majority of Section 2 cases dismissed on the basis of the plaintiff’s failure to show anticompetitive effect were brought by private plaintiffs pursuing treble damages; given the incentives to bring weak cases, it cannot be inferred from such outcomes that antitrust law is ineffective. But, in any case, it is highly misleading to count the number of antitrust cases and, using that number alone, to make conclusions about how effective antitrust law is. Firms act in the shadow of the law, and deploy significant legal resources to make sure they avoid activity that would lead to enforcement actions. Thus, any given number of cases brought could be just as consistent with a well-functioning enforcement regime as with an ill-functioning one.

The upshot is that naïvely counting antitrust cases (or the purported lack thereof), with little regard for the behavior that is deterred or the merits of the cases that are dismissed, does not tell us whether antitrust enforcement levels are optimal.


Recent antitrust forays on both sides of the Atlantic have unfortunate echoes of the oldie-but-baddie “efficiencies offense” that once plagued American and European merger analysis (and, more broadly, reflected a “big is bad” theory of antitrust). After a very short overview of the history of merger efficiencies analysis under American and European competition law, we briefly examine two current enforcement matters “on both sides of the pond” that impliedly give rise to such a concern. Those cases may regrettably foreshadow a move by enforcers to downplay the importance of efficiencies, if not openly reject them.

Background: The Grudging Acceptance of Merger Efficiencies

Not long ago, economically literate antitrust teachers in the United States enjoyed poking fun at such benighted 1960s Supreme Court decisions as Procter & Gamble (following in the wake of Brown Shoe and Philadelphia National Bank). Those holdings—which not only rejected efficiencies justifications for mergers, but indeed “treated efficiencies more as an offense”—seemed a thing of the past, put to rest by the rise of an economic approach to antitrust. Several early European Commission merger-control decisions also arguably embraced an “efficiencies offense.”

Starting in the 1980s, the promulgation of increasingly economically sophisticated merger guidelines in the United States led to the acceptance of efficiencies (albeit less than perfectly) as an important aspect of integrated merger analysis. Several practitioners have claimed, nevertheless, that “efficiencies are seldom credited and almost never influence the outcome of mergers that are otherwise deemed anticompetitive.” Commissioner Christine Wilson has argued that the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) still have work to do in “establish[ing] clear and reasonable expectations for what types of efficiency analysis will and will not pass muster.”

In its first few years of merger review, which was authorized in 1989, the European Commission was hostile to merger-efficiency arguments. In 2004, however, the EC promulgated horizontal merger guidelines that allow for the consideration of efficiencies, but only if three cumulative conditions (consumer benefit, merger specificity, and verifiability) are satisfied. A leading European competition practitioner has characterized several key European Commission merger decisions in the last decade as giving rather short shrift to efficiencies. In light of that observation, the practitioner has advocated that “the efficiency offence theory should, once again, be repudiated by the Commission, in order to avoid deterring notifying parties from bringing forward perfectly valid efficiency claims.”

In short, although the actual weight enforcers accord to efficiency claims is a matter of debate, efficiency justifications are cognizable, subject to constraints, as a matter of U.S. and European Union merger-enforcement policy. Whether that will remain the case is, unfortunately, uncertain, given DOJ and FTC plans to revise merger guidelines, as well as EU talk of convergence with U.S. competition law.

Two Enforcement Matters with ‘Efficiencies Offense’ Overtones

Two Facebook-related matters currently before competition enforcers—one in the United States and one in the United Kingdom—have implications for the possible revival of an antitrust “efficiencies offense” as a “respectable” element of antitrust policy. (I use the term Facebook to reference both the platform company and its corporate parent, Meta.)

FTC v. Facebook

The FTC’s 2020 federal district court monopolization complaint against Facebook, still at the motion-to-dismiss stage for the amended complaint (see here for an overview of the initial complaint and the judge’s dismissal of it), rests substantially on claims that Facebook’s acquisitions of Instagram and WhatsApp harmed competition. As Facebook points out in its recent reply brief supporting its motion to dismiss the FTC’s amended complaint, the FTC appears to be treating merger-related efficiencies as the vice in critiquing those acquisitions. Specifically:

[The amended complaint] depends on the allegation that Facebook’s expansion of both Instagram and WhatsApp created a “protective ‘moat’” that made it harder for rivals to compete because Facebook operated these services at “scale” and made them attractive to consumers post-acquisition. . . . The FTC does not allege facts that, left on their own, Instagram and WhatsApp would be less expensive (both are free; Facebook made WhatsApp free); or that output would have been greater (their dramatic expansion at “scale” is the linchpin of the FTC’s “moat” theory); or that the products would be better in any specific way.

The FTC’s concerns about a scale-based, merger-related output expansion that benefited consumers and thereby allegedly enhanced Facebook’s market position eerily echo the commission’s concerns in Procter & Gamble that merger-related cost-reducing joint efficiencies in advertising had an anticompetitive “entrenchment” effect. Both positions, in essence, characterize output-increasing efficiencies as harmful to competition: in other words, as “efficiencies offenses.”

UK Competition and Markets Authority (CMA) v. Facebook

The CMA announced Dec. 1 that it had decided to retrospectively block Facebook’s 2020 acquisition of Giphy, which is “a company that provides social media and messaging platforms with animated GIF images that users can embed in posts and messages. . . .  These platforms license the use of Giphy for its users.”

The CMA theorized that Facebook could harm competition by (1) restricting its competitors’ access to Giphy’s digital libraries; and (2) preventing Giphy from developing into a potential competitor to Facebook’s display-advertising business.

As a CapX analysis explains, the CMA’s theory of harm to competition, based on theoretical speculation, is problematic. First, a behavioral remedy short of divestiture, such as requiring Facebook to maintain open access to its GIF libraries, would deal with the threat of restricted access. Indeed, Facebook promised at the time of the acquisition that Giphy would maintain its library and make it widely available. Second, “loss of a single, relatively small, potential competitor out of many cannot be counted as a significant loss for competition, since so many other potential and actual competitors remain.” Third, given the purely theoretical and questionable danger to future competition, the CMA “has blocked this deal on relatively speculative potential competition grounds.”

Apart from the weakness of the CMA’s case for harm to competition, the CMA appears to ignore a substantial potential dynamic integrative efficiency flowing from Facebook’s acquisition of Giphy. As David Teece explains:

Facebook’s acquisition of Giphy maintained Giphy’s assets and furthered its innovation in Facebook’s ecosystem, strengthening that ecosystem in competition with others; and via Giphy’s APIs, strengthening the ecosystems of other service providers as well.

There is no evidence that the CMA seriously took account of this integrative efficiency, which benefits consumers by offering them a richer experience from Facebook and its subsidiary Instagram, and which spurs competing ecosystems to enhance their own offerings to consumers. This is a failure to properly account for an efficiency. Worse, to the extent that the CMA viewed these integrative benefits as somehow anticompetitive because they enhanced Facebook’s competitive position, the improvement of Facebook’s ecosystem could have been deemed a type of “efficiencies offense.”

Are the Facebook Cases Merely Random Straws in the Wind?

At first blush, it might seem that I am reading too much into the apparent slighting of efficiencies in the two current Facebook cases. Nevertheless, recent policy rhetoric suggests that economic-efficiency arguments (whose status at the enforcement agencies was tenuous to begin with) may actually be viewed as “offensive” by the new breed of enforcers.

In her Sept. 22 policy statement on “Vision and Priorities for the FTC,” Chair Lina Khan advocated focusing on the possible competitive harm flowing from actions of “gatekeepers and dominant middlemen,” and from “one-sided [vertical] contract provisions” that are “imposed by dominant firms.” No suggestion can be found in the statement that such vertical relationships often confer substantial benefits on consumers. This hints at a new campaign by the FTC against vertical restraints (as opposed to an emphasis on clearly welfare-inimical conduct) that could discourage a wide range of efficiency-producing contracts.

Chair Khan also sponsored the FTC’s July 2021 rescission of its Section 5 Policy Statement on Unfair Methods of Competition, which had emphasized the primacy of consumer welfare as the guiding principle underlying FTC antitrust enforcement. A willingness to set aside (or place a lower priority on) consumer welfare considerations suggests a readiness to ignore efficiency justifications that benefit consumers.

Even more troubling, a direct attack on the consideration of efficiencies is found in the statement accompanying the FTC’s September 2021 withdrawal of the 2020 Vertical Merger Guidelines:

The statement by the FTC majority . . . notes that the 2020 Vertical Merger Guidelines had improperly contravened the Clayton Act’s language with its approach to efficiencies, which are not recognized by the statute as a defense to an unlawful merger. The majority statement explains that the guidelines adopted a particularly flawed economic theory regarding purported pro-competitive benefits of mergers, despite having no basis of support in the law or market reality.

Also noteworthy is Khan’s seeming interest (found in her writings here, here, and here) in reviving Robinson-Patman Act enforcement. What’s worse, President Joe Biden’s July 2021 Executive Order on Competition explicitly endorses FTC investigation of “retailers’ practices on the conditions of competition in the food industries, including any practices that may violate [the] Robinson-Patman Act” (emphasis added). Those troubling statements from the administration ignore the widespread scholarly disdain for Robinson-Patman, which is almost unanimously viewed as an attack on efficiencies in distribution. For example, in recommending the act’s repeal in 2007, the congressionally established Antitrust Modernization Commission stressed that the act “protects competitors against competition and punishes the very price discounting and innovation in distribution methods that the antitrust laws otherwise encourage.”

Finally, newly confirmed Assistant Attorney General for Antitrust Jonathan Kanter (who is widely known as a Big Tech critic) has expressed his concerns about the consumer welfare standard and the emphasis on economics in antitrust analysis. Such concerns also suggest, at least by implication, that the Antitrust Division under Kanter’s leadership may manifest a heightened skepticism toward efficiencies justifications.

Conclusion

Recent straws in the wind suggest that an anti-efficiencies hay pile is in the works. Although the antitrust agencies have not yet officially rejected the consideration of efficiencies or endorsed an “efficiencies offense,” the signs are troubling. Newly minted agency leaders’ skepticism toward antitrust economics, combined with their de-emphasis of the consumer welfare standard and of efficiencies (at least in the merger context), suggests that even strongly grounded efficiency explanations may be summarily rejected at the agency level. In foreign jurisdictions, where efficiencies are even less well-established and enforcement based on mere theory (as opposed to empiricism) is more widely accepted, the outlook for efficiency stories appears no better.

One powerful factor, however, should continue to constrain the anti-efficiencies movement, at least in the United States: the federal courts. As demonstrated most recently in the 9th U.S. Circuit Court of Appeals’ FTC v. Qualcomm decision, American courts remain committed to insisting on empirical support for theories of harm and on seriously considering business justifications for allegedly suspect contractual provisions. (The role of foreign courts in curbing prosecutorial excesses not grounded in economics, and in weighing efficiencies, depends upon the jurisdiction, but in general such courts are far less of a constraint on enforcers than American tribunals.)

While the DOJ and FTC (and, perhaps to a lesser extent, foreign enforcers) will have to keep the judiciary in mind in deciding whether to bring enforcement actions, the agencies’ denigration of efficiencies still will have an unfortunate demonstration effect on the private sector. Given the cost (both in resources and in reputational capital) associated with antitrust investigations, and the inevitable discounting for the risk to projects caught up in such inquiries, a publicly proclaimed anti-efficiencies enforcement philosophy will do damage. On the margin, it will lead businesses to introduce fewer efficiency-seeking improvements that could be (wrongly) characterized as “strengthening” or “entrenching” market dominance. Such business decisions, in turn, will be welfare-inimical; they will deny consumers the benefit of efficiencies-driven product and service enhancements and slow the rate of business innovation.

As such, it is to be hoped that, upon further reflection, U.S. and foreign competition enforcers will see the light and publicly proclaim that they will fully weigh efficiencies in analyzing business conduct. The “efficiencies offense” was a lousy tune. That “oldie-but-baddie” should not be replayed.

On both sides of the Atlantic, 2021 has seen legislative and regulatory proposals to mandate that various digital services be made interoperable with others. Several bills to do so have been proposed in Congress; the EU’s proposed Digital Markets Act would mandate interoperability in certain contexts for “gatekeeper” platforms; and the UK’s competition regulator will be given powers to require interoperability as part of a suite of “pro-competitive interventions” intended to increase competition in digital markets.

The European Commission plans to require Apple to use USB-C charging ports on iPhones to allow interoperability among different chargers (to save, the Commission estimates, two grams of waste per European per year). Demands for various forms of interoperability have been at the center of at least two major lawsuits: Epic’s case against Apple and a separate suit against Apple by the developer of an app called Coronavirus Reporter. In July, a group of pro-intervention academics published a white paper calling interoperability “the ‘Super Tool’ of Digital Platform Governance.”

What is meant by the term “interoperability” varies widely. It can refer to relatively narrow interventions in which user data from one service is made directly portable to other services, rather than the user having to download and later re-upload it. At the other end of the spectrum, it could mean regulations requiring that virtually any vertical integration be unwound. (Should a Tesla’s engine be “interoperable” with the chassis of a Land Rover?) In between are various proposals for specific applications of interoperability: one company’s product working with another company’s.

Why Isn’t Everything Interoperable?

The world is filled with examples of interoperability that arose through the (often voluntary) adoption of standards. Credit card companies oversee massive interoperable payments networks; screwdrivers are interoperable with screws made by other manufacturers, although different standards exist; many U.S. colleges accept credits earned at other accredited institutions. The containerization revolution in shipping is an example of interoperability leading to enormous efficiency gains, with a government subsidy to encourage the adoption of a single standard.

And interoperability can emerge over time. Microsoft Word used to be maddeningly non-interoperable with other word processors. Once OpenOffice entered the market, Microsoft patched its product to support OpenOffice files; Word documents now work slightly better with products like Google Docs, as well.

But there are also lots of things that could be interoperable but aren’t, like the Tesla motors that can’t easily be removed and added to other vehicles. The charging cases for Apple’s AirPods and Sony’s wireless earbuds could, in principle, be shaped to be interoperable. Medical records could, in principle, be standardized and made interoperable among healthcare providers, and it’s easy to imagine some of the benefits that could come from being able to plug your medical history into apps like MyFitnessPal and Apple Health. Keurig pods could, in principle, be interoperable with Nespresso machines. Your front door keys could, in principle, be made interoperable with my front door lock.

The reason not everything is interoperable is that interoperability comes with costs as well as benefits. It may be worth letting different earbuds have different designs because, while we sacrifice easy interoperability, we gain the ability for better designs to be brought to market and for consumers to choose among different kinds. We may find that, while digital health records are wonderful in theory, the compliance costs of a standardized format would outweigh those benefits.

Manufacturers may choose to sell an expensive device with a relatively cheap upfront price tag, relying on consumer “lock in” for a stream of supplies and updates to finance the “full” price over time, provided the consumer likes it enough to keep using it.

Interoperability can remove a layer of security. I don’t want my bank account to be interoperable with any payments app, because it increases the risk of getting scammed. What I like about my front door lock is precisely that it isn’t interoperable with anyone else’s key. Lots of people complain about popular Twitter accounts being obnoxious, rabble-rousing, and stupid; it’s not difficult to imagine the benefits of a new, similar service that wanted everyone to start from the same level and so did not allow users to carry their old Twitter following with them.

There may thus be particular costs that prevent interoperability from being worth the trade-off, such as that:

  1. It might be too costly to implement and/or maintain.
  2. It might prescribe a certain product design and prevent experimentation and innovation.
  3. It might add too much complexity and/or confusion for users, who may prefer not to have certain choices.
  4. It might increase the risk of something not working, or of security breaches.
  5. It might prevent certain pricing models that increase output.
  6. It might compromise some element of the product or service that benefits specifically from not being interoperable.

In a market that is functioning reasonably well, we should be able to assume that competition and consumer choice will discover the desirable degree of interoperability among different products. If there are benefits to making your product interoperable with others that outweigh the costs of doing so, that should give you an advantage over competitors and allow you to compete them away. If the costs outweigh the benefits, the opposite will happen—consumers will choose products that are not interoperable with each other.

In short, we cannot infer from the absence of interoperability that something is wrong, since we frequently observe that the costs of interoperability outweigh the benefits.

Of course, markets do not always lead to optimal outcomes. In cases where a market is “failing”—e.g., because competition is obstructed, or because there are important externalities that are not accounted for by the market’s prices—certain goods may be under-provided. In the case of interoperability, this can happen if firms struggle to coordinate upon a single standard, or because firms’ incentives to establish a standard are not aligned with the social optimum (i.e., interoperability might be optimal and fail to emerge, or vice versa).

But the analysis cannot stop there: just because a market is not functioning well and does not currently provide some form of interoperability, we cannot assume that, if it were functioning well, it would provide that interoperability.

Interoperability for Digital Platforms

Since we know that many clearly functional markets and products do not provide all forms of interoperability that we could imagine them providing, it is perfectly possible that many badly functioning markets and products would still not provide interoperability, even if they did not suffer from whatever has obstructed competition or effective coordination in that market. In these cases, imposing interoperability would destroy value.

It would therefore be a mistake to assume that more interoperability in digital markets would be better, even if you believe that those digital markets suffer from too little competition. Let’s say, for the sake of argument, that Facebook/Meta has market power that allows it to keep its subsidiary WhatsApp from being interoperable with other competing services. Even then, we still would not know if WhatsApp users would want that interoperability, given the trade-offs.

A look at smaller competitors like Telegram and Signal, which we have no reason to believe have market power, demonstrates that they also are not interoperable with other messaging services. Signal is run by a nonprofit, and thus has little incentive to obstruct users for the sake of market power. Why does it not provide interoperability? I don’t know, but I would speculate that the security risks and technical costs of doing so outweigh the expected benefit to Signal’s users. If that is true, it seems strange to assume away the potential costs of making WhatsApp interoperable, especially if those costs may relate to things like security or product design.

Interoperability and Contact-Tracing Apps

A full consideration of the trade-offs is also necessary to evaluate the lawsuit that Coronavirus Reporter filed against Apple. Coronavirus Reporter was a COVID-19 contact-tracing app that Apple rejected from the App Store in March 2020. Its makers are now suing Apple for, they say, stifling competition in the contact-tracing market. Apple’s defense is that it only allowed COVID-19 apps from “recognised entities such as government organisations, health-focused NGOs, companies deeply credentialed in health issues, and medical or educational institutions.” In effect, by barring the app from the App Store and offering no other way to install it, Apple denied Coronavirus Reporter interoperability with the iPhone. Coronavirus Reporter argues that Apple should be punished for doing so.

No doubt, Apple’s decision did reduce competition among COVID-19 contact tracing apps. But increasing competition among COVID-19 contact-tracing apps via mandatory interoperability might have costs in other parts of the market. It might, for instance, confuse users who would like a very straightforward way to download their country’s official contact-tracing app. Or it might require access to certain data that users might not want to share, preferring to let an intermediary like Apple decide for them. Narrowing choice like this can be valuable, since it means individual users don’t have to research every single possible option every time they buy or use some product. If you don’t believe me, turn off your spam filter for a few days and see how you feel.

In this case, the potential costs of the access that Coronavirus Reporter wants are obvious: while it may have had the best contact-tracing service in the world, sorting it from other, less reliable or less scrupulous apps may have been difficult, and the risk to users may have outweighed the benefits. As Apple and Facebook/Meta constantly point out, the security risks involved in making their services more interoperable are not trivial.

It isn’t competition among COVID-19 apps that is important, per se. As ever, competition is a means to an end, and maximizing it in one context—via, say, mandatory interoperability—cannot be judged without knowing the trade-offs that maximization requires. Even if we thought of Apple as a monopolist over iPhone users—ignoring the fact that Apple’s iPhones obviously are substitutable with Android devices to a significant degree—it wouldn’t follow that the more interoperability, the better.

A ‘Super Tool’ for Digital Market Intervention?

The Coronavirus Reporter example may feel like an “easy” case for opponents of mandatory interoperability. Of course we don’t want anything calling itself a COVID-19 app to have totally open access to people’s iPhones! But what’s vexing about mandatory interoperability is that it’s very hard to sort the sensible applications from the silly ones, and most proposals don’t even try. The leading U.S. House proposal for mandatory interoperability, the ACCESS Act, would require that platforms “maintain a set of transparent, third-party-accessible interfaces (including application programming interfaces) to facilitate and maintain interoperability with a competing business or a potential competing business,” based on APIs designed by the Federal Trade Commission.

The only nods to the costs of this requirement are provisions that further require platforms to set “reasonably necessary” security standards, and a provision allowing the removal of third-party apps that don’t “reasonably secure” user data. No other costs of mandatory interoperability are acknowledged at all.
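
To make the discussion concrete, here is a minimal sketch of what such a “third-party-accessible interface” might look like, written in Python with Flask. Everything in it, from the endpoint paths to the X-Interop-Key header, is a hypothetical of my own devising, since the bill leaves the actual API design to the FTC:

```python
# Hypothetical sketch only: the ACCESS Act does not specify an API, and
# every endpoint, header, and payload field below is invented to make
# the discussion concrete. Python with Flask is assumed.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Stand-in for the bill's "reasonably necessary" security standard:
# a static allow-list of keys issued to vetted competing businesses.
AUTHORIZED_COMPETITORS = {"example-competitor-key"}

MESSAGES = []  # in-memory stand-in for the platform's message store


def check_key():
    # The bill permits cutting off third parties that fail to
    # "reasonably secure" user data; here that shrinks to a key check.
    if request.headers.get("X-Interop-Key", "") not in AUTHORIZED_COMPETITORS:
        abort(401)


@app.route("/interop/v1/messages", methods=["POST"])
def deliver_message():
    # A competing service delivers a message on behalf of its user.
    check_key()
    payload = request.get_json(force=True)
    MESSAGES.append({"from": payload["sender"], "body": payload["body"]})
    return jsonify({"status": "delivered"}), 201


@app.route("/interop/v1/messages", methods=["GET"])
def read_messages():
    # Symmetric read access, so the rival can show replies to its users.
    check_key()
    return jsonify(MESSAGES)


if __name__ == "__main__":
    app.run(port=8080)
```

Even this toy version immediately raises the questions the bill leaves to the FTC: who vets and revokes keys, who fixes the payload schema (thereby freezing part of the product’s design), and who bears liability when a “competing business” mishandles the data it pulls.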

The same goes for the even more substantive proposals for mandatory interoperability. Released in July 2021, “Equitable Interoperability: The ‘Super Tool’ of Digital Platform Governance” is co-authored by some of the most esteemed competition economists in the business. While it details obscure points about matters like how chat groups might work across interoperable chat services, it is virtually silent on any of the costs or trade-offs of its proposals. Indeed, the first “risk” the report identifies is that regulators might be too slow to impose interoperability in certain cases! It reads like interoperability has been asked what its biggest weaknesses are in a job interview.

Where the report does acknowledge trade-offs—for example, interoperability making it harder for a service to monetize its user base, who can simply bypass ads by using a third-party app that blocks them—it just says that the overseeing “technical committee or regulator may wish to create conduct rules” to settle such questions.

Ditto with the objection that mandatory interoperability might limit differentiation among competitors: imposing the old micro-USB standard on Apple, for example, might have stopped us from getting the Lightning port. Again, the authors punt: “We recommend that the regulator or the technical committee consult regularly with market participants and allow the regulated interface to evolve in response to market needs.”

But if we could entrust this degree of product design to regulators, weighing the costs of a feature against its benefits, we wouldn’t need markets or competition at all. And the report just assumes away many other obvious costs: “the working hypothesis we use in this paper is that the governance issues are more of a challenge than the technical issues.” Despite its illustrious panel of co-authors, the report fails to grapple with the most basic counterargument possible: its proposals have costs as well as benefits, and it’s not straightforward to decide which is bigger than which.

Strangely, the report includes a section that “looks ahead” to “Google’s Dominance Over the Internet of Things.” This, the report says, stems from the company’s “market power in device OS’s [that] allows Google to set licensing conditions that position Google to maintain its monopoly and extract rents from these industries in future.” The report claims this inevitability can only be avoided by imposing interoperability requirements.

The authors completely ignore that a smart-home interoperability standard has already been developed, backed by a group of 170 companies that includes Amazon, Apple, and Google, as well as SmartThings, IKEA, and Samsung. It is open source and, in principle, should allow a Google Home speaker to work with, say, an Amazon Ring doorbell. In markets where consumers really do want interoperability, it can emerge without a regulator requiring it, even if some companies have an apparent incentive not to offer it.

If You Build It, They Still Might Not Come

Much of the case for interoperability interventions rests on the presumption that the benefits will be substantial. It’s hard to know how powerful network effects really are in preventing new competitors from entering digital markets, and none of the more substantial reports cited by the “Super Tool” report really try to find out.

In reality, the cost of switching among services or products is never zero. Simply pointing out that particular costs—such as network effect-created switching costs—happen to exist doesn’t tell us much. In practice, many users are happy to multi-home across different services. I use at least eight different messaging apps every day (Signal, WhatsApp, Twitter DMs, Slack, Discord, Instagram DMs, Google Chat, and iMessage/SMS). I don’t find it particularly costly to switch among them, and have been happy to adopt new services that seemed to offer something new. Discord has built a thriving 150-million-user business, despite these switching costs. What if people don’t actually care if their Instagram DMs are interoperable with Slack?

None of this is to argue that interoperability cannot be useful. But it is often overhyped, and it is difficult to do in practice (because of those annoying trade-offs). After nearly five years, Open Banking in the UK—cited by the “Super Tool” report as an example of what it wants for other markets—still isn’t really finished in terms of functionality. It has required an enormous amount of time and investment by all parties involved and has yet to deliver obvious benefits in terms of consumer outcomes, let alone greater competition among the current accounts that have been made interoperable with other services. (My analysis of the lessons of Open Banking for other services is here.) Phone-number portability, which the “Super Tool” report also cites, is another example of how hard even simple interventions can be to get right.

The world is filled with cases where we could imagine some benefits from interoperability but choose not to have them, because the costs are greater still. None of this is to say that interoperability mandates can never work, but their benefits can be oversold, especially when their costs are ignored. Many of mandatory interoperability’s more enthusiastic advocates should remember that such trade-offs exist—even for policies they really, really like.

Others already have noted that the Federal Trade Commission’s (FTC) recently released 6(b) report on the privacy practices of Internet service providers (ISPs) fails to comprehend that widespread adoption of privacy-enabling technology—in particular, Hypertext Transfer Protocol Secure (HTTPS) and DNS over HTTPS (DoH), but also the use of virtual private networks (VPNs)—largely precludes ISPs from seeing what their customers do online.
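
The DoH point is worth making concrete. Below is a minimal sketch of a DNS lookup tunneled over HTTPS, using Cloudflare’s publicly documented JSON resolver endpoint; the helper function is my own illustration, not code from the report or from any ISP:

```python
# Minimal illustration of DNS over HTTPS (DoH), via Cloudflare's public
# JSON API at https://cloudflare-dns.com/dns-query. The lookup travels
# inside an ordinary encrypted HTTPS request, so an ISP carrying this
# traffic sees a TLS connection to the resolver, not the domain queried.
# (Destination IPs and, absent encrypted SNI, TLS server names remain
# visible, so ISP visibility is reduced rather than eliminated.)
import json
import urllib.request


def doh_lookup(domain, record_type="A"):
    url = f"https://cloudflare-dns.com/dns-query?name={domain}&type={record_type}"
    req = urllib.request.Request(url, headers={"accept": "application/dns-json"})
    with urllib.request.urlopen(req) as resp:
        answer = json.load(resp)
    # Each answer record carries the resolved value in its "data" field.
    return [record["data"] for record in answer.get("Answer", [])]


if __name__ == "__main__":
    print(doh_lookup("example.com"))
```

Pair that with HTTPS for the page itself and the ISP’s view of “what their customers do online” shrinks to connection metadata, which is the blind spot in the report’s framing.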

But a more fundamental problem with the report lies in its underlying assumption that targeted advertising is inherently nefarious. Indeed, much of the report highlights not actual violations of the law by the ISPs, but “concerns” that they could use customer data for targeted advertising much like Google and Facebook already do. The final subheading before the report’s conclusion declares: “Many ISPs in Our Study Can Be At Least As Privacy-Intrusive as Large Advertising Platforms.”

The report does not elaborate on why it would be bad for ISPs to enter the targeted-advertising market, which is particularly strange given the spotlight regulators have shone in recent months on the supposed dominance of Google, Facebook, and Amazon in online advertising. As the International Center for Law & Economics (ICLE) has argued in past filings on the issue, there simply is no justification to apply sector-specific regulations to ISPs for the mere possibility that they will use customer data for targeted advertising.

ISPs Could Be Competition for the Digital Advertising Market

It is ironic to witness FTC warnings about ISPs engaging in targeted advertising even as there are open antitrust cases against Google for its alleged dominance of the digital-advertising market. In fact, news reports suggest the U.S. Justice Department (DOJ) is preparing to join the antitrust suits against Google brought by state attorneys general. An obvious upshot of ISPs engaging in more targeted advertising is that they could serve as a potential source of competition for Google, Facebook, and Amazon.

Despite the fears raised in the 6(b) report of rampant data collection for targeted ads, ISPs are, in fact, just a very small part of the $152.7 billion U.S. digital-advertising market. As the report itself notes: “in 2020, the three largest players, Google, Facebook, and Amazon, received almost two-thirds of all U.S. digital advertising,” while Verizon pulled in just 3.4% of U.S. digital-advertising revenues in 2018.

If the 6(b) report is correct that ISPs have access to troves of consumer data, it raises the question of why they don’t enjoy a bigger share of the digital-advertising market. It could be that ISPs have other reasons not to engage in extensive advertising. Internet service provision is a two-sided market, and ISPs could rely on advertising to subsidize Internet access (over the years, in various markets, some have). That they instead rely primarily on charging users directly for subscriptions may tell us something about prevailing demand on either side of the market.

Regardless of the reasons, the fact that ISPs have little presence in digital advertising suggests that it would be a misplaced focus for regulators to pursue industry-specific privacy regulation to crack down on ISP data collection for targeted advertising.

What’s the Harm in Targeted Advertising, Anyway?

At the heart of the FTC report is the commission’s contention that “advertising-driven surveillance of consumers’ online activity presents serious risks to the privacy of consumer data.” In Part V.B of the report, five of the six risks the FTC lists as associated with ISP data collection are related to advertising. But the only argument the report puts forth for why targeted advertising would be inherently pernicious is the assertion that it is contrary to user expectations and preferences.

As noted earlier, in a two-sided market, targeted ads could allow one side of the market to subsidize the other side. In other words, ISPs could engage in targeted advertising in order to reduce the price of access to consumers on the other side of the market. This is, indeed, one of the dominant models throughout the Internet ecosystem, so it wouldn’t be terribly unusual.

Taking away ISPs’ ability to engage in targeted advertising—particularly if it is paired with rumored net neutrality regulations from the Federal Communications Commission (FCC)—would necessarily put upward pricing pressure on the sector’s remaining revenue stream: subscriber fees. With bridging the so-called “digital divide” (i.e., building out broadband to rural and other unserved and underserved markets) a major focus of the recently enacted infrastructure spending package, it would be counterproductive to simultaneously take steps that would make Internet access more expensive and less accessible.

Even if the FTC were right that data collection for targeted advertising poses the risk of consumer harm, the report fails to justify why a regulatory scheme should apply solely to ISPs when they are such a small part of the digital advertising marketplace. Sector-specific regulation only makes sense if the FTC believes that ISPs are uniquely opaque among data collectors with respect to their collection practices.

Conclusion

The sector-specific approach implicitly endorsed by the 6(b) report would limit competition in the digital-advertising market, even as there are already legal and regulatory inquiries into whether that market is sufficiently competitive. The report also fails to make the case that data collection for targeted advertising is inherently bad, or uniquely bad when done by an ISP.

There may or may not be cause for comprehensive federal privacy legislation, depending on whether it would pass cost-benefit analysis, but there is no reason to focus on ISPs alone. The FTC needs to go back to the drawing board.