Archives for Facebook

The €390 million fine that the Irish Data Protection Commission (DPC) levied last week against Meta marks both the latest skirmish in the ongoing regulatory war on the use of data by private firms and a major blow to the ad-driven business model that underlies most online services.

More specifically, the DPC was forced by the European Data Protection Board (EDPB) to find that Meta violated the General Data Protection Regulation (GDPR) when it relied on its contractual relationship with Facebook and Instagram users as the basis to employ user data in personalized advertising. 

Meta still has other legal bases on which it can argue it may rely to make use of user data, but a larger issue is at play: the decision finds both that using user data for personalized advertising is not “necessary” to the contract between a service and its users, and that privacy regulators are in a position to make such an assessment.

More broadly, the case also underscores that there is no consensus within the European Union on the broad interpretation of the GDPR preferred by some national regulators and the EDPB.

The DPC Decision

The core disagreement between the DPC and Meta, on the one hand, and some other EU privacy regulators, on the other, is whether it is lawful for Meta to treat the use of user data for personalized advertising as “necessary for the performance of” the contract between Meta and its users. The Irish DPC accepted Meta’s arguments that the nature of Facebook and Instagram is such that it is necessary to process personal data this way. The EDPB took the opposite approach and used its powers under the GDPR to direct the DPC to issue a decision contrary to the DPC’s own determination. Notably, the DPC announced that it is considering challenging the EDPB’s involvement before the EU Court of Justice as an unlawful overreach of the board’s powers.

In the EDPB’s view, it is possible for Meta to offer Facebook and Instagram without personalized advertising. And to the extent that this is possible, Meta cannot rely on the “necessity for the performance of a contract” basis for data processing under Article 6 of the GDPR. Instead, Meta in most cases should rely on the “consent” basis, involving an explicit “yes/no” choice. In other words, Facebook and Instagram users should be explicitly asked if they consent to their data being used for personalized advertising. If they decline, then under this rationale, they would be free to continue using the service without personalized advertising (but with, e.g., contextual advertising). 

Notably, the decision does not mandate a particular legal basis for processing; it only invalidates “contractual necessity” as a basis for personalized advertising. Indeed, Meta believes it has other avenues for continuing to process user data for personalized advertising without depending on a “consent” basis. Of course, only time will tell whether this reasoning is accepted. Nonetheless, the EDPB’s underlying animus toward the “necessity” of personalized advertising remains concerning.

What Is ‘Necessary’ for a Service?

The EDPB’s position is of a piece with a growing campaign against firms’ use of data more generally. But as in similar complaints against data use, the demonstrated harms here are overstated, while the possibility that benefits might flow from the use of data is assumed to be zero. 

How does the EDPB know that it is not necessary for Meta to rely on personalized advertising? And what does “necessity” mean in this context? According to the EDPB’s own guidelines, a business “should be able to demonstrate how the main subject-matter of the specific contract with the data subject cannot, as a matter of fact, be performed if the specific processing of the personal data in question does not occur.” Therefore, if it is possible to distinguish various “elements of a service that can in fact reasonably be performed independently of one another,” then even if some processing of personal data is necessary for some elements, this cannot be used to bundle those with other elements and create a “take it or leave it” situation for users. The EDPB stressed that:

This assessment may reveal that certain processing activities are not necessary for the individual services requested by the data subject, but rather necessary for the controller’s wider business model.

This stilted view of what counts as a “service” completely fails to acknowledge that “necessary” must mean more than merely technologically possible. Any service offering faces both technical and economic limitations. What is technically possible to offer can be so uneconomic in some forms as to be practically impossible. Surely, there are alternatives to personalized advertising as a means to monetize social media, but determining what those are requires a great deal of careful analysis and experimentation. Moreover, the EDPB’s suggested “contextual advertising” alternative is not obviously superior to the status quo, nor has it been demonstrated to be economically viable at scale.

Thus, even though it does not strictly follow from the guidelines, the decision in the Meta case suggests that, in practice, the EDPB pays little attention to the economic reality of the contractual relationship between service providers and their users, instead adopting an artificial, formalistic approach. It is doubtful whether the EDPB engaged in the kind of robust economic analysis of Facebook and Instagram that would allow it to conclude whether those services are economically viable without the use of personalized advertising.

There is, however, a key institutional point to be made here. Privacy regulators are likely to be ill-prepared to conduct this kind of analysis, which arguably should lead to significant deference to the observed choices of businesses and their customers.

Conclusion

A service’s use of its users’ personal data—whether for personalized advertising or other purposes—can be a problem, but it can also generate benefits. There is no shortcut to determine, in any given situation, whether the costs of a particular business model outweigh its benefits. Critically, the balance of costs and benefits from a business model’s technological and economic components is what truly determines whether any specific component is “necessary.” In the Meta decision, the EDPB got it wrong by refusing to incorporate the full economic and technological components of the company’s business model. 

“Just when I thought I was out, they pull me back in!” says Al Pacino’s character, Michael Corleone, in The Godfather Part III. That’s how Facebook and Google must feel about S. 673, the Journalism Competition and Preservation Act (JCPA).

Gus Hurwitz called the bill dead in September. Then it passed the Senate Judiciary Committee. Now, some reports suggest it could be added to the obviously unrelated National Defense Authorization Act (it should be noted that the JCPA was not included in the version of the NDAA introduced in the U.S. House).

For an overview of the bill and its flaws, see Dirk Auer and Ben Sperry’s tl;dr. The JCPA would force “covered” online platforms like Facebook and Google to pay for journalism accessed through those platforms. When a user posts a news article on Facebook, which then drives traffic to the news source, Facebook would have to pay. I won’t get paid for links to my banger cat videos, no matter how popular they are, since I’m not a qualifying publication.

I’m going to focus on one aspect of the bill: the use of “final offer arbitration” (FOA) to settle disputes between platforms and news outlets. FOA is sometimes called “baseball arbitration” because it is used for contract disputes in Major League Baseball. This form of arbitration has also been implemented in other jurisdictions to govern similar disputes, notably in Australia’s news media bargaining code, developed by the Australian Competition and Consumer Commission (ACCC).

Before getting to the more complicated case, let’s start simple.

Scenario #1: I’m a corn farmer. You run a granary that buys corn. We’re both invested in this industry, so let’s assume we can’t abandon negotiations in the near term and need to find an agreeable price. In a market, people make offers. Prices vary each year. I decide when to sell my corn based on prevailing market prices and my beliefs about when they will change.

Scenario #2: A government agency comes in (without either of us asking for it) and says the price of corn this year is $6 per bushel. In conventional economics, we call that a price regulation. Unlike a market price, where both sides sign off, regulated prices do not enjoy mutual agreement by the parties to the transaction.

Scenario #3: Instead of a price imposed independently by regulation, one of the parties (say, the corn farmer) may seek a higher price of $6.50 per bushel and petition the government. The government agrees and the price is set at $6.50. We would still call that price regulation, but the outcome reflects what at least one of the parties wanted, and some may argue that it helps “the little guy.” (Let’s forget that many modern farms are large operations with bargaining power. In our heads and in this story, the corn farmer is still a struggling mom-and-pop about to lose their house.)

Scenario #4: Instead of listening only to the corn farmer, both the farmer and the granary tell the government their “final offer” and the government picks one of those offers, not somewhere in between. The parties don’t give any reasons—just the offer. This is called “final offer arbitration” (FOA).

As an arbitration mechanism, FOA makes sense, even if it is not always ideal. It avoids some of the issues that can attend “splitting the difference” between the parties. 
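To see the difference concretely, here is a minimal simulation sketch (all prices, offers, and distributions are hypothetical, chosen purely for illustration). The arbitrator must pick whichever final offer is closer to its own noisy estimate of the fair price, so an exaggerated ask loses far more often than a moderate one, while a split-the-difference rule would mechanically reward exaggeration:

```python
import random

def foa_outcome(offer_a, offer_b, arbitrator_estimate):
    """Final-offer arbitration: the arbitrator must pick one of the two
    offers, whichever is closer to its own estimate of the fair price."""
    if abs(offer_a - arbitrator_estimate) <= abs(offer_b - arbitrator_estimate):
        return offer_a
    return offer_b

def split_outcome(offer_a, offer_b):
    """Caricature of conventional arbitration: split the difference."""
    return (offer_a + offer_b) / 2

random.seed(0)
fair, trials = 6.00, 10_000        # hypothetical fair price per bushel
granary_offer = 5.75

for farmer_ask in (6.25, 7.50):    # a moderate ask vs. an exaggerated one
    picks = [
        foa_outcome(farmer_ask, granary_offer, random.gauss(fair, 0.25))
        for _ in range(trials)
    ]
    win_rate = sum(p == farmer_ask for p in picks) / trials
    print(f"ask ${farmer_ask:.2f}: farmer wins {win_rate:.0%} of FOA rounds; "
          f"split-the-difference would pay ${split_outcome(farmer_ask, granary_offer):.2f}")
```

The moderate ask wins roughly half the time; the exaggerated ask almost never does, even though splitting the difference would pay it a premium. That is the sense in which FOA disciplines extreme offers.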

While it is better than other systems, it is still price regulation. In the JCPA’s case, it would not be imposed immediately; the two parties can negotiate on their own (in the shadow of the imposed FOA). And the actual arbitration decision wouldn’t technically be made by the government, but by a third party. Fine. But ultimately, after stripping away the veneer, this is all just an elaborate mechanism built atop the threat of the government choosing the price in the market.

I call that price regulation. Unlike in voluntary markets, at least one of the parties does not agree with the final price. Moreover, neither party explicitly chose the arbitration mechanism.

The JCPA’s FOA system is not precisely like the baseball situation. In baseball, there is choice on the front-end. Players and owners agree to the system. In baseball, there is also choice after negotiations start. Players can still strike; owners can enact a lockout. Under the JCPA, the platforms must carry the content. They cannot walk away.

I’m an economist, not a philosopher. The problem with force is not that it is unpleasant. Instead, the issue is that force distorts the knowledge conveyed through market transactions. That distortion prevents resources from moving to their highest valued use. 

How do we know the apple is more valuable to Armen than it is to Ben? In a market, “we” don’t need to know. No benevolent outsider needs to pick the “right” price for other people. In most free markets, a seller posts a price. Buyers just need to decide whether they value it more than that price. Armen voluntarily pays Ben for the apple and Ben accepts the transaction. That’s how we know the apple is in the right hands.

Often, transactions are about more than just price. Sometimes there may be haggling and bargaining, especially on bigger purchases. Workers negotiate wages, even when the job ad stipulates a specific wage. Home buyers make offers and negotiate.

But this just kicks the information problem up one more level. Negotiating is costly. That is why, in anticipation of costly disputes down the road, the two sides sometimes voluntarily agree to use an arbitration mechanism. MLB players agree to baseball arbitration. That is the two sides revealing that they believe the costs of disputes outweigh the losses from arbitration.

Again, each side conveys its beliefs and values by agreeing to the arbitration mechanism. Each step in the negotiation process allows the parties to convey the relevant information. No outsider needs to know “the right” answer. For a choice to convey information about relative values, it needs to be freely chosen.

At an abstract level, any trade has two parts. First, people agree to the mechanism, which determines who makes what kinds of offers. At the grocery store, the mechanism is “seller picks the price and buyer picks the quantity.” For buying and selling a house, the mechanism is “seller posts price, buyer can offer above or below and request other conditions.” After both parties agree to the terms, the mechanism plays out and both sides make or accept offers within the mechanism. 

We need choice on both aspects for the price to capture each side’s private information. 

For example, suppose someone comes up to you with a gun and says “give me your wallet or your watch. Your choice.” When you “choose” your watch, we don’t actually call that a choice, since you didn’t pick the mechanism. We have no way of knowing whether the watch means more to you or to the guy with the gun. 

When the JCPA forces Facebook to negotiate with a local news website and Facebook offers to pay a penny per visit, it conveys no information about the relative value that the news website is generating for Facebook. Facebook may just be worried that the website will ask for two pennies and the arbitrator will pick the higher price. It is equally plausible that, in a world without transaction costs, the news outlet would pay Facebook, since Facebook sends traffic to it. Is there any chance the arbitrator will pick Facebook’s offer if it asks to be paid? Of course not, so Facebook will never make that offer.

For sure, things are imposed on us all the time. That is the nature of regulation. Energy prices are regulated. I’m not against regulation. But we should defend that use of force on its own terms and be honest that the system is one of price regulation. We gain nothing by a verbal sleight of hand that turns losing your watch into a “choice” and the JCPA’s FOA into a “negotiation” between platforms and news.

In economics, we often ask about market failures. In this case, is there a sufficient market failure in the market for links to justify regulation? Is that failure resolved by this imposition?

The Federal Trade Commission (FTC) wants to review in advance all future acquisitions by Facebook parent Meta Platforms. According to a Sept. 2 Bloomberg report, in connection with its challenge to Meta’s acquisition of fitness-app maker Within Unlimited, the commission “has asked its in-house court to force both Meta and [Meta CEO Mark] Zuckerberg to seek approval from the FTC before engaging in any future deals.”

This latest FTC decision is inherently hyper-regulatory, anti-free market, and contrary to the rule of law. It also is profoundly anti-consumer.

Like other large digital-platform companies, Meta has conferred enormous benefits on consumers (net of payments to platforms) that are not reflected in gross domestic product statistics. In a December 2019 Harvard Business Review article, Erik Brynjolfsson and Avinash Collis reported research finding that Facebook:

…generates a median consumer surplus of about $500 per person annually in the United States, and at least that much for users in Europe. … [I]ncluding the consumer surplus value of just one digital good—Facebook—in GDP would have added an average of 0.11 percentage points a year to U.S. GDP growth from 2004 through 2017.

The acquisition of complementary digital assets—like the popular fitness app produced by Within—enables Meta to continually enhance the quality of its offerings to consumers and thereby expand consumer surplus. It reflects the benefits of economic specialization, as specialized assets are made available to enhance the quality of Meta’s offerings. Requiring Meta to develop complementary assets in-house, when that is less efficient than a targeted acquisition, denies these benefits.

Furthermore, in a recent editorial lambasting the FTC’s challenge to the Meta-Within merger as lacking a principled basis, the Wall Street Journal pointed out that the challenge also removes incentives for venture-capital investments in promising startups, a result at odds with free markets and innovation:

Venture capitalists often fund startups on the hope that they will be bought by larger companies. [FTC Chair Lina] Khan is setting down the marker that the FTC can block acquisitions merely to prevent big companies from getting bigger, even if they don’t reduce competition or harm consumers. This will chill investment and innovation, and it deserves a burial in court.

This is bad enough. But the commission’s proposal to require blanket preapprovals of all future Meta mergers (including tiny acquisitions well under regulatory pre-merger reporting thresholds) greatly compounds the harm from its latest ill-advised merger challenge. Indeed, it poses a blatant challenge to free-market principles and the rule of law, in at least three ways.

  1. It substitutes heavy-handed ex ante regulatory approval for a reliance on competition, with antitrust stepping in only in those limited instances where the hard facts indicate a transaction will be anticompetitive. Indeed, in one key sense, it is worse than traditional economic regulation. Empowering FTC staff to carry out case-by-case reviews of all proposed acquisitions inevitably will generate arbitrary decision-making, perhaps based on a variety of factors unrelated to traditional consumer-welfare-based antitrust. FTC leadership has abandoned sole reliance on consumer welfare as the touchstone of antitrust analysis, paving the way for potentially abusive and arbitrary enforcement decisions. By contrast, statutorily based economic regulation, whatever its flaws, at least imposes specific standards that staff must apply when rendering regulatory determinations.
  2. By abandoning sole reliance on consumer-welfare analysis, FTC reviews of proposed Meta acquisitions may be expected to undermine the major welfare benefits that Meta has previously bestowed upon consumers. Given the untrammeled nature of these reviews, Meta may be expected to be more cautious in proposing transactions that could enhance consumer offerings. What’s more, the general anti-merger bias of current FTC leadership would undoubtedly prompt it to reject some, if not many, procompetitive transactions that would confer new benefits on consumers.
  3. Instituting a system of case-by-case assessment and approval of transactions is antithetical to the normal American reliance on free markets, featuring limited government intervention in market transactions based on specific statutory guidance. The proposed review system for Meta lacks statutory warrant and (as noted above) could promote arbitrary decision-making. As such, it seriously flouts the rule of law and threatens substantial economic harm (sadly consistent with other ill-considered initiatives by FTC Chair Khan, see here and here).

In sum, internet-based industries, and the big digital platforms, have thrived under a system of American technological freedom characterized as “permissionless innovation.” Under this system, the American people—consumers and producers—have been the winners.

The FTC’s efforts to micromanage future business decision-making by Meta, prompted by the challenge to a routine merger, would seriously harm welfare. To the extent that the FTC views such novel interventionism as a bureaucratic template applicable to other disfavored large companies, the American public would be the big-time loser.

The wave of populist antitrust that has been embraced by regulators and legislators in the United States, United Kingdom, European Union, and other jurisdictions rests on the assumption that currently dominant platforms occupy entrenched positions that only government intervention can dislodge. Following this view, Facebook will forever dominate social networking, Amazon will forever dominate cloud computing, Uber and Lyft will forever dominate ridesharing, and Amazon and Netflix will forever dominate streaming. This assumption of platform invincibility is so well-established that some policymakers advocate significant interventions without making any meaningful inquiry into whether a seemingly dominant platform actually exercises market power.

Yet this assumption is not supported by historical patterns in platform markets. It is true that network effects drive platform markets toward “winner-take-most” outcomes. But the winner is often toppled quickly and without much warning. There is no shortage of examples.

In 2007, a columnist in The Guardian observed that “it may already be too late for competitors to dislodge MySpace” and quoted an economist as authority for the proposition that “MySpace is well on the way to becoming … a natural monopoly.” About one year later, Facebook had overtaken the MySpace “monopoly” in the social-networking market. Similarly, it was once thought that Blackberry would forever dominate the mobile-communications device market, eBay would always dominate the online e-commerce market, and AOL would always dominate the internet-service-portal market (a market that no longer even exists). The list of digital dinosaurs could go on.

All those tech leaders were challenged by entrants and descended into irrelevance (or reduced relevance, in eBay’s case). This occurred through the force of competition, not government intervention.

Why This Time is Probably Not Different

Given this long line of market precedents, current legislative and regulatory efforts to “restore” competition through extensive intervention in digital-platform markets require that we assume that “this time is different.” Just as that slogan has been repeatedly rebutted in the financial markets, so too is it likely to be rebutted in platform markets. 

There is already supporting evidence. 

In the cloud market, Amazon’s AWS now faces vigorous competition from Microsoft Azure and Google Cloud. In the streaming market, Amazon and Netflix face stiff competition from Disney+ and Apple TV+, just to name a few well-resourced rivals. In the social-networking market, Facebook now competes head-to-head with TikTok and seems to be losing. The market power once commonly attributed to leading food-delivery platforms such as Grubhub, UberEats, and DoorDash is implausible after persistent losses in most cases, and the continuous entry of new services into a rich variety of local and product-market niches.

Those who have advocated antitrust intervention on a fast-track schedule may remain unconvinced by these inconvenient facts. But the market is not. 

Investors have already recognized Netflix’s vulnerability to competition, as reflected by a 35% fall in its stock price on April 20 and a decline of more than 60% over the past 12 months. Meta, Facebook’s parent, also experienced a reappraisal, falling more than 26% on Feb. 3 and more than 35% in the past 12 months. Uber, the pioneer of the ridesharing market, has declined by almost 50% over the past 12 months, while Lyft, its principal rival, has lost more than 60% of its value. These price freefalls suggest that antitrust populists may be pursuing solutions to a problem that market forces are already starting to address.

The Forgotten Curse of the Incumbent

For some commentators, the sharp downturn in the fortunes of the so-called “Big Tech” firms would not come as a surprise.

It has long been observed by some scholars and courts that a dominant firm “carries the seeds of its own destruction”—a phrase used by then-professor and later-Judge Richard Posner, writing in the University of Chicago Law Review in 1971. The reason: a dominant firm is liable to exhibit high prices, mediocre quality, or lackluster innovation, which then invites entry by more adept challengers. However, this view has been dismissed as outdated in digital-platform markets, where incumbents are purportedly protected by network effects and switching costs that make it difficult for entrants to attract users. Depending on the set of assumptions selected by an economic modeler, each contingency is equally plausible in theory.

The plunging values of leading platforms supply real-world evidence that favors the self-correction hypothesis. It is often overlooked that network effects can work in both directions, resulting in a precipitous fall from market leader to laggard. Once users start abandoning a dominant platform for a new competitor, network effects operating in reverse can cause a “run for the exits” that leaves the leader with little time to recover. Just ask Nokia, the world’s leading (and seemingly unbeatable) smartphone brand until the Apple iPhone came along.

Why Market Self-Correction Outperforms Regulatory Correction

Market self-correction inherently outperforms regulatory correction: it operates far more rapidly and relies on consumer preferences to reallocate market leadership—a result perfectly consistent with antitrust’s mission to preserve “competition on the merits.” In contrast, policymakers can misdiagnose the competitive effects of business practices; are susceptible to the influence of private interests (especially those that are unable to compete on the merits); and often mispredict the market’s future trajectory. For Exhibit A, see the protracted antitrust litigation by the U.S. Justice Department against IBM, which was filed in 1969 and ended with withdrawal of the suit in 1982. Given the launch of the Apple II in 1977, the IBM PC in 1981, and the entry of multiple “PC clones,” the forces of creative destruction swiftly displaced IBM from market leadership in the computing industry.

Regulators and legislators around the world have emphasized the urgency of taking dramatic action to correct claimed market failures in digital environments, casting aside prudential concerns over the consequences if any such failure proves to be illusory or temporary. 

But the costs of regulatory failure can be significant and long-lasting. Markets must operate under unnecessary compliance burdens that are difficult to modify. Regulators’ enforcement resources are diverted, and businesses are barred from adopting practices that would benefit consumers. In particular, proposed breakup remedies advocated by some policymakers would undermine the scale economies that have enabled platforms to push down prices, an important consideration in a time of accelerating inflation.

Conclusion

The high concentration levels and certain business practices in digital-platform markets certainly raise important concerns as a matter of antitrust (as well as privacy, intellectual property, and other bodies of) law. These concerns merit scrutiny and may necessitate appropriately targeted interventions. Yet, any policy steps should be anchored in the factually grounded analysis that has characterized decades of regulatory and judicial action to implement the antitrust laws with appropriate care. Abandoning this nuanced framework for a blunt approach based on reflexive assumptions of market power is likely to undermine, rather than promote, the public interest in competitive markets.

Sens. Amy Klobuchar (D-Minn.) and Chuck Grassley (R-Iowa)—cosponsors of the American Innovation and Choice Online Act (AICOA), which seeks to “rein in” tech companies like Apple, Google, Meta, and Amazon—contend that “everyone acknowledges the problems posed by dominant online platforms.”

In their framing, it is simply an acknowledged fact that U.S. antitrust law has not kept pace with developments in the digital sector, allowing a handful of Big Tech firms to exploit consumers and foreclose competitors from the market. To address the issue, the senators’ bill would bar “covered platforms” from engaging in a raft of conduct, including self-preferencing, tying, and limiting interoperability with competitors’ products.

That’s what makes the open letter to Congress published late last month by the usually staid American Bar Association’s (ABA) Antitrust Law Section so eye-opening. The letter is nothing short of a searing critique of the legislation, which the section finds to be poorly written, vague, and at odds with established antitrust-law principles.

The ABA, of course, has a reputation as an independent, highly professional, and heterogeneous group. The antitrust section’s membership includes not only in-house corporate counsel, but also lawyers from nonprofits, consulting firms, and federal and state agencies, as well as judges and legal academics. Given this context, the comments must be read as a high-level judgment that recent legislative and regulatory efforts to “discipline” tech fall outside the legal mainstream and would come at the cost of established antitrust principles, legal precedent, transparency, sound economic analysis, and ultimately consumer welfare.

The Antitrust Section’s Comments

As the ABA Antitrust Law Section observes:

The Section has long supported the evolution of antitrust law to keep pace with evolving circumstances, economic theory, and empirical evidence. Here, however, the Section is concerned that the Bill, as written, departs in some respects from accepted principles of competition law and in so doing risks causing unpredicted and unintended consequences.

Broadly speaking, the section’s criticisms fall into two interrelated categories. The first relates to deviations from antitrust orthodoxy and the principles that guide enforcement. The second is a critique of the AICOA’s overly broad language and ambiguous terminology.

Departing from established antitrust-law principles

Substantively, the overarching concern expressed by the ABA Antitrust Law Section is that AICOA departs from the traditional role of antitrust law, which is to protect the competitive process, rather than choosing to favor some competitors at the expense of others. Indeed, the section’s open letter observes that, out of the 10 categories of prohibited conduct spelled out in the legislation, only three require a “material harm to competition.”

Take, for instance, the prohibition on “discriminatory” conduct. As it stands, the bill’s language does not require a showing of harm to the competitive process. It instead appears to enshrine a freestanding prohibition of discrimination. The bill also targets tying practices that are already prohibited by U.S. antitrust law, while similarly eschewing the traditionally required showings of market power and harm to the competitive process. The same can be said, mutatis mutandis, for “self-preferencing” and the “unfair” treatment of competitors.

The problem, the section’s letter to Congress argues, is not only that this increases the teleological chasm between AICOA and the overarching goals and principles of antitrust law, but that it can also easily lead to harmful unintended consequences. For instance, as the ABA Antitrust Law Section previously observed in comments to the Australian Competition and Consumer Commission, a prohibition of pricing discrimination can limit the extent of discounting generally. Similarly, self-preferencing conduct on a platform can be welfare-enhancing, while forced interoperability—which is also contemplated by AICOA—can increase prices for consumers and dampen incentives to innovate. Furthermore, some of these blanket prohibitions are arguably at loggerheads with established antitrust doctrine, such as in, e.g., Trinko, which established that even monopolists are generally free to decide with whom they will deal.

Arguably, the reason why the Klobuchar-Grassley bill can so seamlessly exclude or redraw such a central element of antitrust law as competitive harm is because it deliberately chooses to ignore another, preceding one. Namely, the bill omits market power as a requirement for a finding of infringement or for the legislation’s equally crucial designation as a “covered platform.” It instead prescribes size metrics—number of users, market capitalization—to define which platforms are subject to intervention. Such definitions cast an overly wide net that can potentially capture consumer-facing conduct that doesn’t have the potential to harm competition at all.

It is precisely for this reason that existing antitrust laws are tethered to market power—i.e., because it long has been recognized that only companies with market power can harm competition. As John B. Kirkwood of Seattle University School of Law has written:

Market power’s pivotal role is clear… This concept is central to antitrust because it distinguishes firms that can harm competition and consumers from those that cannot.

In response to the above, the ABA Antitrust Law Section (reasonably) urges Congress explicitly to require an effects-based showing of harm to the competitive process as a prerequisite for all 10 of the infringements contemplated in the AICOA. This also means disclaiming generalized prohibitions of “discrimination” and of “unfairness” and replacing blanket prohibitions (such as the one for self-preferencing) with measured case-by-case analysis.

Opaque language for opaque ideas

Another underlying issue is that the Klobuchar-Grassley bill is shot through with indeterminate language and fuzzy concepts that have no clear limiting principles. For instance, in order either to establish liability or to mount a successful defense to an alleged violation, the bill relies heavily on inherently amorphous terms such as “fairness,” “preferencing,” and “materiality,” or the “intrinsic” value of a product. But as the ABA Antitrust Law Section letter rightly observes, these concepts are not defined in the bill, nor by existing antitrust case law. As such, they inject variability and indeterminacy into how the legislation would be administered.

Moreover, it is also unclear how some incommensurable concepts will be weighed against each other. For example, how would concerns about safety and security be weighed against prohibitions on self-preferencing or requirements for interoperability? What is a “core function” and when would the law determine it has been sufficiently “enhanced” or “maintained”—requirements the law sets out to exempt certain otherwise prohibited behavior? The lack of linguistic and conceptual clarity not only erodes legal certainty, but also invites judicial second-guessing of business decisions, something against which the U.S. Supreme Court has long warned.

Finally, the bill’s choice of language and recent amendments to its terminology seem to confirm the dynamic discussed in the previous section. Most notably, the latest version of AICOA replaces earlier language invoking “harm to the competitive process” with “material harm to competition.” As the ABA Antitrust Law Section observes, this “suggests a shift away from protecting the competitive process towards protecting individual competitors.” Indeed, “material harm to competition” deviates from established categories such as “undue restraint of trade” or “substantial lessening of competition,” which have a clear focus on the competitive process. As a result, it is not unreasonable to expect that the new terminology might be interpreted as meaning that the actionable standard is material harm to competitors.

In its letter, the antitrust section urges Congress not only to define more clearly the novel terminology used in the bill, but also to do so in a manner consistent with existing antitrust law. Indeed:

The Section further recommends that these definitions direct attention to analysis consistent with antitrust principles: effects-based inquiries concerned with harm to the competitive process, not merely harm to particular competitors.

Conclusion

The AICOA is a poorly written, misguided, and rushed piece of regulation that contravenes both basic antitrust-law principles and mainstream economic insights in the pursuit of a pre-established populist political goal: punishing the success of tech companies. If left uncorrected by Congress, these mistakes could have potentially far-reaching consequences for innovation in digital markets and for consumer welfare. They could also set antitrust law on a regressive course back toward a policy of picking winners and losers.

The following post was authored by counsel with White & Case LLP, who represented the International Center for Law & Economics (ICLE) in an amicus brief filed on behalf of itself and 12 distinguished law & economics scholars with the U.S. Court of Appeals for the D.C. Circuit in support of affirming U.S. District Court Judge James Boasberg’s dismissal of various States Attorneys General’s antitrust case brought against Facebook (now, Meta Platforms).

Introduction

The States brought an antitrust complaint against Facebook alleging that various conduct violated Section 2 of the Sherman Act. The ICLE brief addresses the States’ allegations that Facebook refused to provide access to an input, a set of application-programming interfaces that developers use in order to access Facebook’s network of social-media users (Facebook’s Platform), in order to prevent those third parties from using that access to export Facebook data to competitors or to compete directly with Facebook.

Judge Boasberg dismissed the States’ case without leave to amend, relying on recent Supreme Court precedent on refusals to deal, including Trinko and Linkline. The Supreme Court strongly disfavors forced sharing, as shown by its decisions recognizing very few exceptions to the ability of firms to deal with whom they choose. Most notably, Aspen Skiing Co. v. Aspen Highlands Skiing Corp. is a 1985 decision recognizing an exception to the general rule that firms may deal with whom they want; that exception was limited, though not expressly overturned, by Trinko in 2004. The States appealed to the D.C. Circuit on several grounds, including by relying on Aspen Skiing and advocating for a broader view of refusals to deal than dictated by current jurisprudence.

ICLE’s brief addresses whether the District Court was correct to dismiss the States’ allegations that Facebook’s Platform policies violated Section 2 of the Sherman Act in light of the voluminous body of precedent and scholarship concerning refusals to deal. ICLE’s brief argues that Judge Boasberg’s opinion is consistent with economic and legal principles allowing firms to choose with whom they deal. Furthermore, the States’ allegations did not make out a claim under Aspen Skiing, which sets forth extremely narrow circumstances that may constitute an improper refusal to deal. Finally, ICLE takes issue with the States’ attempt to create an amorphous legal standard for refusals to deal or otherwise shoehorn their allegations into a “conditional dealing” framework.

Economic Actors Should Be Able to Choose Their Business Partners

ICLE’s basic premise is that firms in a free-market system should be able to choose their business partners. Forcing firms to enter into certain business relationships can stifle innovation, because the firm getting the benefit of the forced dealing then lacks the incentive to create its own inputs. On the other side of the forced dealing, the owner would have reduced incentives to continue to innovate, invest, or create intellectual property. Forced dealing, therefore, has an adverse effect on the fundamental nature of competition. As the Supreme Court stated in Trinko, this compelled sharing creates “tension with the underlying purpose of antitrust law, since it may lessen the incentive for the monopolist, the rival, or both to invest in those economically beneficial facilities.”

Courts Are Ill-Equipped to Regulate the Kind of Forced Sharing Advocated by the States

ICLE also notes the inherent difficulties of a court’s assessing forced access and the substantial risk of error that could create harm to competition. This risk, ICLE notes, is not merely theoretical: such claims would require the court to scrutinize intricate details of a dynamic industry and determine which decisions are lawful and which are not. Take the facts of New York v. Facebook: more than 10 million apps and websites had access to Platform during the relevant period, and the States took issue with only seven instances in which Facebook had allegedly improperly prevented access to Platform. Assessing whether conduct would create efficiency in one circumstance versus another is challenging at best and always risky. As Frank Easterbrook wrote: “Anyone who thinks that judges would be good at detecting the few situations in which cooperation would do more good than harm has not studied the history of antitrust.”

Even assuming a court has rightly identified a potentially anticompetitive refusal to deal, it would then be put to the task of remedying it. But imposing a remedy, and in effect assuming the role of a regulator, is similarly complicated. This is particularly true in dynamic, quickly evolving industries, such as social media. This concern is highlighted by the broad injunction the States seek in this case: to “enjoin[] and restrain [Facebook] from continuing to engage in any anticompetitive conduct and from adopting in the future any practice, plan, program, or device having a similar purpose or effect to the anticompetitive actions set forth above.”  Such a remedy would impose conditions on Facebook’s dealings with competitors for years to come—regardless of how the industry evolves.

Courts Should Not Expand Refusal-to-Deal Analysis Beyond the Narrow Circumstances of Aspen Skiing

In light of the principles above, the Supreme Court, as stated in Trinko, “ha[s] been very cautious in recognizing [refusal-to-deal] exceptions, because of the uncertain virtue of forced sharing and the difficulty of identifying and remedying anticompetitive conduct by a single firm.” Various scholars (e.g., Carlton, Meese, Lopatka, Epstein) have analyzed Aspen Skiing consistently with Trinko as, at most, “at or near the boundary of § 2 liability.”

So is a refusal-to-deal claim ever viable? ICLE argues that refusal-to-deal claims have been rare (rightly so) and, at most, should go forward only under the circumstances delineated in Aspen Skiing. ICLE sets forth the 10th U.S. Circuit Court of Appeals’ framework from Novell, which makes clear that “the monopolist’s conduct must be irrational but for its anticompetitive effect.”

  • First, “there must be a preexisting voluntary and presumably profitable course of dealing between the monopolist and rival.”
  • Second, “the monopolist’s discontinuation of the preexisting course of dealing must suggest a willingness to forsake short-term profits to achieve an anti-competitive end.”
  • Finally, even if these two factors are present, the court recognized that “firms routinely sacrifice short-term profits for lots of legitimate reasons that enhance consumer welfare.”

The States seek to broaden Aspen Skiing in order to sinisterize Facebook’s Platform policies, but the facts do not fit. The States do not plead an about-face with respect to Facebook’s Platform policies; the States do not allege that Facebook’s changes to its policies were irrational (particularly in light of the dynamic industry in which Facebook operates); and the States do not allege that Facebook engaged in less efficient behavior with the goal of hurting rivals. Indeed, Facebook changed its policies to retain users—which is essential to its business model (and therefore, rational).

The States try to evade these requirements by arguing for a looser refusal-to-deal standard (and by trying to shoehorn the conduct as “conditional dealing”)—but as ICLE explains, allowing such a claim to go forward would fly in the face of the economic and policy goals upheld by the current jurisprudence. 

Conclusion

The District Court was correct to dismiss the States’ allegations concerning Facebook’s Platform policies. Allowing a claim against Facebook to progress under the circumstances alleged in the States’ complaint would violate the principle that a firm, even one that is a monopolist, should not be held liable for refusing to deal with a certain business partner. The District Court’s decision is in line with key economic principles concerning refusals to deal and consistent with the Supreme Court’s decision in Aspen Skiing. Aspen Skiing is properly read to severely limit the circumstances giving rise to a refusal-to-deal claim, or else risk adverse effects such as reduced incentives to innovate.

Amici Scholars Signing on to the Brief

(The ICLE brief presents the views of the individual signers listed below. Institutions are listed for identification purposes only.)

Henry Butler
Henry G. Manne Chair in Law and Economics and Executive Director of the Law & Economics Center, Scalia Law School
Daniel Lyons
Professor of Law, Boston College Law School
Richard A. Epstein
Laurence A. Tisch Professor of Law at NYU School of Law, the Peter and Kirsten Bedford Senior Lecturer at the Hoover Institution, and the James Parker Hall Distinguished Service Professor Emeritus
Geoffrey A. Manne
President and Founder, International Center for Law & Economics; Distinguished Fellow, Northwestern University Center on Law, Business & Economics
Thomas Hazlett
H.H. Macaulay Endowed Professor of Economics and Director of the Information Economy Project, Clemson University
Alan J. Meese
Ball Professor of Law, Co-Director, Center for the Study of Law and Markets, William & Mary Law School
Justin (Gus) Hurwitz
Professor of Law and Menard Director of the Nebraska Governance and Technology Center, University of Nebraska College of Law
Paul H. Rubin
Samuel Candler Dobbs Professor of Economics Emeritus, Emory University
Jonathan Klick
Charles A. Heimbold, Jr. Professor of Law, University of Pennsylvania Carey School of Law; Erasmus Chair of Empirical Legal Studies, Erasmus University Rotterdam
Michael Sykuta
Associate Professor of Economics and Executive Director of Financial Research Institute, University of Missouri Division of Applied Social Sciences
Thomas A. Lambert
Wall Chair in Corporate Law and Governance, University of Missouri Law School
John Yun
Associate Professor of Law and Deputy Executive Director of the Global Antitrust Institute, Scalia Law School

A raft of progressive scholars in recent years have argued that antitrust law remains blind to the emergence of so-called “attention markets,” in which firms compete by converting user attention into advertising revenue. This blindness, the scholars argue, has caused antitrust enforcers to clear harmful mergers in these industries.

It certainly appears the argument is gaining increased attention, for lack of a better word, with sympathetic policymakers. In a recent call for comments regarding their joint merger guidelines, the U.S. Justice Department (DOJ) and Federal Trade Commission (FTC) ask:

How should the guidelines analyze mergers involving competition for attention? How should relevant markets be defined? What types of harms should the guidelines consider?

Unfortunately, the recent scholarly inquiries into attention markets remain inadequate for policymaking purposes. For example, while many progressives focus specifically on antitrust authorities’ decisions to clear Facebook’s 2012 acquisition of Instagram and 2014 purchase of WhatsApp, they largely tend to ignore the competitive constraints Facebook now faces from TikTok (here and here).

When firms that compete for attention seek to merge, authorities need to infer whether the deal will lead to an “attention monopoly” (if the merging firms are the only, or primary, market competitors for some consumers’ attention) or whether other “attention goods” sufficiently constrain the merged entity. Put another way, the challenge is not just in determining which firms compete for attention, but in evaluating how strongly each constrains the others.

As this piece explains, recent attention-market scholarship fails to offer objective, let alone quantifiable, criteria that might enable authorities to identify firms that are unique competitors for user attention. These limitations should counsel policymakers to proceed with increased rigor when they analyze anticompetitive effects.

The Shaky Foundations of Attention Markets Theory

Advocates for more vigorous antitrust intervention have raised (at least) three normative arguments that pertain to attention markets and merger enforcement.

  • First, because they compete for attention, firms may be more competitively related than they seem at first sight. It is sometimes said that these firms are nascent competitors.
  • Second, the scholars argue that all firms competing for attention should not automatically be included in the same relevant market.
  • Finally, scholars argue that enforcers should adopt policy tools to measure market power in these attention markets—e.g., by applying a SSNIC test (“small but significant non-transitory increase in cost”), rather than a SSNIP test (“small but significant non-transitory increase in price”).

There are some contradictions among these three claims. On the one hand, proponents advocate adopting a broad notion of competition for attention, which would ensure that firms are seen as competitively related and thus boost the prospects that antitrust interventions targeting them will be successful. When the shoe is on the other foot, however, proponents fail to follow the logic they have sketched out to its natural conclusion; that is to say, they underplay the competitive constraints that are necessarily imposed by wider-ranging targets for consumer attention. In other words, progressive scholars are keen to ensure the concept is not mobilized to draw broader market definitions than is currently the case:

This “massive market” narrative rests on an obvious fallacy. Proponents argue that the relevant market includes “all substitutable sources of attention depletion,” so the market is “enormous.”

Faced with this apparent contradiction, scholars retort that the circle can be squared by deploying new analytical tools that measure competition for attention, such as the so-called SSNIC test. But do these tools actually resolve the contradiction? It would appear, instead, that they merely enable enforcers to selectively mobilize the attention-market concept in ways that fit their preferences. Consider the following description of the SSNIC test, by John Newman:

But if the focus is on the zero-price barter exchange, the SSNIP test requires modification. In such cases, the “SSNIC” (Small but Significant and Non-transitory Increase in Cost) test can replace the SSNIP. Instead of asking whether a hypothetical monopolist would increase prices, the analyst should ask whether the monopolist would likely increase attention costs. The relevant cost increases can take the form of more time or space being devoted to advertisements, or the imposition of more distracting advertisements. Alternatively, one might ask whether the hypothetical monopolist would likely impose an “SSNDQ” (Small but Significant and Non-Transitory Decrease in Quality). The latter framing should generally be avoided, however, for reasons discussed below in the context of anticompetitive effects. Regardless of framing, however, the core question is what would happen if the ratio between desired content to advertising load were to shift.

Tim Wu makes roughly the same argument:

The A-SSNIP would posit a hypothetical monopolist who adds a 5-second advertisement before the mobile map, and leaves it there for a year. If consumers accepted the delay, instead of switching to streaming video or other attentional options, then the market is correctly defined and calculation of market shares would be in order.

The key problem is this: consumer switching among platforms is consistent both with competition and with monopoly power. In fact, consumers are more likely to switch to other goods when they are faced with a monopoly. Perhaps more importantly, consumers can and do switch to a whole range of idiosyncratic goods. Absent some quantifiable metric, it is simply impossible to tell which of these alternatives are significant competitors.

None of this is new, of course. Antitrust scholars have spent decades wrestling with similar issues in connection with the price-related SSNIP test. The upshot of those debates is that the SSNIP test does not measure whether price increases cause users to switch. Instead, it examines whether firms can profitably raise prices above the competitive baseline. Properly understood, this nuance renders proposed SSNIC and SSNDQ tests (“small but significant non-transitory decrease in quality”) unworkable.
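The distinction can be made concrete with the standard critical-loss arithmetic that accompanies the SSNIP test. A minimal sketch (hypothetical numbers throughout) asks not whether any users switch, but whether a hypothetical monopolist’s 5% price increase remains profitable after the switching it induces:

```python
def critical_loss(price_increase: float, margin: float) -> float:
    """Standard critical-loss formula: the share of sales a hypothetical
    monopolist can lose before an X% price increase becomes unprofitable,
    computed as X / (X + M), where M is the percentage gross margin."""
    return price_increase / (price_increase + margin)

x, m = 0.05, 0.40                  # a 5% SSNIP and a 40% margin (hypothetical)
cl = critical_loss(x, m)           # ~11.1%

for actual_loss in (0.08, 0.15):   # two hypothetical rates of switching
    verdict = ("still profitable -> candidate market holds" if actual_loss < cl
               else "unprofitable -> broaden the candidate market")
    print(f"actual loss {actual_loss:.0%} vs. critical loss {cl:.1%}: {verdict}")
```

Switching occurs in both cases; what differs is whether the lost sales outweigh the higher margin earned on retained sales. Observed switching alone cannot answer that question.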

First and foremost, proponents wrongly presume to know how firms would choose to exercise their market power, rendering the resulting tests unfit for policymaking purposes. This mistake largely stems from the conflation of price levels and price structures in two-sided markets. In a two-sided market, the price level refers to the cumulative price charged to both sides of a platform. Conversely, the price structure refers to the allocation of prices among users on both sides of a platform (i.e., how much users on each side contribute to the costs of the platform). This is important because, as Jean-Charles Rochet and Jean Tirole show in their seminal work, changes to the price level and changes to the price structure each affect economic output in two-sided markets.

This has powerful ramifications for antitrust policy in attention markets. To be analytically useful, SSNIC and SSNDQ tests would have to alter the price level while holding the price structure equal. This is the opposite of what attention-market theory advocates are calling for. Indeed, increasing ad loads or decreasing the quality of services provided by a platform, while holding ad prices constant, evidently alters platforms’ chosen price structure.

This matters. Even if the proposed tests were properly implemented (which would be difficult: it is unclear what a 5% quality degradation would look like), the tests would likely lead to false negatives, as they force firms to depart from their chosen (and, thus, presumably profit-maximizing) price structure/price level combinations.

Consider the following illustration: to a first approximation, increasing the quantity of ads served on YouTube would presumably decrease Google’s revenues, since doing so would simultaneously increase output—and thus depress prices—in the ad market (note that the test becomes even more absurd if ad revenues are held constant). In short, scholars fail to recognize that the consumer side of these markets is intrinsically related to the ad side. Each side affects the other in ways that prevent policymakers from using single-sided ad-load increases or quality decreases as an independent variable.
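A toy model makes the false-negative problem visible. In the sketch below (all functional forms and numbers are hypothetical), a monopoly platform has already chosen the ad load that maximizes its ad revenue, given that users drift away as ads pile up. Imposing a further SSNIC-style ad-load increase while holding the ad price fixed is then unprofitable by construction, and the test would wrongly read that as an absence of market power:

```python
import numpy as np

def users(ad_load):
    """User participation falls as the ad load rises (hypothetical form)."""
    return max(0.0, 1.0 - 0.8 * ad_load)

def profit(ad_load, ad_price=1.0):
    """Ad revenue = price per ad x ads shown x users reached,
    holding the ad price (the other half of the structure) fixed."""
    return ad_price * ad_load * users(ad_load)

loads = np.linspace(0.0, 1.0, 1001)
chosen = loads[np.argmax([profit(a) for a in loads])]  # profit-maximizing load
bumped = chosen * 1.05                                  # imposed 5% increase

print(f"profit at chosen ad load {chosen:.3f}:   {profit(chosen):.4f}")
print(f"profit after the 5% ad-load increase: {profit(bumped):.4f}")
# Even this pure monopolist loses from the imposed increase, so the test
# would "find" no market power -- the false negative described above.
```

The point does not depend on the particular functional form: any platform already sitting at its profit-maximizing price structure will flunk such a test, monopolist or not.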

This leads to a second, more fundamental, flaw. To be analytically useful, these increased ad loads and quality deteriorations would have to be applied from the competitive baseline. Unfortunately, it is not obvious what this baseline looks like in two-sided markets.

Economic theory tells us that, in regular markets, goods are sold at marginal cost under perfect competition. However, there is no such shortcut in two-sided markets. As David Evans and Richard Schmalensee aptly summarize:

An increase in marginal cost on one side does not necessarily result in an increase in price on that side relative to price on the other. More generally, the relationship between price and cost is complex, and the simple formulas that have been derived for single-sided markets do not apply.

In other words, while economic theory suggests perfect competition among multi-sided platforms should result in zero economic profits, it does not say what the allocation of prices will look like in this scenario. There is thus no clearly defined competitive baseline upon which to apply increased ad loads or quality degradations. And this makes the SSNIC and SSNDQ tests unsuitable.

In short, the theoretical foundations necessary to apply the equivalent of a SSNIP test on the “free” side of two-sided platforms are largely absent (or exceedingly hard to apply in practice). Calls to implement SSNIC and SSNDQ tests thus greatly overestimate the current state of the art, as well as decision-makers’ ability to solve intractable economic conundrums. The upshot is that, while proposals to apply the SSNIP test to attention markets may have the trappings of economic rigor, the resemblance is superficial. As things stand, these tests fail to ascertain whether given firms are in competition, and in what market.

The Bait and Switch: Qualitative Indicia

These problems with the new quantitative metrics likely explain why proponents of tougher enforcement in attention markets often fall back upon qualitative indicia to resolve market-definition issues. As John Newman writes:

Courts, including the U.S. Supreme Court, have long employed practical indicia as a flexible, workable means of defining relevant markets. This approach considers real-world factors: products’ functional characteristics, the presence or absence of substantial price differences between products, whether companies strategically consider and respond to each other’s competitive conduct, and evidence that industry participants or analysts themselves identify a grouping of activity as a discrete sphere of competition. …The SSNIC test may sometimes be massaged enough to work in attention markets, but practical indicia will often—perhaps usually—be the preferable method.

Unfortunately, far from resolving the problems associated with measuring market power in digital markets (and of defining relevant markets in antitrust proceedings), this proposed solution would merely focus investigations on subjective and discretionary factors.

This can be easily understood by looking at the FTC’s Facebook complaint regarding its purchases of WhatsApp and Instagram. The complaint argues that Facebook—a “social networking service,” in the eyes of the FTC—was not interchangeable with either mobile-messaging services or online-video services. To support this conclusion, it cites a series of superficial differences. For instance, the FTC argues that online-video services “are not used primarily to communicate with friends, family, and other personal connections,” while mobile-messaging services “do not feature a shared social space in which users can interact, and do not rely upon a social graph that supports users in making connections and sharing experiences with friends and family.”

This is a poor way to delineate relevant markets. It wrongly portrays competitive constraints as a binary question, rather than a matter of degree. Pointing to the functional differences that exist among rival services mostly fails to resolve this question of degree. It also likely explains why advocates of tougher enforcement have often decried the use of qualitative indicia when the shoe is on the other foot—e.g., when authorities concluded that Facebook did not, in fact, compete with Instagram because their services were functionally different.

A second, and related, problem with the use of qualitative indicia is that they are, almost by definition, arbitrary. Take two services that may or may not be competitors, such as Instagram and TikTok. The two share some similarities, as well as many differences. For instance, while both services enable users to share and engage with video content, they differ significantly in the way this content is displayed. Unfortunately, absent quantitative evidence, it is simply impossible to tell whether, and to what extent, the similarities outweigh the differences. 

There is significant risk that qualitative indicia may lead to arbitrary enforcement, where markets are artificially narrowed by pointing to superficial differences among firms, and where competitive constraints are overemphasized by pointing to consumer switching. 

The Way Forward

The difficulties discussed above should serve as a good reminder that market definition is but a means to an end.

As William Landes, Richard Posner, and Louis Kaplow have all observed (here and here), market definition is merely a proxy for market power, which in turn enables policymakers to infer whether consumer harm (the underlying question to be answered) is likely in a given case.

Given the difficulties inherent in properly defining markets, policymakers should redouble their efforts to precisely measure both potential barriers to entry (the obstacles that may lead to market power) and anticompetitive effects (the potentially undesirable effects of market power), under a case-by-case analysis that looks at both sides of a platform.

Unfortunately, this is not how the FTC has proceeded in recent cases. The FTC’s Facebook complaint, to cite but one example, merely assumes the existence of network effects (a potential barrier to entry) with no effort to quantify their magnitude. Likewise, the agency’s assessment of consumer harm is just two pages long and includes superficial conclusions that appear plucked from thin air:

The benefits to users of additional competition include some or all of the following: additional innovation … ; quality improvements … ; and/or consumer choice … . In addition, by monopolizing the U.S. market for personal social networking, Facebook also harmed, and continues to harm, competition for the sale of advertising in the United States.

Not one of these assertions is based on anything that could remotely be construed as empirical or even anecdotal evidence. Instead, the FTC’s claims are presented as self-evident. Given the difficulties surrounding market definition in digital markets, this superficial analysis of anticompetitive harm is simply untenable.

In short, discussions around attention markets emphasize the important role of case-by-case analysis underpinned by the consumer welfare standard. Indeed, the fact that some of antitrust enforcement’s usual benchmarks are unreliable in digital markets reinforces the conclusion that an empirically grounded analysis of barriers to entry and actual anticompetitive effects must remain the cornerstones of sound antitrust policy. Or, put differently, uncertainty surrounding certain aspects of a case is no excuse for arbitrary speculation. Instead, authorities must meet such uncertainty with an even more vigilant commitment to thoroughness.

During the exceptional rise in stock-market valuations from March 2020 to January 2022, both equity investors and antitrust regulators implicitly agreed that so-called “Big Tech” firms enjoyed unbeatable competitive advantages as gatekeepers with largely unmitigated power over the digital ecosystem.

Investors bid up the value of tech stocks to exceptional levels, anticipating no competitive threat to incumbent platforms. Antitrust enforcers and some legislators have acted on the same underlying assumption. In their case, it has spurred advocacy of dramatic remedies—including breaking up the Big Tech platforms—as necessary interventions to restore competition.

Other voices in the antitrust community have been more circumspect. A key reason is the theory of contestable markets, developed in the 1980s by the late William Baumol and other economists, which holds that even extremely large market shares are at best a potential indicator of market power. To illustrate, consider the extreme case of a market occupied by a single firm. Intuitively, the firm would appear to have unqualified pricing power. Not so fast, say contestable-market theorists. Suppose entry costs into the market are low and consumers can easily move to other providers. The apparent monopolist will then act as if the market were populated by other competitors. The takeaway: market share alone cannot demonstrate market power without evidence of sufficiently strong barriers to market entry.
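The hit-and-run logic behind contestability can be put in one line (a stylized sketch with hypothetical magnitudes): an entrant that must sink entry cost F can profitably undercut an incumbent charging price p whenever

    (p − c) · Q > F

where c is marginal cost and Q is the volume of sales the entrant can capture before the incumbent responds. As F approaches zero, any price above c invites entry, so the only sustainable price is p = c, whatever the incumbent’s market share happens to be.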

While regulators and some legislators have overlooked this inconvenient principle, it appears the market has not. To illustrate, look no further than the Feb. 3 $230 billion crash in the market value of Meta Platforms—parent company of Facebook, Instagram, and WhatsApp, among other services.

In its antitrust suit against Meta, the Federal Trade Commission (FTC) has argued that Meta’s Facebook service enjoys a social-networking monopoly, a contention that the judge in the case initially rejected in June 2021 as so lacking in factual support that the suit was provisionally dismissed. The judge’s ruling (which he withdrew last month, allowing the suit to go forward after the FTC submitted a revised complaint) has been portrayed as evidence for the view that existing antitrust law sets overly demanding evidentiary standards that unfairly shelter corporate defendants. 

Yet, the record-setting single-day loss in Meta’s value suggests the evidentiary standard is set just about right and the judge’s skepticism was fully warranted. Consider one of the principal reasons behind Meta’s plunge in value: its service had suffered substantial losses of users to TikTok, a formidable rival in a social-networking market in which the FTC claims that Facebook faces no serious competition. The market begs to differ. In light of the obvious competitive threat posed by TikTok and other services, investors reassessed Facebook’s staying power, which was then reflected in its owner Meta’s downgraded stock price.

Just as the investment bubble that had supported the stock market’s case for Meta has popped, so too must the regulatory bubble that had supported the FTC’s antitrust case against it. Investors’ reevaluation rebuts the FTC’s strained market definition that had implausibly excluded TikTok as a competitor.

Even more fundamentally, the market’s assessment shows that Facebook’s users face nominal switching costs—in which case, its leadership position is contestable and the Facebook “monopoly” is not much of a monopoly. While this conclusion might seem surprising, Facebook’s vulnerability is hardly exceptional: Nokia, Blackberry, AOL, Yahoo, Netscape, and PalmPilot illustrate how often seemingly unbeatable tech leaders have been toppled with remarkable speed.

The unraveling of the FTC’s case against what would appear to be an obviously dominant platform should be a wake-up call for those policymakers who have embraced populist antitrust’s view that existing evidentiary requirements, which minimize the risk of “false positive” findings of anticompetitive conduct, should be set aside as an inconvenient obstacle to regulatory and judicial intervention. 

None of this should be interpreted to deny that concentration levels in certain digital markets raise significant antitrust concerns that merit close scrutiny. In particular, regulators have overlooked how some leading platforms have devalued intellectual-property rights in a manner that distorts technology and content markets by advantaging firms that operate integrated product and service ecosystems while disadvantaging firms that specialize in supplying the technological and creative inputs on which those ecosystems rely.  

The fundamental point is that potential risks to competition posed by any leading platform’s business practices can be assessed through rigorous fact-based application of the existing toolkit of antitrust analysis. This is critical to evaluate whether a given firm likely occupies a transitory, rather than durable, leadership position. The plunge in Meta’s stock in response to a revealed competitive threat illustrates the perils of discarding that surgical toolkit in favor of a blunt “big is bad” principle.

Contrary to what has become an increasingly common narrative in policy discussions and political commentary, the existing framework of antitrust analysis was not designed by scholars strategically acting to protect “big business.” Rather, this framework was designed and refined by scholars dedicated to rationalizing, through the rigorous application of economic principles, an incoherent body of case law that had often harmed consumers by shielding incumbents against threats posed by more efficient rivals. The legal shortcuts being pursued by antitrust populists to detour around appropriately demanding evidentiary requirements are writing a “back to the future” script that threatens to return antitrust law to that unfortunate predicament.

Antitrust policymakers around the world have taken a page out of the Silicon Valley playbook and decided to “move fast and break things.” While the slogan is certainly catchy, applying it to the policymaking world is unfortunate and, ultimately, threatens to harm consumers.

Several antitrust authorities in recent months have announced their intention to block (or, at least, challenge) a spate of mergers that, under normal circumstances, would warrant only limited scrutiny and face little prospect of outright prohibition. This is notably the case for several vertical mergers, as well as for mergers between firms that are only potential competitors (sometimes framed as “killer acquisitions”). These include Facebook’s acquisition of Giphy (U.K.), Nvidia’s ARM Ltd. deal (U.S., EU, and U.K.), and Illumina’s purchase of GRAIL (EU). It is also the case for horizontal mergers in non-concentrated markets, such as WarnerMedia’s proposed merger with Discovery, which has faced significant political backlash.

Some of these deals fail even to implicate “traditional” merger-notification thresholds. Facebook’s purchase of Giphy was only notifiable because of the U.K. Competition and Markets Authority’s broad interpretation of its “share of supply test” (which eschews traditional revenue thresholds). Likewise, the European Commission relied on a highly controversial interpretation of the so-called “Article 22 referral” procedure in order to review Illumina’s GRAIL purchase.

Some have praised these interventions, claiming antitrust authorities should take their chances and prosecute high-profile deals. It certainly appears that authorities are pressing their luck because they face few penalties for wrongful prosecutions. Overly aggressive merger enforcement might even reinforce their bargaining position in subsequent cases. In other words, enforcers risk imposing social costs on firms and consumers because their incentives to prosecute mergers are not aligned with those of society as a whole.

None of this should come as a surprise to anyone who has been following this space. As my ICLE colleagues and I have been arguing for quite a while, weakening the guardrails that surround merger-review proceedings opens the door to arbitrary interventions that are difficult (though certainly not impossible) to remediate before courts.

The negotiations that surround merger-review proceedings involve firms and authorities bargaining in the shadow of potential litigation. Whether and which concessions are made will depend chiefly on what the parties believe will be the outcome of litigation. If firms think courts will safeguard their merger, they will offer authorities few potential remedies. Conversely, if authorities believe courts will support their decision to block a merger, they are unlikely to accept concessions that stop short of the parties withdrawing their deal.

This simplified model suggests that neither enforcers nor merging parties are in position to “exploit” the merger-review process, so long as courts review decisions effectively. Under this model, overly aggressive enforcement would merely lead to defeat in court (and, expecting this, merging parties would offer few concessions to authorities).

Put differently, court proceedings are both a dispute-resolution mechanism and a source of rulemaking. Because parties bargain in the shadow of predictable judicial outcomes, only marginal cases should lead to actual disputes. Most harmful mergers will be deterred, and clearly beneficial ones will be cleared rapidly. So long as courts apply the consumer welfare standard consistently, firms’ merger decisions—along with any rulings or remedies—should all primarily serve consumers’ interests.

At least, that is the theory. But there are factors that can serve to undermine this efficient outcome. In the field of merger control, this is notably the case with court delays that prevent parties from effectively challenging merger decisions.

While delays between when a legal claim is filed and a judgment is rendered aren’t always detrimental (as Richard Posner observes, speed can be costly), it is essential that these delays be accounted for in any subsequent damages and penalties. Parties that prevail in court might otherwise only obtain reparations that are below the market rate, reducing the incentive to seek judicial review in the first place.

The problem is particularly acute when it comes to merger reviews. Merger challenges might lead the parties to abandon a deal because they estimate the transaction will no longer be commercially viable by the time courts have decided the matter. This is a problem, insofar as neither U.S. nor EU antitrust law generally requires authorities to compensate parties for wrongful merger decisions. For example, courts in the EU have declined to fully compensate aggrieved companies (e.g., the CFI in Schneider) and have set an exceedingly high bar for such claims to succeed at all.

In short, parties have little incentive to challenge merger decisions if the only positive outcome is for their deals to be posthumously sanctified. This diminished incentive to litigate may leave too few cases to generate precedent that could help future merging firms. Ultimately, the balance of bargaining power is tilted in favor of competition authorities.

Some Data on Mergers

While not necessarily dispositive, the available evidence suggests that parties often drop their deals when authorities either block them (as in the EU) or challenge them in court (as in the United States).

U.S. merging parties nearly always either reach a settlement or scrap their deal when their merger is challenged. There were 43 transactions challenged by either the U.S. Justice Department (15) or the Federal Trade Commission (28) in 2020. Of these, 15 were abandoned and almost all the remaining cases led to settlements.

The EU picture is similar. The European Commission blocks, on average, about one merger every year (30 over the last 31 years). Most in-depth investigations are settled in exchange for remedies offered by the merging firms (141 out of 239). While the EU does not publish detailed statistics concerning abandoned mergers, it is rare for firms to appeal merger-prohibition decisions. The European Court of Justice’s database lists only six such appeals over a similar timespan. The vast majority of blocked mergers are scrapped, with the parties declining to appeal.

This proclivity to abandon mergers is surprising, given firms’ high success rate in court. Of the six merger-annulment appeals in the ECJ’s database (CK Hutchison Holdings Ltd.’s acquisition of Telefónica Europe Plc; Ryanair’s acquisition of a controlling stake in Aer Lingus; a proposed merger between Deutsche Börse and NYSE Euronext; Tetra Laval’s takeover of Sidel Group; a merger between Schneider Electric SA and Legrand SA; and Airtours’ acquisition of First Choice), merging firms won four. While precise numbers are harder to come by in the United States, it is also reportedly rare for U.S. antitrust enforcers to win merger-challenge cases.

One explanation is that only marginal cases ever make it to court. In other words, firms with weak cases are, all else being equal, less likely to litigate. However, that is unlikely to explain all abandoned deals.

There are documented cases in which it was clearly delays, rather than self-selection, that caused firms to scrap planned mergers. In the EU’s Airtours proceedings, the merging parties dropped their transaction even though they went on to prevail in court (and First Choice, the target firm, was acquired by another rival). This is inconsistent with the notion that proposed mergers are abandoned only when the parties would have a weak case in court (indeed, the Commission’s decision was widely seen as controversial).

Antitrust policymakers also generally acknowledge that mergers are often time-sensitive. That’s why merger rules on both sides of the Atlantic tend to impose strict timelines within which antitrust authorities must review deals.

In the end, if self-selection based on case strength were the only criterion merging firms used in deciding whether to appeal a merger challenge, one would not expect an equilibrium in which firms prevail in more than two-thirds of cases. If firms anticipated that a successful court case would preserve a multi-billion dollar merger, the relatively small burden of legal fees should not dissuade them from litigating, even if their chance of success were tiny. We would expect to see more firms losing in court.
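A back-of-the-envelope expected-value calculation makes the point (the figures are hypothetical): a rational firm should litigate whenever the expected gain exceeds the cost, i.e., whenever

    p · V > C

where p is the probability of prevailing, V is the value preserved by saving the merger, and C is the cost of litigating. With V = $2 billion and C = $20 million, litigation pays whenever p exceeds 1%. If firms really did litigate every challenge offering a 1% chance or better, observed win rates would sit far below the two-thirds we actually see; that they do not suggests some other cost, most plausibly delay, is screening cases out before they reach court.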

The upshot is that antitrust challenges and prohibition decisions likely cause at least some firms to abandon their deals because court proceedings are not seen as an effective remedy. This perception, in turn, reinforces authorities’ bargaining position and thus encourages firms to offer excessive remedies in hopes of staving off lengthy litigation.

Conclusion

A general rule of policymaking is that rules should seek to ensure that agents internalize both the positive and negative effects of their decisions. This, in turn, should ensure that they behave efficiently.

In the field of merger control, those incentives are misaligned. Given the prevailing political climate on both sides of the Atlantic, challenging large corporate acquisitions likely generates significant political capital for antitrust authorities. But wrongful merger prohibitions are unlikely to elicit the kinds of judicial rebukes that would compel authorities to proceed more carefully.

Put differently, in the field of antitrust law, court proceedings ought to serve as a guardrail to ensure that enforcement decisions ultimately benefit consumers. When that shield is removed, it is no longer a given that authorities—who, in theory, act as agents of society—will act in the best interests of that society, rather than maximize their own preferences.

Ideally, we should ensure that antitrust authorities bear the social costs of faulty decisions, by compensating, at least, the direct victims of their actions (i.e., the merging firms). However, this would likely require new legislation to that effect, as there currently are too many obstacles to such cases. It is thus unlikely to represent a short-term solution.

In the meantime, regulatory restraint appears to be the only realistic solution. Or, one might say, authorities should “move carefully and avoid breaking stuff.”

On both sides of the Atlantic, 2021 has seen legislative and regulatory proposals to mandate that various digital services be made interoperable with others. Several bills to do so have been proposed in Congress; the EU’s proposed Digital Markets Act would mandate interoperability in certain contexts for “gatekeeper” platforms; and the UK’s competition regulator will be given powers to require interoperability as part of a suite of “pro-competitive interventions” that are hoped to increase competition in digital markets.

The European Commission plans to require Apple to use USB-C charging ports on iPhones to allow interoperability among different chargers (to save, the Commission estimates, two grams of waste per European per year). Demands for various forms of interoperability have been at the center of at least two major lawsuits: Epic’s case against Apple and a separate lawsuit against Apple by the app called Coronavirus Reporter. In July, a group of pro-intervention academics published a white paper calling interoperability “the ‘Super Tool’ of Digital Platform Governance.”

What is meant by the term “interoperability” varies widely. It can refer to relatively narrow interventions in which user data from one service is made directly portable to other services, rather than the user having to download and later re-upload it. At the other end of the spectrum, it could mean regulations to require virtually any vertical integration be unwound. (Should a Tesla’s engine be “interoperable” with the chassis of a Land Rover?) And in between are various proposals for specific applications of interoperability—some product working with another made by another company.

Why Isn’t Everything Interoperable?

The world is filled with examples of interoperability that arose through the (often voluntary) adoption of standards. Credit card companies oversee massive interoperable payments networks; screwdrivers are interoperable with screws made by other manufacturers, although different standards exist; many U.S. colleges accept credits earned at other accredited institutions. The containerization revolution in shipping is an example of interoperability leading to enormous efficiency gains, with a government subsidy to encourage the adoption of a single standard.

And interoperability can emerge over time. Microsoft Word used to be maddeningly non-interoperable with other word processors. Once OpenOffice entered the market, Microsoft patched its product to support OpenOffice files; Word documents now work slightly better with products like Google Docs, as well.

But there are also lots of things that could be interoperable but aren’t, like the Tesla motors that can’t easily be removed and added to other vehicles. The charging cases for Apple’s AirPods and Sony’s wireless earbuds could, in principle, be shaped to be interoperable. Medical records could, in principle, be standardized and made interoperable among healthcare providers, and it’s easy to imagine some of the benefits that could come from being able to plug your medical history into apps like MyFitnessPal and Apple Health. Keurig pods could, in principle, be interoperable with Nespresso machines. Your front door keys could, in principle, be made interoperable with my front door lock.

The reason not everything is interoperable like this is because interoperability comes with costs as well as benefits. It may be worth letting different earbuds have different designs because, while it means we sacrifice easy interoperability, we gain the ability for better designs to be brought to market and for consumers to have choice among different kinds. We may find that, while digital health records are wonderful in theory, the compliance costs of a standardized format might outweigh those benefits.

Manufacturers may choose to sell an expensive device with a relatively cheap upfront price tag, relying on consumer “lock-in” and a stream of supplies and updates to finance the “full” price over time, provided the consumer likes the product enough to keep using it; printers and their ink cartridges are the classic example.

Interoperability can remove a layer of security. I don’t want my bank account to be interoperable with any payments app, because it increases the risk of getting scammed. What I like about my front door lock is precisely that it isn’t interoperable with anyone else’s key. Lots of people complain about popular Twitter accounts being obnoxious, rabble-rousing, and stupid; it’s not difficult to imagine the benefits of a new, similar service that wanted everyone to start from the same level and so did not allow users to carry their old Twitter following with them.

There thus may be particular costs that prevent interoperability from being worth the tradeoff, such as that:

  1. It might be too costly to implement and/or maintain.
  2. It might prescribe a certain product design and prevent experimentation and innovation.
  3. It might add too much complexity and/or confusion for users, who may prefer not to have certain choices.
  4. It might increase the risk of something not working, or of security breaches.
  5. It might prevent certain pricing models that increase output.
  6. It might compromise some element of the product or service that benefits specifically from not being interoperable.

In a market that is functioning reasonably well, we should be able to assume that competition and consumer choice will discover the desirable degree of interoperability among different products. If there are benefits to making your product interoperable with others that outweigh the costs of doing so, that should give you an advantage over competitors and allow you to compete them away. If the costs outweigh the benefits, the opposite will happen—consumers will choose products that are not interoperable with each other.

In short, we cannot infer from the absence of interoperability that something is wrong, since we frequently observe that the costs of interoperability outweigh the benefits.

Of course, markets do not always lead to optimal outcomes. In cases where a market is “failing”—e.g., because competition is obstructed, or because there are important externalities that are not accounted for by the market’s prices—certain goods may be under-provided. In the case of interoperability, this can happen if firms struggle to coordinate upon a single standard, or because firms’ incentives to establish a standard are not aligned with the social optimum (i.e., interoperability might be optimal and fail to emerge, or vice versa).
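The coordination problem can be made concrete with a stylized two-firm standards game (the payoffs are hypothetical and chosen only to illustrate the structure):

                        Firm 2: standard A    Firm 2: standard B
    Firm 1: standard A       (3, 3)                (1, 1)
    Firm 1: standard B       (1, 1)                (3, 3)

Both (A, A) and (B, B) are equilibria, and nothing in the game guarantees that the firms converge on either one, let alone on whichever standard would be socially optimal. This is one mechanism by which valuable interoperability can fail to emerge even when everyone would benefit from it.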

But the analysis cannot stop there: just because a market might not be functioning well and does not currently provide some form of interoperability, we cannot assume that, if it were functioning well, it would provide that interoperability.

Interoperability for Digital Platforms

Since we know that many clearly functional markets and products do not provide all forms of interoperability that we could imagine them providing, it is perfectly possible that many badly functioning markets and products would still not provide interoperability, even if they did not suffer from whatever has obstructed competition or effective coordination in that market. In these cases, imposing interoperability would destroy value.

It would therefore be a mistake to assume that more interoperability in digital markets would be better, even if you believe that those digital markets suffer from too little competition. Let’s say, for the sake of argument, that Facebook/Meta has market power that allows it to keep its subsidiary WhatsApp from being interoperable with other competing services. Even then, we still would not know if WhatsApp users would want that interoperability, given the trade-offs.

A look at smaller competitors like Telegram and Signal, which we have no reason to believe have market power, demonstrates that they also are not interoperable with other messaging services. Signal is run by a nonprofit, and thus has little incentive to obstruct users for the sake of market power. Why does it not provide interoperability? I don’t know, but I would speculate that the security risks and technical costs of doing so outweigh the expected benefit to Signal’s users. If that is true, it seems strange to assume away the potential costs of making WhatsApp interoperable, especially if those costs may relate to things like security or product design.

Interoperability and Contact-Tracing Apps

A full consideration of the trade-offs is also necessary to evaluate the lawsuit that Coronavirus Reporter filed against Apple. Coronavirus Reporter was a COVID-19 contact-tracing app that Apple rejected from the App Store in March 2020. Its makers are now suing Apple for, they say, stifling competition in the contact-tracing market. Apple’s defense is that it only allowed COVID-19 apps from “recognised entities such as government organisations, health-focused NGOs, companies deeply credentialed in health issues, and medical or educational institutions.” In effect, by barring it from the App Store, and offering no other way to install the app, Apple denied Coronavirus Reporter interoperability with the iPhone. Coronavirus Reporter argues that Apple should be punished for doing so.

No doubt, Apple’s decision did reduce competition among COVID-19 contact-tracing apps. But increasing competition among COVID-19 contact-tracing apps via mandatory interoperability might have costs in other parts of the market. It might, for instance, confuse users who would like a very straightforward way to download their country’s official contact-tracing app. Or it might require access to certain data that users might not want to share, preferring to let an intermediary like Apple decide for them. Narrowing choice like this can be valuable, since it means individual users don’t have to research every single possible option every time they buy or use some product. If you don’t believe me, turn off your spam filter for a few days and see how you feel.

In this case, the potential costs of the access that Coronavirus Reporter wants are obvious: while it may have had the best contact-tracing service in the world, sorting it from other less reliable and/or scrupulous apps may have been difficult and the risk to users may have outweighed the benefits. As Apple and Facebook/Meta constantly point out, the security risks involved in making their services more interoperable are not trivial.

It isn’t competition among COVID-19 apps that is important, per se. As ever, competition is a means to an end, and maximizing it in one context—via, say, mandatory interoperability—cannot be judged without knowing the trade-offs that maximization requires. Even if we thought of Apple as a monopolist over iPhone users—ignoring the fact that Apple’s iPhones obviously are substitutable with Android devices to a significant degree—it wouldn’t follow that the more interoperability, the better.

A ‘Super Tool’ for Digital Market Intervention?

The Coronavirus Reporter example may feel like an “easy” case for opponents of mandatory interoperability. Of course we don’t want anything calling itself a COVID-19 app to have totally open access to people’s iPhones! But what’s vexing about mandatory interoperability is that it’s very hard to sort the sensible applications from the silly ones, and most proposals don’t even try. The leading U.S. House proposal for mandatory interoperability, the ACCESS Act, would require that platforms “maintain a set of transparent, third-party-accessible interfaces (including application programming interfaces) to facilitate and maintain interoperability with a competing business or a potential competing business,” based on APIs designed by the Federal Trade Commission.

The only nods to the costs of this requirement are provisions that further require platforms to set “reasonably necessary” security standards, and a provision allowing the removal of third-party apps that don’t “reasonably secure” user data. No other costs of mandatory interoperability are acknowledged at all.

The same goes for the even more substantive proposals for mandatory interoperability. Released in July 2021, “Equitable Interoperability: The ‘Super Tool’ of Digital Platform Governance” is co-authored by some of the most esteemed competition economists in the business. While it details obscure points about matters like how chat groups might work across interoperable chat services, it is virtually silent on any of the costs or trade-offs of its proposals. Indeed, the first “risk” the report identifies is that regulators might be too slow to impose interoperability in certain cases! It reads like interoperability has been asked what its biggest weaknesses are in a job interview.

Where the report does acknowledge trade-offs—for example, interoperability making it harder for a service to monetize its user base, who can just bypass ads on the service by using a third-party app that blocks them—it just says that the overseeing “technical committee or regulator may wish to create conduct rules” to decide.

Ditto with the objection that mandatory interoperability might limit differentiation among competitors – like, for example, how imposing the old micro-USB standard on Apple might have stopped us from getting the Lightning port. Again, they punt: “We recommend that the regulator or the technical committee consult regularly with market participants and allow the regulated interface to evolve in response to market needs.”

But if we could entrust this degree of product design to regulators, weighing the costs of a feature against its benefits, we wouldn’t need markets or competition at all. And the report just assumes away many other obvious costs: “the working hypothesis we use in this paper is that the governance issues are more of a challenge than the technical issues.” Despite its illustrious panel of co-authors, the report fails to grapple with the most basic counterargument possible: its proposals have costs as well as benefits, and it’s not straightforward to decide which is bigger than which.

Strangely, the report includes a section that “looks ahead” to “Google’s Dominance Over the Internet of Things.” This, the report says, stems from the company’s “market power in device OS’s [that] allows Google to set licensing conditions that position Google to maintain its monopoly and extract rents from these industries in future.” The report claims this inevitability can only be avoided by imposing interoperability requirements.

The authors completely ignore that a smart home interoperability standard has already been developed, backed by a group of 170 companies that include Amazon, Apple, and Google, as well as SmartThings, IKEA, and Samsung. It is open source and, in principle, should allow a Google Home speaker to work with, say, an Amazon Ring doorbell. In markets where consumers really do want interoperability, it can emerge without a regulator requiring it, even if some companies have apparent incentive not to offer it.

If You Build It, They Still Might Not Come

Much of the case for interoperability interventions rests on the presumption that the benefits will be substantial. It’s hard to know how powerful network effects really are in preventing new competitors from entering digital markets, and none of the more substantial reports cited by the “Super Tool” report really try.

In reality, the cost of switching among services or products is never zero. Simply pointing out that particular costs—such as network effect-created switching costs—happen to exist doesn’t tell us much. In practice, many users are happy to multi-home across different services. I use at least eight different messaging apps every day (Signal, WhatsApp, Twitter DMs, Slack, Discord, Instagram DMs, Google Chat, and iMessage/SMS). I don’t find it particularly costly to switch among them, and have been happy to adopt new services that seemed to offer something new. Discord has built a thriving 150-million-user business, despite these switching costs. What if people don’t actually care if their Instagram DMs are interoperable with Slack?

None of this is to argue that interoperability cannot be useful. But it is often overhyped, and it is difficult to do in practice (because of those annoying trade-offs). After nearly five years, Open Banking in the UK—cited by the “Super Tool” report as an example of what it wants for other markets—still isn’t finished in terms of functionality. It has required an enormous amount of time and investment by all parties involved and has yet to deliver obvious benefits in terms of consumer outcomes, let alone greater competition among the current accounts that have been made interoperable with other services. (My analysis of the lessons of Open Banking for other services is here.) Phone number portability, which is also cited by the “Super Tool” report, is another example of how hard even simple interventions can be to get right.

The world is filled with cases where we could imagine some benefits from interoperability but choose not to have them, because the costs are greater still. None of this is to say that interoperability mandates can never work, but their benefits can be oversold, especially when their costs are ignored. Many of mandatory interoperability’s more enthusiastic advocates should remember that such trade-offs exist—even for policies they really, really like.

In a recent op-ed, Robert Bork Jr. laments the Biden administration’s drive to jettison the Consumer Welfare Standard that has formed nearly half a century of antitrust jurisprudence. The move can be seen in the near-revolution at the Federal Trade Commission, in the president’s executive order on competition enforcement, and in several of the major antitrust bills currently before Congress.

Bork notes the Competition and Antitrust Law Enforcement Reform Act, introduced by Sen. Amy Klobuchar (D-Minn.), would “outlaw any mergers or acquisitions for the more than 80 large U.S. companies valued over $100 billion.”

Bork is correct that more than 80 companies would be covered, but the number is likely to be much higher. The Klobuchar bill does not explicitly outlaw such mergers; rather, under certain circumstances, it shifts the burden of proof to the merging parties, who must demonstrate that the benefits of the transaction outweigh the potential risks. Under current law, the burden is on the government to demonstrate that the potential costs outweigh the potential benefits.

One of the measure’s specific triggers for this burden-shifting is if the acquiring party has a market capitalization, assets, or annual net revenue of more than $100 billion and seeks a merger or acquisition valued at $50 million or more. About 120 or more U.S. companies satisfy at least one of these conditions. The end of this post provides a list of publicly traded companies, according to Zacks’ stock screener, that would likely be subject to the shift in burden of proof.
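To see how mechanical the bill’s trigger is, consider a minimal sketch in code of the burden-shifting test as described above (the thresholds are the ones cited in this post; the function and variable names are hypothetical illustrations, not statutory language):

    # Minimal sketch of the Klobuchar bill's burden-shifting trigger, as
    # characterized above. Thresholds per the text; names are illustrative.
    BIG_FIRM_THRESHOLD = 100e9  # $100 billion in market cap, assets, or net revenue
    DEAL_THRESHOLD = 50e6       # $50 million transaction value

    def burden_shifts(market_cap, assets, net_revenue, deal_value):
        """Return True if the burden of proof would shift to the merging parties."""
        is_big_firm = max(market_cap, assets, net_revenue) > BIG_FIRM_THRESHOLD
        return is_big_firm and deal_value >= DEAL_THRESHOLD

    # A $120B-cap acquirer buying a $60M target would face the shifted burden;
    # the same deal by a firm under all three size thresholds would not.
    print(burden_shifts(120e9, 30e9, 20e9, 60e6))  # True
    print(burden_shifts(90e9, 20e9, 20e9, 60e6))   # False

Note that the test turns entirely on the acquirer’s size and the deal’s nominal value; nothing in it depends on market shares or competitive effects, which is precisely the arbitrariness discussed below.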

If the goal is to go after Big Tech, the Klobuchar bill hits the mark. All of the FAANG companies—Facebook, Amazon, Apple, Netflix, and Alphabet (Google’s parent company)—satisfy one or more of the criteria. So do Microsoft and PayPal.

But even some smaller tech firms would be subject to the shift in burden of proof. Zoom and Square have market caps that would trip the bill’s trigger, and Snap is hovering around $100 billion in market cap. Twitter and eBay, however, are well under any of the thresholds. Likewise, privately owned Advance Communications, owner of Reddit, would also likely fall short of any of the triggers.

Snapchat has a little more than 300 million monthly active users. Twitter and Reddit each have about 330 million monthly active users. Nevertheless, under the Klobuchar bill, Snapchat is presumed to have more market power than either Twitter or Reddit, simply because the market assigns a higher valuation to Snap.

But this bill is about more than Big Tech. Tesla, which sold its first car only 13 years ago, is now considered big enough that it will face the same antitrust scrutiny as the Big 3 automakers. Walmart, Costco, and Kroger would be subject to the shifted burden of proof, while Safeway and Publix would escape such scrutiny. An acquisition by U.S.-based Nike would be put under the microscope, but a similar acquisition by Germany’s Adidas would not fall under the Klobuchar bill’s thresholds.

Tesla accounts for less than 2% of the vehicles sold in the United States. I have no idea what Walmart, Costco, Kroger, or Nike’s market share is, or even what comprises “the” market these companies compete in. What we do know is that the U.S. Department of Justice and Federal Trade Commission excel at narrowly crafting market definitions so that just about any company can be defined as dominant.

So much of the recent interest in antitrust has focused on Big Tech. But even the biggest of Big Tech firms operate in dynamic and competitive markets. None of my four children use Facebook or Twitter. My wife and I don’t use Snapchat. We all use Netflix, but we also use Hulu, Disney+, HBO Max, YouTube, and Amazon Prime Video. None of these services have a monopoly on our eyeballs, our attention, or our pocketbooks.

The antitrust bills currently working their way through Congress abandon the long-standing balancing of pro- versus anti-competitive effects of mergers in favor of a “big is bad” approach. While the Klobuchar bill appears to provide clear guidance on the thresholds triggering a shift in the burden of proof, the arbitrary nature of the thresholds will result in arbitrary application of the burden of proof. If passed, we will soon be faced with a case in which two firms who differ only in market cap, assets, or sales will be subject to very different antitrust scrutiny, resulting in regulatory chaos.

Publicly traded companies with more than $100 billion in market capitalization

3M, Abbott Laboratories, AbbVie, Adobe Inc., Advanced Micro Devices, Alphabet Inc., Amazon, American Express, American Tower, Amgen, Apple Inc., Applied Materials, AT&T, Bank of America, Berkshire Hathaway, BlackRock, Boeing, Bristol Myers Squibb, Broadcom Inc., Caterpillar Inc., Charles Schwab Corp., Charter Communications, Chevron Corp., Cisco Systems, Citigroup, Comcast, Costco, CVS Health, Danaher Corp., Deere & Co., Eli Lilly and Co., ExxonMobil, Facebook Inc., General Electric Co., Goldman Sachs, Honeywell, IBM, Intel, Intuit, Intuitive Surgical, Johnson & Johnson, JPMorgan Chase, Lockheed Martin, Lowe’s, Mastercard, McDonald’s, Medtronic, Merck & Co., Microsoft, Morgan Stanley, Netflix, NextEra Energy, Nike Inc., Nvidia, Oracle Corp., PayPal, PepsiCo, Pfizer, Philip Morris International, Procter & Gamble, Qualcomm, Raytheon Technologies, Salesforce, ServiceNow, Square Inc., Starbucks, Target Corp., Tesla Inc., Texas Instruments, The Coca-Cola Co., The Estée Lauder Cos., The Home Depot, The Walt Disney Co., Thermo Fisher Scientific, T-Mobile US, Union Pacific Corp., United Parcel Service, UnitedHealth Group, Verizon Communications, Visa Inc., Walmart, Wells Fargo, Zoom Video Communications

Publicly traded companies with more than $100 billion in current assets

Ally Financial, American International Group, BNY Mellon, Capital One, Citizens Financial Group, Fannie Mae, Fifth Third Bank, First Republic Bank, Ford Motor Co., Freddie Mac, KeyBank, M&T Bank, Northern Trust, PNC Financial Services, Regions Financial Corp., State Street Corp., Truist Financial, U.S. Bancorp

Publicly traded companies with more than $100 billion in sales

AmerisourceBergen, Anthem, Cardinal Health, Centene Corp., Cigna, Dell Technologies, General Motors, Kroger, McKesson Corp., Walgreens Boots Alliance

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Nicolas Petit himself, the Joint Chair in Competition Law at the Department of Law at European University Institute in Fiesole, Italy, and at EUI’s Robert Schuman Centre for Advanced Studies. He is also an invited professor at the College of Europe in Bruges.]

A lot of water has gone under the bridge since my book was published last year. To close this symposium, I thought I would discuss the new phase of antitrust statutorification taking place before our eyes. In the United States, Congress is working on five antitrust bills that propose to subject platforms to stringent obligations, including a ban on mergers and acquisitions, required data portability and interoperability, and line-of-business restrictions. In the European Union (EU), lawmakers are examining the proposed Digital Markets Act (“DMA”) that sets out a complicated regulatory system for digital “gatekeepers,” with per se behavioral limitations of their freedom over contractual terms, technological design, monetization, and ecosystem leadership.

Proponents of legislative reform on both sides of the Atlantic appear to share the common view that ongoing antitrust adjudication efforts are both instrumental and irrelevant. They are instrumental because government (or plaintiff) losses build the evidence needed to support the view that antitrust doctrine is exceedingly conservative, and that legal reform is needed. Two weeks ago, antitrust reform activists ran to Twitter to point out that the U.S. District Court dismissal of the Federal Trade Commission’s (FTC) complaint against Facebook was one more piece of evidence supporting the view that the antitrust pendulum needed to swing. They are instrumental because, again, government (or plaintiff) wins will support scaling antitrust enforcement in the marginal case by adoption of governmental regulation. In the EU, antitrust cases follow each other almost as night follows day, lending credence to the view that regulation will bring much-needed coordination and economies of scale.

But both instrumentalities are, at the end of the line, irrelevant, because they lead to the same conclusion: legislative reform is long overdue. With this in mind, the logic of lawmakers is that they need not await the courts, and they can advance with haste and confidence toward the promulgation of new antitrust statutes.

The antitrust reform process that is unfolding gives cause for concern. The issue is not legal reform in itself. There is no suggestion here that statutory reform is necessarily inferior, and no correlative reification of the judge-made-law method. Legislative intervention can occur for good reason, as when it breaks judicial inertia caused by ideological logjam.

The issue is rather one of precipitation. There is a lot of learning in the cases. The point, simply put, is that a supplementary court-legislative dialogue would yield additional information—or what Guido Calabresi has called “starting points” for regulation—that premature legislative intervention is sweeping under the rug. This issue is important because specification errors (see Doug Melamed’s symposium piece on this) in statutory legislation are not uncommon. Feedback from court cases creates a factual record that will often be missing when lawmakers act too precipitously.

Moreover, a court-legislative iteration is useful when the issues in discussion are cross-cutting. The digital economy brings an abundance of them. As tech analyst Ben Evans has observed, data-sharing obligations raise tradeoffs between contestability and privacy. Chapter VI of my book shows that breakups of social networks or search engines might promote rivalry and, at the same time, increase the leverage of advertisers to extract more user data and conduct more targeted advertising. In such cases, Calabresi said, judges who know the legal topography are well-placed to elicit the preferences of society. He added that they are better placed than government agencies’ officials or delegated experts, who often attend to the immediate problem without the big picture in mind (all the more when officials are denied opportunities to engage with civil society and the press, as per the policy announced by the new FTC leadership).

Of course, there are three objections to this. The first consists of arguing that statutes are needed now because courts are too slow to deal with problems. The argument is not dissimilar to Frank Easterbrook’s concerns about irreversible harms to the economy, though with a tweak. Where Easterbrook’s concern was one of ossification of Type I errors due to stare decisis, the concern here is one of entrenchment of durable monopoly power in the digital sector due to Type II errors. The concern, however, fails the test of evidence. The available data in both the United States and Europe shows unprecedented vitality in the digital sector. Venture capital funding cruises at historical heights, fueling new firm entry, business creation, and economic dynamism in the U.S. and EU digital sectors, topping all other industries. Unless we require higher levels of entry from digital markets than from other industries—or discount the social value of entry in the digital sector—this should give us reason to push pause on lawmaking efforts.

The second objection is that following an incremental process of updating the law through the courts creates intolerable uncertainty. But this objection, too, is unconvincing at best. One may ask which brings more uncertainty: an abrupt legislative change of the law after decades of legal stability, or an experimental process of judicial renovation.

Besides, ad hoc statutes, such as the ones in discussion, are likely to confront, quickly and dramatically, the problem of their own legal obsolescence. Detailed and technical statutes specify rights, requirements, and procedures that often do not stand the test of time. For example, the DMA likely captures Windows as a core platform service subject to gatekeeping. But is the market power of Microsoft over Windows still relevant today, and isn’t it constrained in effect by existing antitrust rules? In antitrust, vagueness in critical statutory terms allows room for change.[1] The best way to give meaning to buzzwords like “smart” or “future-proof” regulation consists of building in first principles, not in creating discretionary opportunities for permanent adaptation of the law. In reality, it is hard to see how the methods of future-proof regulation currently discussed in the EU create less uncertainty than a court process.

The third objection is that we do not need more information, because we now benefit from economic knowledge showing that existing antitrust laws are too permissive of anticompetitive business conduct. But is the economic literature actually supportive of stricter rules against defendants than the rule-of-reason framework that applies in many unilateral conduct cases and in merger law? The answer is surely no. The theoretical economic literature has travelled far in the past 50 years. Of particular interest are works on network externalities, switching costs, and multi-sided markets. But the progress achieved in the economic understanding of markets is more descriptive than normative.

Take the celebrated multi-sided market theory. The main contribution of the theory is its advice to decision-makers to take the periscope out, so as to consider all possible welfare tradeoffs, not to be more or less defendant friendly. Payment cards provide a good example. Economic research suggests that any antitrust or regulatory intervention on prices affects tradeoffs between, and payoffs to, cardholders and merchants, cardholders and cash users, cardholders and banks, and banks and card systems. Equally numerous tradeoffs arise in many sectors of the digital economy, like ridesharing, targeted advertisement, or social networks. Multi-sided market theory renders these tradeoffs visible. But it does not come with a clear recipe for how to solve them. For that, one needs to follow first principles. A system of measurement that is flexible and welfare-based helps, as Kelly Fayne observed in her critical symposium piece on the book.

Another example might be worth considering. The theory of increasing returns suggests that markets subject to network effects tend to converge around the selection of a single technology standard, and it is not a given that the selected technology is the best one. One policy implication is that social planners might be justified in keeping a second option on the table. As I discuss in Chapter V of my book, the theory may support an M&A ban against platforms in tipped markets, on the conjecture that the assets of fringe firms might be efficiently repositioned to offer product differentiation to consumers. But the theory of increasing returns does not say under what conditions we can know that the selected technology is suboptimal. Moreover, if the selected technology is the optimal one, or if the suboptimal technology quickly obsolesces, are policy efforts at all needed?

Last, as Bo Heiden’s thought-provoking symposium piece argues, it is not a given that antitrust enforcement of rivalry in markets is the best way to keep an alternative technology alive, let alone to supply the innovation needed to deliver economic prosperity. Government procurement, science and technology policy, and intellectual-property policy might be equally effective (note that the fathers of the theory, like Brian Arthur or Paul David, have been very silent on antitrust reform).

There are, of course, exceptions to the limited normative content of modern economic theory. In some areas, economic theory is more predictive of consumer harms, like in relation to algorithmic collusion, interlocking directorates, or “killer” acquisitions. But the applications are discrete and industry-specific. All are insufficient to declare that the antitrust apparatus is dated and that it requires a full overhaul. When modern economic research turns normative, it is often way more subtle in its implications than some wild policy claims derived from it. For example, the emerging studies that claim to identify broad patterns of rising market power in the economy in no way lead to an implication that there are no pro-competitive mergers.

Similarly, the empirical picture of digital markets is incomplete. The past few years have seen a proliferation of qualitative research reports on industry structure in the digital sectors. Most suggest that industry concentration has risen, particularly in the digital sector. As with any research exercise, these reports’ findings deserve to be subject to critical examination before they can be deemed supportive of a claim of “sufficient experience.” Moreover, there is no reason to subject these reports to a lower standard of accountability on grounds that they have often been drafted by experts upon demand from antitrust agencies. After all, we academics are ethically obliged to be at least equally exacting with policy-based research as we are with science-based research.

Now, with healthy skepticism at the back of one’s mind, one can see immediately that the findings of expert reports to date have tended to downplay behavioral observations that counterbalance findings of monopoly power—such as intense business anxiety, technological innovation, and demand-expansion investments in digital markets. This was, I believe, the main takeaway from Chapter IV of my book. And less than six months ago, The Economist ran its leading story on the new marketplace reality of “Tech’s Big Dust-Up.”

More importantly, the findings of the various expert reports never seriously contemplate the possibility of competition by differentiation in business models among the platforms. Take privacy, for example. As Peter Klein reasonably writes in his symposium article, we should not be quick to assume market failure. After all, we might have more choice than meets the eye, with Google free but ad-based, and Apple pricey but less-targeted. More generally, Richard Langlois makes a very convincing point that diversification is at the heart of competition between the large digital gatekeepers. We might just be too short-termist—here, digital communications technology might help create a false sense of urgency—to wait for the end state of the Big Tech moligopoly.

Similarly, the expert reports did not really question the real possibility of competition for the purchase of regulation. As in the classic George Stigler paper, where the railroad industry fought motor-trucking competition with state regulation, the businesses that stand to lose most from the digital transformation might be rationally jockeying to convince lawmakers that not all business models are equal, and to steer regulation toward specific business models. Again, though it is hard to know how much weight to give this issue, there are signs that a coalition of large news corporations and the publishing oligopoly are behind many antitrust initiatives against digital firms.

Now, as is clear from these few lines, my cautionary note against antitrust statutorification might be more relevant to the U.S. market. In the EU, sunk investments have been made, expectations have been created, and regulation has now become inevitable. The United States, however, has a chance to get this right. Court cases are the way to go. And unlike what the popular coverage suggests, the recent District Court dismissal of the FTC case far from ruled out the applicability of U.S. antitrust laws to Facebook’s alleged killer acquisitions. On the contrary, the ruling actually contains an invitation to rework a rushed complaint. Perhaps, as Shane Greenstein observed in his retrospective analysis of the U.S. Microsoft case, we would all benefit if we studied more carefully the learning that lies in the cases, rather than hasten to produce instant antitrust analysis on Twitter that fits within 280 characters.


[1] But some threshold conditions like agreement or dominance might also become dated.