In the world of video games, the process by which players train themselves or their characters in order to overcome a difficult “boss battle” is called “leveling up.” I find that the phrase also serves as a useful metaphor in the context of corporate mergers. Here, “leveling up” can be thought of as acquiring another firm in order to enter or reinforce one’s presence in an adjacent market where a larger and more successful incumbent is already active.
In video-game terminology, that incumbent would be the “boss.” Acquiring firms choose to level up when they recognize that building internal capacity to compete with the “boss” is too slow, too expensive, or simply infeasible. An acquisition thus becomes the only way “to beat the boss” (or, at least, to maximize the odds of doing so).
Alas, this behavior is often mischaracterized as a “killer acquisition” or “reverse killer acquisition.” What separates leveling up from killer acquisitions is that the former turns the merged entity into a more powerful competitor, while the latter weakens competition. In the case of “reverse killer acquisitions,” the assumption is that the acquiring firm would have entered the adjacent market on its own absent the merger, leaving even more firms competing in that market.
In other words, the distinction ultimately boils down to a simple (though hard to answer) question: could both the acquiring and target firms have effectively competed with the “boss” without a merger?
Because they are ubiquitous in the tech sector, these mergers—sometimes also referred to as acquisitions of nascent competitors—have drawn tremendous attention from antitrust authorities and policymakers. All too often, policymakers fail to adequately consider the realistic counterfactual to a merger and mistake leveling up for a killer acquisition. The most recent high-profile example is Meta’s acquisition of the virtual-reality fitness app Within. But in what may be a hopeful sign of a turning of the tide, a federal court appears set to clear that deal over objections from the Federal Trade Commission (FTC).
Some Recent ‘Boss Battles’
The canonical example of leveling up in tech markets is likely Google’s acquisition of Android back in 2005. While Apple had not yet launched the iPhone, it was already clear by 2005 that mobile would become an important way to access the internet (including Google’s search services). Rumors were swirling that Apple, following its tremendously successful iPod, had started developing a phone, and Microsoft had been working on Windows Mobile for a long time.
In short, there was a serious risk that Google would be reliant on a single mobile gatekeeper (i.e., Apple) if it did not move quickly into mobile. Purchasing Android was seen as the best way to do so. (Indeed, averting an analogous sort of threat appears to be driving Meta’s move into virtual reality today.)
The natural next question is whether Google or Android could have succeeded in the mobile market absent the merger. My guess is that the answer is no. In 2005, Google did not produce any consumer hardware. Quickly and successfully making the leap would have been daunting. As for Android:
Google had significant advantages that helped it to make demands from carriers and OEMs that Android would not have been able to make. In other words, Google was uniquely situated to solve the collective action problem stemming from OEMs’ desire to modify Android according to their own idiosyncratic preferences. It used the appeal of its app bundle as leverage to get OEMs and carriers to commit to support Android devices for longer with OS updates. The popularity of its apps meant that OEMs and carriers would have great difficulty in going it alone without them, and so had to engage in some contractual arrangements with Google to sell Android phones that customers wanted. Google was better resourced than Android likely would have been and may have been able to hold out for better terms with a more recognizable and desirable brand name than a hypothetical Google-less Android. In short, though it is of course possible that Android could have succeeded despite the deal having been blocked, it is also plausible that Android became so successful only because of its combination with Google. (citations omitted)
In short, everything suggests that Google’s purchase of Android was a good example of leveling up. Note that much the same could be said about the company’s decision to purchase Fitbit in order to compete against Apple and its Apple Watch (which quickly dominated the market after its launch in 2015).
A more recent example of leveling up is Microsoft’s planned acquisition of Activision Blizzard. In this case, the merger appears to be about improving Microsoft’s competitive position in the platform market for game consoles, rather than in the adjacent market for games.
At the time of writing, Microsoft is staring down the barrel of a gun: Sony is on the cusp of becoming the runaway winner of yet another console generation. Microsoft’s executives appear to have concluded that this is partly due to a lack of exclusive titles on the Xbox platform. Hence, they are seeking to purchase Activision Blizzard, one of the most successful game studios, known among other things for its acclaimed Call of Duty series.
Again, the question is whether Microsoft could challenge Sony by improving its internal game-publishing branch (known as Xbox Game Studios) or whether it needs to acquire a whole new division. This is obviously a hard question to answer, but a cursory glance at the titles shipped by Microsoft’s publishing studio suggests that the issues it faces could not simply be resolved by throwing more money at its existing capacities. Indeed, Xbox Game Studios appears to be plagued by organizational failings that might only be solved by creating more competition within Microsoft itself. As one gaming journalist summarized:
The current predicament of these titles goes beyond the amount of money invested or the buzzwords used to market them – it’s about Microsoft’s plan to effectively manage its studios. Encouraging independence isn’t an excuse for such a blatantly hands-off approach which allows titles to fester for years in development hell, with some fostering mistreatment to occur. On the surface, it’s just baffling how a company that’s been ranked as one of the top 10 most reputable companies eight times in 11 years (as per RepTrak) could have such problems with its gaming division.
The upshot is that Microsoft appears to have recognized that its own game-development branch is failing, and that acquiring a well-functioning rival is the only way to rapidly compete with Sony. There is thus a strong case that competition authorities and courts should be wary of blocking the merger, as it has at least the potential to significantly increase competition in the game-console industry.
Finally, leveling up is sometimes a way for smaller firms to try and move faster than incumbents into a burgeoning and promising segment. The best example of this is arguably Meta’s effort to acquire Within, a developer of VR fitness apps. Rather than being an attempt to thwart competition from a competitor in the VR app market, the goal of the merger appears to be to compete with the likes of Google, Apple, and Sony at the platform level. As Mark Zuckerberg wrote back in 2015, when Meta’s VR/AR strategy was still in its infancy:
Our vision is that VR/AR will be the next major computing platform after mobile in about 10 years… The strategic goal is clearest. We are vulnerable on mobile to Google and Apple because they make major mobile platforms. We would like a stronger strategic position in the next wave of computing….
Over the next few years, we’re going to need to make major new investments in apps, platform services, development / graphics and AR. Some of these will be acquisitions and some can be built in house. If we try to build them all in house from scratch, then we risk that several will take too long or fail and put our overall strategy at serious risk. To derisk this, we should acquire some of these pieces from leading companies.
In short, many of the tech mergers that critics portray as killer acquisitions are just as likely to be attempts by firms to compete head-on with incumbents. This “leveling up” is precisely the sort of beneficial outcome that antitrust laws were designed to promote.
Building Products Is Hard
Critics are often quick to apply the “killer acquisition” label to any merger where a large platform is seeking to enter or reinforce its presence in an adjacent market. The preceding paragraphs demonstrate that it’s not that simple, as these mergers often enable firms to improve their competitive position in the adjacent market. For obvious reasons, antitrust authorities and policymakers should be careful not to thwart this competition.
The harder part is how to separate the wheat from the chaff. While I don’t have a definitive answer, an easy first step would be for authorities to more seriously consider the supply side of the equation.
Building a new product is incredibly hard, even for the most successful tech firms. Microsoft famously failed with its Zune music player and Windows Phone. The Google+ social network never gained any traction. Meta’s foray into the cryptocurrency industry was a sobering experience. Amazon’s Fire Phone bombed. Even Apple, which usually epitomizes Silicon Valley firms’ ability to enter new markets, has had its share of dramatic failures: Apple Maps, its Ping social network, and the first HomePod, to name a few.
To put it differently, policymakers should not assume that internal growth is always a realistic alternative to a merger. Instead, they should carefully examine whether such a strategy is timely, cost-effective, and likely to succeed.
This is obviously a daunting task. Firms will struggle to dispositively show that they need to acquire the target firm in order to effectively compete against an incumbent. The question essentially hinges on the quality of the firm’s existing management, engineers, and capabilities. All of these are difficult—perhaps even impossible—to measure. At the very least, policymakers can improve the odds of reaching a correct decision by approaching these mergers with an open mind.
Under Chair Lina Khan’s tenure, the FTC has opted for the opposite approach and taken a decidedly hostile view of tech acquisitions. The commission sued to block both Meta’s purchase of Within and Microsoft’s acquisition of Activision Blizzard. Likewise, several economists—notably Tommaso Valletti—have called for policymakers to reverse the burden of proof in merger proceedings, and opined that all mergers should be viewed with suspicion because, absent efficiencies, they always reduce competition.
Unfortunately, this skeptical approach is something of a self-fulfilling prophecy: when authorities view mergers with suspicion, they are likely to be dismissive of the benefits discussed above. Mergers will be blocked and entry into adjacent markets will occur via internal growth.
Large tech companies’ many failed attempts to enter adjacent markets via internal growth suggest that such an outcome would ultimately harm the digital economy. Too many “boss battles” will needlessly be lost, depriving consumers of precious competition and destroying startup companies’ exit strategies.
[This post is a contribution to Truth on the Market’s continuing digital symposium “FTC Rulemaking on Unfair Methods of Competition.” You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]
Federal Trade Commission (FTC) Chair Lina Khan has just sent her holiday wishlist to Santa Claus. It comes in the form of a policy statement on unfair methods of competition (UMC) that the FTC approved last week by a 3-1 vote. If there’s anything to be gleaned from the document, it’s that Khan and the agency’s majority bloc wish they could wield the same powers as Margrethe Vestager does in the European Union. Luckily for consumers, U.S. courts are unlikely to oblige.
Signed by the commission’s three Democratic commissioners, the UMC policy statement contains language that would be completely at home in a decision of the European Commission. It purports to reorient UMC enforcement (under Section 5 of the FTC Act) around typically European concepts, such as “competition on the merits.” This is an unambiguous repudiation of the rule of reason and, with it, the consumer welfare standard.
Unfortunately for its authors, these European-inspired aspirations are likely to fall flat. For a start, the FTC almost certainly does not have the power to enact such sweeping changes. More fundamentally, these concepts have been tried in the EU, where they have proven to be largely unworkable. On the one hand, critics (including the European judiciary) have excoriated the European Commission for its often economically unsound policymaking—enabled by the use of vague standards like “competition on the merits.” On the other hand, the Commission paradoxically believes that its competition powers are insufficient, creating the need for even stronger powers. The recently passed Digital Markets Act (DMA) is designed to fill this need.
As explained below, there is thus every reason to believe the FTC’s UMC statement will ultimately go down as a mistake, brought about by the current leadership’s hubris.
A Statement Is Just That
The first big obstacle to the FTC’s lofty ambitions is that its leadership does not have the power to rewrite either the FTC Act or courts’ interpretation of it. The agency’s leadership understands this much. And with that in mind, they ostensibly couch their statement in the case law of the U.S. Supreme Court:
Consistent with the Supreme Court’s interpretation of the FTC Act in at least twelve decisions, this statement makes clear that Section 5 reaches beyond the Sherman and Clayton Acts to encompass various types of unfair conduct that tend to negatively affect competitive conditions.
It is telling, however, that the cases cited by the agency—in a naked attempt to do away with economic analysis and the consumer welfare standard—are all at least 40 years old. Antitrust and consumer-protection laws have obviously come a long way since then, but none of that is mentioned in the statement. Inconvenient case law is simply shrugged off. To make matters worse, even the cases the FTC cites provide, at best, exceedingly weak support for its proposed policy.
For instance, as Commissioner Christine Wilson aptly notes in her dissenting statement, “the policy statement ignores precedent regarding the need to demonstrate anticompetitive effects.” Chief among these is the Boise Cascade Corp. v. FTC case, where the 9th U.S. Circuit Court of Appeals rebuked the FTC for failing to show actual anticompetitive effects:
In truth, the Commission has provided us with little more than a theory of the likely effect of the challenged pricing practices. While this general observation perhaps summarizes all that follows, we offer the following specific points in support of our conclusion.
There is a complete absence of meaningful evidence in the record that price levels in the southern plywood industry reflect an anticompetitive effect.
In short, the FTC’s statement is just that—a statement. Gus Hurwitz summarized this best in his post:
Today’s news that the FTC has adopted a new UMC Policy Statement is just that: mere news. It doesn’t change the law. It is non-precedential and lacks the force of law. It receives the benefit of no deference. It is, to use a term from the consumer-protection lexicon, mere puffery.
Lina’s European Dream
But let us imagine, for a moment, that the FTC has its way and courts go along with its policy statement. Would this be good for the American consumer? In order to answer this question, it is worth looking at competition enforcement in the European Union.
There are, indeed, striking similarities between the FTC’s policy statement and European competition law. Consider the resemblance between the following quotes, drawn from the FTC’s policy statement (“A” in each example) and from the European competition sphere (“B” in each example).
Example 1 – Competition on the merits and the protection of competitors:
A. The method of competition must be unfair, meaning that the conduct goes beyond competition on the merits.… This may include, for example, conduct that tends to foreclose or impair the opportunities of market participants, reduce competition between rivals, limit choice, or otherwise harm consumers. (here)
B. The emphasis of the Commission’s enforcement activity… is on safeguarding the competitive process… and ensuring that undertakings which hold a dominant position do not exclude their competitors by other means than competing on the merits… (here)
Example 2 – Proof of anticompetitive harm:
A. “Unfair methods of competition” need not require a showing of current anticompetitive harm or anticompetitive intent in every case. … [T]his inquiry does not turn to whether the conduct directly caused actual harm in the specific instance at issue. (here)
B. The Commission cannot be required… systematically to establish a counterfactual scenario…. That would, moreover, oblige it to demonstrate that the conduct at issue had actual effects, which… is not required in the case of an abuse of a dominant position, where it is sufficient to establish that there are potential effects. (here)
Example 3 – Multiple goals:
A. Given the distinctive goals of Section 5, the inquiry will not focus on the “rule of reason” inquiries more common in cases under the Sherman Act, but will instead focus on stopping unfair methods of competition in their incipiency based on their tendency to harm competitive conditions. (here)
B. In its assessment the Commission should pursue the objectives of preserving and fostering innovation and the quality of digital products and services, the degree to which prices are fair and competitive, and the degree to which quality or choice for business users and for end users is or remains high. (here)
Beyond their cosmetic resemblances, these examples reflect a deeper similarity. The FTC is attempting to introduce three core principles that also undergird European competition enforcement. The first is that enforcers should protect “the competitive process” by ensuring firms compete “on the merits,” rather than a more consequentialist goal like the consumer welfare standard (which essentially asks how a given practice affects economic output). The second is that enforcers should not be required to establish that conduct actually harms consumers. Instead, they need only show that such an outcome is (or will be) possible. The third principle is that competition policies pursue multiple, sometimes conflicting, goals.
In short, the FTC is trying to roll back U.S. enforcement to a bygone era predating the emergence of the consumer welfare standard (which is somewhat ironic for the agency’s progressive leaders). And this vision of enforcement is infused with elements that appear to be drawn directly from European competition law.
Europe Is Not the Land of Milk and Honey
All of this might not be so problematic if the European model of competition enforcement that the FTC now seeks to emulate were an unmitigated success, but that could not be further from the truth. As Geoffrey Manne, Sam Bowman, and I argued in a recently published paper, the European model has several shortcomings that militate against emulating it (the following quotes are drawn from that paper). These problems would almost certainly arise if the FTC’s statement were blessed by courts in the United States.
For a start, the more open-ended nature of European competition law makes it highly vulnerable to political interference. This is notably due to its multiple, vague, and often conflicting goals, such as the protection of the “competitive process”:
Because EU regulators can call upon a large list of justifications for their enforcement decisions, they are free to pursue cases that best fit within a political agenda, rather than focusing on the limited practices that are most injurious to consumers. In other words, there is largely no definable set of metrics to distinguish strong cases from weak ones under the EU model; what stands in its place is political discretion.
Politicized antitrust enforcement might seem like a great idea when your party is in power but, as Milton Friedman wisely observed, the mark of a strong system of government is that it operates well with the wrong person in charge. With this in mind, the FTC’s current leadership would do well to consider what their political opponents might do with these broad powers—such as using Section 5 to prevent online platforms from moderating speech.
A second important problem with the European model is that, because of its competitive-process goal, it does not adequately distinguish between exclusion resulting from superior efficiency and anticompetitive foreclosure:
By pursuing a competitive process goal, European competition authorities regularly conflate desirable and undesirable forms of exclusion precisely on the basis of their effect on competitors. As a result, the Commission routinely sanctions exclusion that stems from an incumbent’s superior efficiency rather than welfare-reducing strategic behavior, and routinely protects inefficient competitors that would otherwise rightly be excluded from a market.
This vastly enlarges the scope of potential antitrust liability, leading to risks of false positives that chill innovative behavior and create nearly unwinnable battles for targeted firms, while increasing compliance costs because of reduced legal certainty. Ultimately, this may hamper technological evolution and protect inefficient firms whose eviction from the market is merely a reflection of consumer preferences.
Finally, the European model results in enforcers having more discretion and enjoying greater deference from the courts:
[T]he EU process is driven by a number of laterally equivalent, and sometimes mutually exclusive, goals.… [A] large problem exists in the discretion that this fluid arrangement of goals yields.
The Microsoft case illustrates this problem well. In Microsoft, the Commission could have chosen to base its decision on a number of potential objectives. It notably chose to base its findings on the fact that Microsoft’s behavior reduced “consumer choice”. The Commission, in fact, discounted arguments that economic efficiency may lead to consumer welfare gains because “consumer choice” among a variety of media players was more important.
In short, the European model sorely lacks limiting principles. This likely explains why the European Court of Justice has started to pare back the Commission’s powers in a series of recent cases, including Intel, Post Danmark, Cartes Bancaires, and Servizio Elettrico Nazionale. These rulings appear to be an explicit recognition that overly broad competition enforcement not only fails to benefit consumers but, more fundamentally, is incompatible with the rule of law.
It is unfortunate that the FTC is trying to emulate a model of competition enforcement that—even in the progressively minded European public sphere—is increasingly questioned and cast aside as a result of its multiple shortcomings.
[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]
Philip K. Dick’s novella “The Minority Report” describes a futuristic world without crime. This state of the world is achieved thanks to the visions of three mutants—so-called “precogs”—who predict crimes before they occur, thereby enabling law enforcement to incarcerate people for crimes they were going to commit.
This utopia unravels when the protagonist—the head of the police Precrime division, who is himself predicted to commit a murder—learns that the precogs often produce “minority reports”: i.e., visions of the future that differ from one another. The existence of these alternate potential futures undermines the very foundations of Precrime. For every crime that is averted, an innocent person may be convicted of a crime they were not going to commit.
You might be wondering what any of this has to do with antitrust and last week’s Truth on the Market symposium on Antitrust’s Uncertain Future. Given the recent adoption of the European Union’s Digital Markets Act (DMA) and the prospect that Congress could soon vote on the American Innovation and Choice Online Act (AICOA), we asked contributors to write short pieces describing what the future might look like—for better or worse—under these digital-market regulations, or in their absence.
The resulting blog posts offer a “minority report” of sorts. Together, they dispel the myth that these regulations would necessarily give rise to a brighter future of intensified competition, innovation, and improved online services. To the contrary, our contributors cautioned—albeit with varying degrees of severity—that these regulations create risks that policymakers should not ignore.
The Majority Report
If policymakers like European Commissioner for Competition Margrethe Vestager, Federal Trade Commission Chair Lina Khan, and Sen. Amy Klobuchar (D-Minn.) are to be believed, a combination of tougher regulations and heightened antitrust enforcement is the only way to revitalize competition in digital markets. As Klobuchar argues on her website:
To ensure our future economic prosperity, America must confront its monopoly power problem and restore competitive markets. … [W]e must update our antitrust laws for the twenty-first century to protect the competitive markets that are the lifeblood of our economy.
Speaking of the recently passed DMA, Vestager suggested the regulation could spark an economic boom, drawing parallels with the Renaissance:
The work we put into preserving and strengthening our Single Market will equip us with the means to show the world that our path based on open trade and fair competition is truly better. After all, Bruges did not become great by conquest and ruthless occupation. It became great through commerce and industry.
Several antitrust scholars have been similarly bullish about the likely benefits of such regulations. For instance, Fiona Scott Morton, Steven Salop, and David Dinielli write that:
It is an appropriate expression of democracy for Congress to enact pro-competitive statutes to maintain the vibrancy of the online economy and allow for continued innovation that benefits non-platform businesses as well as end users.
In short, there is a widespread belief that such regulations would make the online world more competitive and innovative, to the benefit of consumers.
The Minority Reports
To varying degrees, the responses to our symposium suggest proponents of such regulations may be falling prey to what Harold Demsetz called “the nirvana fallacy.” In other words, it is wrong to assume that the resulting enforcement would be costless and painless for consumers.
Even the symposium’s pieces belonging to the literary realms of sci-fi and poetry shed a powerful light on the deep-seated problems that underlie contemporary efforts to make online industries “more contestable and fair.” As several scholars highlighted, such regulations may prevent firms from designing new and improved products, or from maintaining existing ones. Among my favorite passages was this excerpt from Daniel Crane’s fictional piece about a software engineer in Helsinki trying to integrate restaurant and hotel ratings into a vertical search engine:
“We’ve been watching how you’re coding the new walking tour search vertical. It seems that you are designing it to give preference to restaurants, cafés, and hotels that have been highly rated by the Tourism Board.”
“Yes, that’s right. Restaurants, cafés, and hotels that have been rated by the Tourism Board are cleaner, safer, and more convenient. That’s why they have been rated.”
“But you are forgetting that the Tourism Board is one of our investors. This will be considered self-preferencing.”
Even if a covered platform could establish that a challenged practice would maintain or substantially enhance the platform’s core functionality, it would also have to prove that the conduct was “narrowly tailored” and “reasonably necessary” to achieve the desired end, and, for many behaviors, the “le[ast] discriminatory means” of doing so. That is a remarkably heavy burden…. It is likely, then, that AICOA would break existing products and services and discourage future innovation.
Several of our contributors voiced fears that bans on self-preferencing would prevent platforms from acquiring startups that complement their core businesses, thus making it harder to launch new services and deterring startup investment. For instance, in my alternate history post, I argued that such bans might have prevented Google’s purchase of Android, thus reducing competition in the mobile phone industry.
A second important objection was that self-preferencing bans are hard to apply consistently. Policymakers would notably have to draw lines between the different components that make up an economic good. As Ramsi Woodcock wrote in a poem:
You: The meaning of component,
We can always redefine.
From batteries to molecules,
We can draw most any line.
This lack of legal certainty will prove hard to resolve. Geoffrey Manne noted that regulatory guidelines were unlikely to be helpful in this regard:
Indeed, while laws are sometimes purposefully vague—operating as standards rather than prescriptive rules—to allow for more flexibility, the concepts introduced by AICOA don’t even offer any cognizable standards suitable for fine-tuning.
Alden Abbott was similarly concerned about the vague language that underpins AICOA:
There is, however, one inescapable reality—as night follows day, passage of AICOA would usher in an extended period of costly litigation over the meaning of a host of AICOA terms. … The history of antitrust illustrates the difficulties inherent in clarifying the meaning of novel federal statutory language. It was not until 21 years after passage of the Sherman Antitrust Act that the Supreme Court held that Section 1 of the act’s prohibition on contracts, combinations, and conspiracies “in restraint of trade” only covered unreasonable restraints of trade.
Our contributors also argued that bans on self-preferencing and interoperability mandates might be detrimental to users’ online experience. Lazar Radic and Friso Bostoen both wrote pieces taking readers through a typical day in worlds where self-preferencing is prohibited. Neither was particularly utopian. In his satirical piece, Lazar Radic imagined an online shopping experience where all products are given equal display:
“Time to do my part,” I sigh. My eyes—trained by years of practice—dart from left to right and from right to left, carefully scrutinizing each coffee capsule on offer for an equal number of seconds. … After 13 brands and at least as many flavors, I select the platform’s own brand, “Basic”… and then answer a series of questions to make sure I have actually given competitors’ products fair consideration.
Closer to the world we live in, Friso Bostoen described how going through a succession of choice screens—a likely outcome of regulations such as AICOA and the DMA—would be tiresome for consumers:
A new fee structure… God, save me from having to tap ‘learn more’ to find out what that means. I’ve had to learn more about the app ecosystem than is good for me already.
Finally, our symposium highlighted several other ways in which poorly designed online regulations may harm consumers. Stephen Dnes concluded that mandatory data-sharing regimes will deter companies from producing valuable data in the first place. Julie Carlson argued that prohibiting platforms from preferencing their own goods would disproportionately harm low-income consumers. And Aurelien Portuese surmised that, if passed into law, AICOA would dampen firms’ incentives to invest in new services. Last, but not least, in a co-authored piece, Filip Lubinski and Lazar Radic joked that self-preferencing bans could be extended to the offline world:
The success of AICOA has opened our eyes to an even more ancient and perverse evil: self-preferencing in offline markets. It revealed to us that—for centuries, if not millennia—companies in various industries—from togas to wine, from cosmetics to insurance—had, in fact, always preferred their own initiatives over those of their rivals!
The Problems of Online Precrime
Online regulations like AICOA and the DMA mark a radical shift from existing antitrust laws. They move competition policy from a paradigm of ex post enforcement, based upon a detailed case-by-case analysis of effects, to one of ex ante prohibitions.
Despite obvious and superficial differences, there are clear parallels between this new paradigm and the world of “The Minority Report”: firms would be punished for behavior that has not yet transpired or is not proven to harm consumers.
This might be fine if we knew for certain that the prohibited conduct would harm consumers (i.e., if there were no “minority reports,” to use our previous analogy). But every entry in our symposium suggests things are not that simple. There are a wide range of outcomes and potential harms associated with the regulation of digital markets. This calls for a more calibrated approach to digital-competition policy, as opposed to the precrime of AICOA and the DMA.
[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]
May 2007, Palo Alto
The California sun shone warmly on Eric Schmidt’s face as he stepped out of his car and made his way to have dinner at Madera, a chic Palo Alto restaurant.
Dining out was a welcome distraction from the endless succession of strategy meetings with the nitpickers of the law department, which had been Schmidt’s bread and butter for the last few months. The lawyers seemed to take issue with any new project that Google’s engineers came up with. “How would rivals compete with our maps?”; “Our placement should be no less favorable than rivals’”; etc. The objections were endless.
This is not how things were supposed to be. When Schmidt became Google’s chief executive officer in 2001, his mission was to take the company public and grow the firm into markets other than search. But then something unexpected happened. After campaigning on an anti-monopoly platform, a freshman senator from Minnesota managed to get her anti-discrimination bill through Congress in just her first few months in office. All companies with a market cap of more than $150 billion were now prohibited from favoring their own products. Google had recently crossed that Rubicon, putting a stop to years of carefree expansion into new markets.
But today was different. The waiter led Schmidt to his table overlooking Silicon Valley. His acquaintance was already seated.
With his tall and slender figure, Andy Rubin had garnered quite a reputation among Silicon Valley’s elite. After engineering stints at Apple and Motorola, developing various handheld devices, Rubin had set up his own shop. The idea was bold: develop the first open mobile platform—based on Linux, no less. Rubin had pitched the project to Google in 2005 but, given the regulatory uncertainty over the future of antitrust—the same wave of populist sentiment that would carry Klobuchar to office one year later—Schmidt and his team had passed.
“There’s no money in open source,” the company’s CFO ruled. Schmidt had initially objected, but with more pressing matters to deal with, he ultimately followed his CFO’s advice.
Schmidt and Rubin were exchanging pleasantries about Microsoft and Java when the meals arrived–sublime Wagyu short ribs and charred spring onions paired with a 1986 Chateau Margaux.
Rubin finally cut to the chase. “Our mobile operating system will rely on state-of-the-art touchscreen technology. Just like the device being developed by Apple. Buying Android today might be your only way to avoid paying monopoly prices to access Apple’s mobile users tomorrow.”
Schmidt knew this all too well: The future was mobile, and few companies were taking Apple’s upcoming iPhone seriously enough. Even better, as a firm, Android was treading water. Like many other startups, it had excellent software but no business model. And with the Klobuchar bill putting the brakes on startup investment—monetizing an ecosystem had become a delicate legal proposition, deterring established firms from acquiring startups—Schmidt was in the middle of a buyer’s market. “Android could make us a force to reckon with,” Schmidt thought to himself.
But he quickly shook that thought, remembering the words of his CFO: “There is no money in open source.” In an ideal world, Google would have used Android to promote its search engine—placing a search bar on Android devices to draw users to its search engine—or maybe it could have tied a proprietary app store to the operating system, thus earning money from in-app purchases. But with the Klobuchar bill, these were no longer options. Not without endless haggling with Google’s planning committee of lawyers.
And they would have a point, of course. Google risked heavy fines and court-issued injunctions that would stop the project in its tracks. Such risks were not to be taken lightly. Schmidt needed a plan to make the Android platform profitable while accommodating Google’s rivals, but he had none.
The desserts were served, Schmidt steered the conversation to other topics, and the sun slowly set over Sand Hill Road.
Present Day, Cupertino
Apple continues to dominate the smartphone industry with few signs of significant competition on the horizon. While there are continuing rumors that Google, Facebook, or even TikTok might enter the market, these have so far failed to materialize.
Google’s failed partnership with Samsung, back in 2012, still looms large over the industry. After lengthy talks to create an open mobile platform came to nothing, Google ultimately entered into an agreement with the longstanding mobile manufacturer. Unfortunately, the deal was mired in antitrust issues and clashing visions—Samsung was believed to favor a closed ecosystem, rather than the open platform envisioned by Google.
The sense that Apple is running away with the market is only reinforced by recent developments. Last week, Tim Cook unveiled the company’s new iPhone 11—the first ever mobile device to come with three cameras. With an eye-watering price tag of $1,199 for the top-of-the-line Pro model, it certainly is not cheap. In his presentation, Cook assured consumers Apple had solved the security issues that have been an important bugbear for the iPhone and its ecosystem of competing app stores.
Analysts expect the new range of devices will help Apple cement the iPhone’s 50% market share. This is especially likely given the important challenges that Apple’s main rivals continue to face.
The Windows Phone’s reputation for buggy software continues to undermine its competitive position, despite its comparatively low price point. Andy Rubin, the head of the Windows Phone, was reassuring in a press interview, but there is little tangible evidence he will manage to successfully rescue the flailing ship. Meanwhile, Huawei has come under increased scrutiny for the threats it may pose to U.S. national security. The Chinese manufacturer may face a U.S. sales ban, unless the company’s smartphone branch is sold to a U.S. buyer. Oracle is said to be a likely candidate.
The sorry state of mobile competition has become an increasingly prominent policy issue. President Klobuchar took to Twitter and called on mobile-device companies to refrain from acting as monopolists, intimating elsewhere that failure to do so might warrant tougher regulation than her anti-discrimination bill.
A raft of progressive scholars in recent years have argued that antitrust law remains blind to the emergence of so-called “attention markets,” in which firms compete by converting user attention into advertising revenue. This blindness, the scholars argue, has caused antitrust enforcers to clear harmful mergers in these industries.
It certainly appears the argument is gaining increased attention, for lack of a better word, with sympathetic policymakers. In a recent call for comments regarding their joint merger guidelines, the U.S. Justice Department (DOJ) and Federal Trade Commission (FTC) ask:
How should the guidelines analyze mergers involving competition for attention? How should relevant markets be defined? What types of harms should the guidelines consider?
Unfortunately, the recent scholarly inquiries into attention markets remain inadequate for policymaking purposes. For example, while many progressives focus specifically on antitrust authorities’ decisions to clear Facebook’s 2012 acquisition of Instagram and 2014 purchase of WhatsApp, they largely tend to ignore the competitive constraints Facebook now faces from TikTok (here and here).
When firms that compete for attention seek to merge, authorities need to infer whether the deal will lead to an “attention monopoly” (if the merging firms are the only, or primary, market competitors for some consumers’ attention) or whether other “attention goods” sufficiently constrain the merged entity. Put another way, the challenge is not just in determining which firms compete for attention, but in evaluating how strongly each constrains the others.
As this piece explains, recent attention-market scholarship fails to offer objective, let alone quantifiable, criteria that might enable authorities to identify firms that are unique competitors for user attention. These limitations should counsel policymakers to proceed with increased rigor when they analyze anticompetitive effects.
The Shaky Foundations of Attention Markets Theory
Advocates for more vigorous antitrust intervention have raised (at least) three normative arguments that pertain to attention markets and merger enforcement.
First, because they compete for attention, firms may be more competitively related than they seem at first sight. It is sometimes said that these firms are nascent competitors.
Second, the scholars argue that all firms competing for attention should not automatically be included in the same relevant market.
Finally, scholars argue that enforcers should adopt policy tools to measure market power in these attention markets—e.g., by applying a SSNIC test (“small but significant non-transitory increase in cost”), rather than a SSNIP test (“small but significant non-transitory increase in price”).
There are some contradictions among these three claims. On the one hand, proponents advocate adopting a broad notion of competition for attention, which would ensure that firms are seen as competitively related and thus boost the prospects that antitrust interventions targeting them will be successful. When the shoe is on the other foot, however, proponents fail to follow the logic they have sketched out to its natural conclusion; that is to say, they underplay the competitive constraints that are necessarily imposed by wider-ranging targets for consumer attention. In other words, progressive scholars are keen to ensure the concept is not mobilized to draw broader market definitions than is currently the case:
This “massive market” narrative rests on an obvious fallacy. Proponents argue that the relevant market includes “all substitutable sources of attention depletion,” so the market is “enormous.”
Faced with this apparent contradiction, scholars retort that the circle can be squared by deploying new analytical tools that measure competition for attention, such as the so-called SSNIC test. But do these tools actually resolve the contradiction? It would appear, instead, that they merely enable enforcers to selectively mobilize the attention-market concept in ways that fit their preferences. Consider the following description of the SSNIC test, by John Newman:
But if the focus is on the zero-price barter exchange, the SSNIP test requires modification. In such cases, the “SSNIC” (Small but Significant and Non-transitory Increase in Cost) test can replace the SSNIP. Instead of asking whether a hypothetical monopolist would increase prices, the analyst should ask whether the monopolist would likely increase attention costs. The relevant cost increases can take the form of more time or space being devoted to advertisements, or the imposition of more distracting advertisements. Alternatively, one might ask whether the hypothetical monopolist would likely impose an “SSNDQ” (Small but Significant and Non-Transitory Decrease in Quality). The latter framing should generally be avoided, however, for reasons discussed below in the context of anticompetitive effects. Regardless of framing, however, the core question is what would happen if the ratio between desired content to advertising load were to shift.
The A-SSNIP would posit a hypothetical monopolist who adds a 5-second advertisement before the mobile map, and leaves it there for a year. If consumers accepted the delay, instead of switching to streaming video or other attentional options, then the market is correctly defined and calculation of market shares would be in order.
The key problem is this: consumer switching among platforms is consistent both with competition and with monopoly power. In fact, consumers are more likely to switch to other goods when they are faced with a monopoly. Perhaps more importantly, consumers can and do switch to a whole range of idiosyncratic goods. Absent some quantifiable metric, it is simply impossible to tell which of these alternatives are significant competitors.
None of this is new, of course. Antitrust scholars have spent decades wrestling with similar issues in connection with the price-related SSNIP test. The upshot of those debates is that the SSNIP test does not measure whether price increases cause users to switch. Instead, it examines whether firms can profitably raise prices above the competitive baseline. Properly understood, this nuance renders proposed SSNIC and SSNDQ tests (“small but significant non-transitory decrease in quality”) unworkable.
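To make that nuance concrete, the hypothetical-monopolist exercise can be written as a simple profitability check (a stylized textbook rendering of my own, not language drawn from any agency guideline):

$$\pi\big(1.05\,p^{*}\big) \;=\; 1.05\,p^{*}\,Q\big(1.05\,p^{*}\big) \;-\; C\big(Q(1.05\,p^{*})\big) \;>\; \pi\big(p^{*}\big),$$

where $p^{*}$ is the competitive benchmark price and $Q(\cdot)$ is demand over the candidate market. If a hypothetical monopolist could profitably sustain the 5 percent increase, the candidate market is properly defined; if too many customers would switch away, the market is drawn more broadly. Crucially, the increase is evaluated from the competitive baseline $p^{*}$, not from whatever price the firm currently charges—and, as discussed below, it is that step which has no obvious analogue on the attention side of a platform.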
First and foremost, proponents wrongly presume to know how firms would choose to exercise their market power, rendering the resulting tests unfit for policymaking purposes. This mistake largely stems from the conflation of price levels and price structures in two-sided markets. In a two-sided market, the price level refers to the cumulative price charged to both sides of a platform. Conversely, the price structure refers to the allocation of prices among users on both sides of a platform (i.e., how much users on each side contribute to the costs of the platform). This is important because, as Jean-Charles Rochet and Jean Tirole show in their Nobel-winning work, changes to the price level and changes to the price structure both affect economic output in two-sided markets.
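A minimal way to see the level/structure distinction, using my own notation rather than Rochet and Tirole’s:

$$P \;=\; p_{U} + p_{A},$$

where $p_{U}$ is the price borne by users (which may be paid in attention, through ad load, rather than in money) and $p_{A}$ is the price paid by advertisers. The price level is the total $P$; the price structure is the split $(p_{U}, p_{A})$ for any given $P$. An SSNIC that raises the ad load while holding ad prices constant raises $p_{U}$ without adjusting $p_{A}$, so it moves the level and the structure at the same time rather than isolating either one.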
This has powerful ramifications for antitrust policy in attention markets. To be analytically useful, SSNIC and SSNDQ tests would have to alter the price level while holding the price structure equal. This is the opposite of what attention-market theory advocates are calling for. Indeed, increasing ad loads or decreasing the quality of services provided by a platform, while holding ad prices constant, evidently alters platforms’ chosen price structure.
This matters. Even if the proposed tests were properly implemented (which would be difficult: it is unclear what a 5% quality degradation would look like), the tests would likely lead to false negatives, as they force firms to depart from their chosen (and, thus, presumably profit-maximizing) price structure/price level combinations.
Consider the following illustration: to a first approximation, increasing the quantity of ads served on YouTube would presumably decrease Google’s revenues, as doing so would simultaneously increase output in the ad market and thus depress ad prices (note that the test becomes even more absurd if ad revenues are held constant). In short, scholars fail to recognize that the consumer side of these markets is intrinsically related to the ad side. Each side affects the other in ways that prevent policymakers from using single-sided ad-load increases or quality decreases as an independent variable.
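The interdependence can be illustrated with a deliberately crude numerical sketch (the demand function, parameters, and numbers below are stylized assumptions of my own, not estimates for YouTube or any real platform):

```python
# Toy two-sided illustration: viewers dislike ads, so pushing the ad load above
# the platform's chosen level (while holding the per-ad price fixed) shrinks the
# audience and, past a point, total ad revenue. Purely hypothetical parameters.

def viewers(ad_load, base_audience=100.0, sensitivity=12.0):
    """Hypothetical viewer demand, decreasing in the ad load."""
    return max(base_audience - sensitivity * ad_load, 0.0)

def ad_revenue(ad_load, price_per_ad=1.0):
    """Revenue = ads per viewer x price per ad x number of viewers."""
    return ad_load * price_per_ad * viewers(ad_load)

if __name__ == "__main__":
    for load in [2, 3, 4, 5, 6]:
        print(f"ad load {load}: viewers {viewers(load):5.1f}, revenue {ad_revenue(load):6.1f}")
    # Revenue peaks at an interior ad load (~4.2 here); a mandated further
    # increase, as the SSNIC thought experiment contemplates, lowers revenue
    # because the viewer side contracts. The two sides cannot be varied
    # independently of one another.
```

The numbers are arbitrary; the only point is the one made above—because the consumer and advertiser sides are linked, an ad-load increase is not a clean analogue of a single-sided price increase.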
This leads to a second, more fundamental, flaw. To be analytically useful, these increased ad loads and quality deteriorations would have to be applied from the competitive baseline. Unfortunately, it is not obvious what this baseline looks like in two-sided markets.
Economic theory tells us that, in regular markets, goods are sold at marginal cost under perfect competition. However, there is no such shortcut in two-sided markets. As David Evans and Richard Schmalensee aptly summarize:
An increase in marginal cost on one side does not necessarily result in an increase in price on that side relative to price on the other. More generally, the relationship between price and cost is complex, and the simple formulas that have been derived for single-sided markets do not apply.
In other words, while economic theory suggests perfect competition among multi-sided platforms should result in zero economic profits, it does not say what the allocation of prices will look like in this scenario. There is thus no clearly defined competitive baseline upon which to apply increased ad loads or quality degradations. And this makes the SSNIC and SSNDQ tests unsuitable.
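Put formally (again in stylized notation of my own, assuming for simplicity that both sides transact in the same volume $Q$): single-sided perfect competition pins down a unique benchmark, $p = c$. The analogous zero-profit condition for a two-sided platform,

$$p_{U}\,Q + p_{A}\,Q - C(Q) \;=\; 0,$$

constrains only the combination of $p_{U}$ and $p_{A}$, leaving a continuum of admissible splits. There is accordingly no unique “competitive” ad load or quality level from which a 5 percent increase or degradation could be measured.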
In short, the theoretical foundations necessary to apply the equivalent of a SSNIP test on the “free” side of two-sided platforms are largely absent (or exceedingly hard to apply in practice). Calls to implement SSNIC and SSNDQ tests thus greatly overestimate the current state of the art, as well as decision-makers’ ability to solve intractable economic conundrums. The upshot is that, while proposals to apply the SSNIP test to attention markets may have the trappings of economic rigor, the resemblance is superficial. As things stand, these tests fail to ascertain whether given firms are in competition, and in what market.
The Bait and Switch: Qualitative Indicia
These problems with the new quantitative metrics likely explain why proponents of tougher enforcement in attention markets often fall back upon qualitative indicia to resolve market-definition issues. As John Newman writes:
Courts, including the U.S. Supreme Court, have long employed practical indicia as a flexible, workable means of defining relevant markets. This approach considers real-world factors: products’ functional characteristics, the presence or absence of substantial price differences between products, whether companies strategically consider and respond to each other’s competitive conduct, and evidence that industry participants or analysts themselves identify a grouping of activity as a discrete sphere of competition. …The SSNIC test may sometimes be massaged enough to work in attention markets, but practical indicia will often—perhaps usually—be the preferable method.
Unfortunately, far from resolving the problems associated with measuring market power in digital markets (and of defining relevant markets in antitrust proceedings), this proposed solution would merely focus investigations on subjective and discretionary factors.
This can be easily understood by looking at the FTC’s Facebook complaint regarding its purchases of WhatsApp and Instagram. The complaint argues that Facebook—a “social networking service,” in the eyes of the FTC—was not interchangeable with either mobile-messaging services or online-video services. To support this conclusion, it cites a series of superficial differences. For instance, the FTC argues that online-video services “are not used primarily to communicate with friends, family, and other personal connections,” while mobile-messaging services “do not feature a shared social space in which users can interact, and do not rely upon a social graph that supports users in making connections and sharing experiences with friends and family.”
This is a poor way to delineate relevant markets. It wrongly portrays competitive constraints as a binary question, rather than a matter of degree. Pointing to the functional differences that exist among rival services mostly fails to resolve this question of degree. It also likely explains why advocates of tougher enforcement have often decried the use of qualitative indicia when the shoe is on the other foot—e.g., when authorities concluded that Facebook did not, in fact, compete with Instagram because their services were functionally different.
A second, and related, problem with the use of qualitative indicia is that they are, almost by definition, arbitrary. Take two services that may or may not be competitors, such as Instagram and TikTok. The two share some similarities, as well as many differences. For instance, while both services enable users to share and engage with video content, they differ significantly in the way this content is displayed. Unfortunately, absent quantitative evidence, it is simply impossible to tell whether, and to what extent, the similarities outweigh the differences.
There is significant risk that qualitative indicia may lead to arbitrary enforcement, where markets are artificially narrowed by pointing to superficial differences among firms, and where competitive constraints are overemphasized by pointing to consumer switching.
The Way Forward
The difficulties discussed above should serve as a good reminder that market definition is but a means to an end.
As William Landes, Richard Posner, and Louis Kaplow have all observed (here and here), market definition is merely a proxy for market power, which in turn enables policymakers to infer whether consumer harm (the underlying question to be answered) is likely in a given case.
Given the difficulties inherent in properly defining markets, policymakers should redouble their efforts to precisely measure both potential barriers to entry (the obstacles that may lead to market power) and anticompetitive effects (the potentially undesirable effect of market power), under a case-by-case analysis that looks at both sides of a platform.
Unfortunately, this is not how the FTC has proceeded in recent cases. The FTC’s Facebook complaint, to cite but one example, merely assumes the existence of network effects (a potential barrier to entry) with no effort to quantify their magnitude. Likewise, the agency’s assessment of consumer harm is just two pages long and includes superficial conclusions that appear plucked from thin air:
The benefits to users of additional competition include some or all of the following: additional innovation … ; quality improvements … ; and/or consumer choice … . In addition, by monopolizing the U.S. market for personal social networking, Facebook also harmed, and continues to harm, competition for the sale of advertising in the United States.
Not one of these assertions is based on anything that could remotely be construed as empirical or even anecdotal evidence. Instead, the FTC’s claims are presented as self-evident. Given the difficulties surrounding market definition in digital markets, this superficial analysis of anticompetitive harm is simply untenable.
In short, discussions around attention markets emphasize the important role of case-by-case analysis underpinned by the consumer welfare standard. Indeed, the fact that some of antitrust enforcement’s usual benchmarks are unreliable in digital markets reinforces the conclusion that an empirically grounded analysis of barriers to entry and actual anticompetitive effects must remain the cornerstones of sound antitrust policy. Or, put differently, uncertainty surrounding certain aspects of a case is no excuse for arbitrary speculation. Instead, authorities must meet such uncertainty with an even more vigilant commitment to thoroughness.
The Senate Judiciary Committee is set to debate S. 2992, the American Innovation and Choice Online Act (or AICOA) during a markup session Thursday. If passed into law, the bill would force online platforms to treat rivals’ services as they would their own, while ensuring their platforms interoperate seamlessly.
The bill marks the culmination of misguided efforts to bring Big Tech to heel, regardless of the costs imposed upon consumers in the process. ICLE scholars have written about these developments in detail since the bill was introduced in October.
Below are 10 significant misconceptions that underpin the legislation.
1. There Is No Evidence that Self-Preferencing Is Generally Harmful
Self-preferencing is a normal part of how platforms operate, both to improve the value of their core products and to earn returns so that they have reason to continue investing in their development.
Platforms’ incentives are to maximize the value of their entire product ecosystem, which includes both the core platform and the services attached to it. Platforms that preference their own products frequently end up increasing the total market’s value by growing the share of users of a particular product. Those that preference inferior products end up hurting their attractiveness to users of their “core” product, exposing themselves to competition from rivals.
As Geoff Manne concludes, the notion that it is harmful (notably to innovation) when platforms enter into competition with edge providers is entirely speculative. Indeed, a range of studies show that the opposite is likely true. Platform competition is more complicated than simple theories of vertical discrimination would have it, and there is certainly no basis for a presumption of harm.
Consider a few examples from the empirical literature:
Li and Agarwal (2017) find that Facebook’s integration of Instagram led to a significant increase in user demand both for Instagram itself and for the entire category of photography apps. Instagram’s integration with Facebook increased consumer awareness of photography apps, which benefited independent developers, as well as Facebook.
Foerderer, et al. (2018) find that Google’s 2015 entry into the market for photography apps on Android created additional user attention and demand for such apps generally.
Cennamo, et al. (2018) find that video games offered by console firms often become blockbusters and expand the consoles’ installed base. As a result, these games increase the potential for all independent game developers to profit from their games, even in the face of competition from first-party games.
Finally, while Zhu and Liu (2018) is often held up as demonstrating harm from Amazon’s competition with third-party sellers on its platform, its findings are actually far from clear-cut. As co-author Feng Zhu noted in the Journal of Economics & Management Strategy: “[I]f Amazon’s entries attract more consumers, the expanded customer base could incentivize more third-party sellers to join the platform. As a result, the long-term effects for consumers of Amazon’s entry are not clear.”
2. Interoperability Is Not Costless
There are many things that could be interoperable, but aren’t. The reason not everything is interoperable is because interoperability comes with costs, as well as benefits. It may be worth letting different earbuds have different designs because, while it means we sacrifice easy interoperability, we gain the ability for better designs to be brought to market and for consumers to have choice among different kinds.
As Sam Bowman has observed, there are often costs that prevent interoperability from being worth the tradeoff, such as that:
It might be too costly to implement and/or maintain.
It might prescribe a certain product design and prevent experimentation and innovation.
It might add too much complexity and/or confusion for users, who may prefer not to have certain choices.
It might increase the risk of something not working, or of security breaches.
It might prevent certain pricing models that increase output.
It might compromise some element of the product or service that benefits specifically from not being interoperable.
In a market that is functioning reasonably well, we should be able to assume that competition and consumer choice will discover the desirable degree of interoperability among different products. If there are benefits to making your product interoperable that outweigh the costs of doing so, that should give you an advantage over competitors and allow you to compete them away. If the costs outweigh the benefits, the opposite will happen: consumers will choose products that are not interoperable.
In short, we cannot infer from the mere absence of interoperability that something is wrong, since we frequently observe that the costs of interoperability outweigh the benefits.
3. Consumers Often Prefer Closed Ecosystems
Digital markets could have taken a vast number of shapes. So why have they gravitated toward the very characteristics that authorities condemn? For instance, if market tipping and consumer lock-in are so problematic, why is it that new corners of the digital economy continue to emerge via closed platforms, as opposed to collaborative ones?
Indeed, if recent commentary is to be believed, it is the latter that should succeed, because they purportedly produce greater gains from trade. And if consumers and platforms cannot realize these gains by themselves, then we should see intermediaries step into that breach. But this does not seem to be happening in the digital economy.
The naïve answer is to say that the absence of “open” systems is precisely the problem. What’s harder is to try to actually understand why. As I have written, there are many reasons that consumers might prefer “closed” systems, even when they have to pay a premium for them.
Take the example of app stores. Maintaining some control over the apps that can access the store notably enables platforms to easily weed out bad players. Similarly, controlling the hardware resources that each app can use may greatly improve device performance. In other words, centralized platforms can eliminate negative externalities that “bad” apps impose on rival apps and on consumers. This is especially true when consumers struggle to attribute dips in performance to an individual app, rather than the overall platform.
It is also conceivable that consumers prefer to make many of their decisions at the inter-platform level, rather than within each platform. In simple terms, users arguably make their most important decision when they choose between an Apple or Android smartphone (or a Mac and a PC, etc.). In doing so, they can select their preferred app suite with one simple decision.
They might thus purchase an iPhone because they like the secure App Store, or an Android smartphone because they like the Chrome Browser and Google Search. Forcing too many “within-platform” choices upon users may undermine a product’s attractiveness. Indeed, it is difficult to create a high-quality reputation if each user’s experience is fundamentally different. In short, contrary to what antitrust authorities seem to believe, closed platforms might be giving most users exactly what they desire.
Too often, it is simply assumed that consumers benefit from more openness, and that shared/open platforms are the natural order of things. What some refer to as “market failures” may in fact be features that explain the rapid emergence of the digital economy. Ronald Coase said it best when he quipped that economists always find a monopoly explanation for things that they simply fail to understand.
4. Data Portability Can Undermine Security and Privacy
As explained above, platforms that are more tightly controlled can be regulated by the platform owner to avoid some of the risks present in more open platforms. Apple’s App Store, for example, is a relatively closed and curated platform, which gives users assurance that apps will meet a certain standard of security and trustworthiness.
Along similar lines, there are privacy issues that arise from data portability. Even a relatively simple requirement to make photos available for download can implicate third-party interests. Making a user’s photos more broadly available may tread upon the privacy interests of friends whose faces appear in those photos. Importing those photos to a new service potentially subjects those individuals to increased and un-bargained-for security risks.
As Sam Bowman and Geoff Manne observe, this is exactly what happened with Facebook and its Social Graph API v1.0, ultimately culminating in the Cambridge Analytica scandal. Because v1.0 of Facebook’s Social Graph API permitted developers to access information about a user’s friends without consent, it enabled third-party access to data about exponentially more users. It appears that some 270,000 users granted data access to Cambridge Analytica, from which the company was able to obtain information on 50 million Facebook users.
In short, there is often no simple solution to implement interoperability and data portability. Any such program—whether legally mandated or voluntarily adopted—will need to grapple with these and other tradeoffs.
5. Network Effects Are Rarely Insurmountable
Several scholars in recent years have called for more muscular antitrust intervention in networked industries on grounds that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in and raise entry barriers for potential rivals (see here, here, and here). But there are countless counterexamples where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.
Zoom is one of the most salient instances. As I wrote in April 2019 (a year before the COVID-19 pandemic):
To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.
Geoff Manne and Alec Stapp have put forward a multitude of other examples, including: the demise of Yahoo; the disruption of early instant-messaging applications and websites; and MySpace’s rapid decline. In all of these cases, outcomes did not match the predictions of theoretical models.
More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and powerful algorithm are the most likely explanations for its success.
While these developments certainly do not disprove network-effects theory, they eviscerate the belief, common in antitrust circles, that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. The question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet, this question is systematically omitted from most policy discussions.
6. Profits Facilitate New and Exciting Platforms
As I wrote in August 2020, the relatively closed model employed by several successful platforms (notably Apple’s App Store, Google’s Play Store, and the Amazon Retail Platform) allows previously unknown developers and retailers to expand rapidly because (i) users do not have to fear that their apps contain some form of malware and (ii) these platforms greatly reduce payment frictions, most notably security-related ones.
While these are, indeed, tremendous benefits, another important upside seems to have gone relatively unnoticed. The “closed” business model also gives firms significant incentives to develop new distribution mediums (smart TVs spring to mind) and to improve existing ones. In turn, this greatly expands the audience that software developers can reach. In short, developers get a smaller share of a much larger pie.
The economics of two-sided markets are enlightening here. For example, Apple and Google’s app stores are what Armstrong and Wright (here and here) refer to as “competitive bottlenecks.” That is, they compete aggressively (among themselves, and with other gaming platforms) to attract exclusive users. They can then charge developers a premium to access those users.
This dynamic gives firms significant incentive to continue to attract and retain new users. For instance, if Steve Jobs is to be believed, giving consumers better access to media such as eBooks, video, and games was one of the driving forces behind the launch of the iPad.
This model of innovation would be seriously undermined if developers and consumers could easily bypass platforms, as would likely be the case under the American Innovation and Choice Online Act.
7. Large Market Share Does Not Mean Anticompetitive Outcomes
Scholars routinely cite the putatively strong concentration of digital markets to argue that Big Tech firms do not face strong competition. But this is a non sequitur. Indeed, as economists like Joseph Bertrand and William Baumol have shown, what matters is not whether markets are concentrated, but whether they are contestable. If a superior rival can rapidly gain user traction, that alone will discipline incumbents’ behavior.
Markets where incumbents do not face significant entry from competitors are just as consistent with vigorous competition as they are with barriers to entry. Rivals could decline to enter either because incumbents have aggressively improved their product offerings or because they are shielded by barriers to entry (as critics suppose). The former is consistent with competition, the latter with monopoly slack.
Similarly, it would be wrong to presume, as many do, that concentration in online markets is necessarily driven by network effects and other scale-related economies. As ICLE scholars have argued elsewhere (here, here and here), these forces are not nearly as decisive as critics assume (and it is debatable that they constitute barriers to entry).
Finally, and perhaps most importantly, many factors could explain the relatively concentrated market structures that we see in digital industries. The absence of switching costs and of capacity constraints are two such examples. These explanations, overlooked by many observers, suggest digital markets are more contestable than is commonly perceived.
Unfortunately, critics’ failure to meaningfully grapple with these issues serves to shape the “conventional wisdom” in tech-policy debates.
8. Vertical Integration Generally Benefits Consumers
Vertical behavior of digital firms—whether through mergers or through contract and unilateral action—frequently arouses the ire of critics of the current antitrust regime. Many such critics point to a few recent studies that cast doubt on the ubiquity of benefits from vertical integration. But the findings of these few studies are regularly overstated and, even if taken at face value, represent just a minuscule fraction of the collected evidence, which overwhelmingly supports vertical integration.
There is strong and longstanding empirical evidence that vertical integration is competitively benign. This includes widely acclaimed work by economists Francine Lafontaine (former director of the Federal Trade Commission’s Bureau of Economics under President Barack Obama) and Margaret Slade, whose meta-analysis led them to conclude:
[U]nder most circumstances, profit-maximizing vertical integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view. Although there are isolated studies that contradict this claim, the vast majority support it. Moreover, even in industries that are highly concentrated so that horizontal considerations assume substantial importance, the net effect of vertical integration appears to be positive in many instances. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked.
In short, there is a substantial body of both empirical and theoretical research showing that vertical integration (and the potential vertical discrimination and exclusion to which it might give rise) is generally beneficial to consumers. While it is possible that vertical mergers or discrimination could sometimes cause harm, the onus is on the critics to demonstrate empirically where this occurs. No legitimate interpretation of the available literature would offer a basis for imposing a presumption against such behavior.
9. There Is No Such Thing as Data Network Effects
Although data does not have the self-reinforcing characteristics of network effects, there is a sense that acquiring a certain amount of data and expertise is necessary to compete in data-heavy industries. It is (or should be) equally apparent, however, that this “learning by doing” advantage rapidly reaches a point of diminishing returns.
This is supported by significant empirical evidence. As shown by the survey of the empirical literature that Geoff Manne and I performed (published in the George Mason Law Review), data generally entails diminishing marginal returns:
Critics who argue that firms such as Amazon, Google, and Facebook are successful because of their superior access to data might, in fact, have the causality in reverse. Arguably, it is because these firms have come up with successful industry-defining paradigms that they have amassed so much data, and not the other way around. Indeed, Facebook managed to build a highly successful platform despite a large data disadvantage when compared to rivals like MySpace.
Companies need to innovate to attract consumer data or else consumers will switch to competitors, including both new entrants and established incumbents. As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results. The continued explosion of new products, services, and apps is evidence that data is not a bottleneck to competition, but a spur to drive it.
10. Antitrust Enforcement Has Not Been Lax
The popular narrative has it that lax antitrust enforcement has led to substantially increased concentration, strangling the economy, harming workers, and expanding dominant firms’ profit margins at the expense of consumers. Much of the contemporary dissatisfaction with antitrust arises from a suspicion that overly lax enforcement of existing laws has led to record levels of concentration and a concomitant decline in competition. But both beliefs—lax enforcement and increased anticompetitive concentration—wither under more than cursory scrutiny.
The number of Sherman Act cases brought by the federal antitrust agencies, meanwhile, has been relatively stable in recent years, but several recent blockbuster cases have been brought by the agencies and private litigants, and there has been no shortage of federal and state investigations. The vast majority of Section 2 cases dismissed on the basis of the plaintiff’s failure to show anticompetitive effect were brought by private plaintiffs pursuing treble damages; given the incentives to bring weak cases, it cannot be inferred from such outcomes that antitrust law is ineffective. But, in any case, it is highly misleading to count the number of antitrust cases and, using that number alone, to draw conclusions about how effective antitrust law is. Firms act in the shadow of the law and deploy significant legal resources to make sure they avoid activity that would lead to enforcement actions. Thus, any given number of cases brought could be just as consistent with a well-functioning enforcement regime as with an ill-functioning one.
The upshot is that naïvely counting antitrust cases (or the purported lack thereof), with little regard for the behavior that is deterred or the merits of the cases that are dismissed, does not tell us whether antitrust enforcement levels are optimal.
Intermediaries may not be the consumer welfare hero we want, but more often than not, they are one that we need.
In policy discussions about the digital economy, a background assumption that frequently underlies the discourse is that intermediaries and centralization always and only serve as a cost to consumers, and to society more generally. Thus, one commonly sees arguments that consumers would be better off if they could freely combine products from different trading partners. According to this logic, bundled goods, walled gardens, and other intermediaries are always to be regarded with suspicion, while interoperability, open source, and decentralization are laudable features of any market.
However, as with all economic goods, intermediation offers both costs and benefits. The challenge for market players is to assess these tradeoffs and, ultimately, to produce the optimal level of intermediation.
As one example, some observers assume that purchasing food directly from a producer benefits consumers because intermediaries no longer take a cut of the final purchase price. But this overlooks the tremendous efficiencies supermarkets can achieve in terms of cost savings, reduced carbon emissions (because consumers make fewer store trips), and other benefits that often outweigh the costs of intermediation.
The same anti-intermediary fallacy is plain to see in countless other markets. For instance, critics readily assume that insurance, mortgage, and travel brokers are just costly middlemen.
This unduly negative perception is perhaps even more salient in the digital world. Policymakers are quick to conclude that consumers are always better off when provided with “more choice.” Draft regulations of digital platforms have been introduced on both sides of the Atlantic that repeat this faulty argument ad nauseam, as do some antitrust decisions.
Even the venerable Tyler Cowen recently appeared to sing the praises of decentralization, when discussing the future of Web 3.0:
One person may think “I like the DeFi options at Uniswap,” while another may say, “I am going to use the prediction markets over at Hedgehog.” In this scenario there is relatively little intermediation and heavy competition for consumer attention. Thus most of the gains from competition accrue to the users. …
… I don’t know if people are up to all this work (or is it fun?). But in my view this is the best-case scenario — and the most technologically ambitious. Interestingly, crypto’s radical ability to disintermediate, if extended to its logical conclusion, could bring about a radical equalization of power that would lower the prices and values of the currently well-established crypto assets, companies and platforms.
While disintermediation certainly has its benefits, critics often gloss over its costs. For example, scams are practically nonexistent on Apple’s “centralized” App Store but are far more prevalent with Web3 services. Apple’s “power” to weed out nefarious actors certainly contributes to this difference. Similarly, there is a reason that “middlemen” like supermarkets and travel agents exist in the first place. They notably perform several complex tasks (e.g., searching for products, negotiating prices, and controlling quality) that leave consumers with a manageable selection of goods.
Returning to the crypto example, besides being a renowned scholar, Tyler Cowen is also an extremely savvy investor. What he sees as fun investment choices may be nightmarish (and potentially dangerous) decisions for less sophisticated consumers. The upshot is that intermediaries are far more valuable than they are usually given credit for.
Bringing People Together
The reason intermediaries (including online platforms) exist is to reduce transaction costs that suppliers and customers would face if they tried to do business directly. As Daniel F. Spulber argues convincingly:
Markets have two main modes of organization: decentralized and centralized. In a decentralized market, buyers and sellers match with each other and determine transaction prices. In a centralized market, firms act as intermediaries between buyers and sellers.
[W]hen there are many buyers and sellers, there can be substantial transaction costs associated with communication, search, bargaining, and contracting. Such transaction costs can make it more difficult to achieve cross-market coordination through direct communication. Intermediary firms have various means of reducing transaction costs of decentralized coordination when there are many buyers and sellers.
This echoes the findings of Nobel laureate Ronald Coase, who observed that firms emerge when they offer a cheaper alternative to multiple bilateral transactions:
The main reason why it is profitable to establish a firm would seem to be that there is a cost of using the price mechanism. The most obvious cost of “organising” production through the price mechanism is that of discovering what the relevant prices are. […] The costs of negotiating and concluding a separate contract for each exchange transaction which takes place on a market must also be taken into account.
Economists generally agree that online platforms also serve this cost-reduction function. For instance, David Evans and Richard Schmalensee observe that:
Multi-sided platforms create value by bringing two or more different types of economic agents together and facilitating interactions between them that make all agents better off.
It’s easy to see the implications for today’s competition-policy debates, and for the online intermediaries that many critics would like to see decentralized. Particularly salient examples include app store platforms (such as the Apple App Store and the Google Play Store); online retail platforms (such as Amazon Marketplace); and online travel agents (like Booking.com and Expedia). Competition policymakers have embarked on countless ventures to “open up” these platforms to competition, essentially moving them further toward disintermediation. In most of these cases, however, policymakers appear to be fighting these businesses’ very raison d’être.
For example, the purpose of an app store is to curate the software that users can install and to offer payment solutions; in exchange, the store receives a cut of the proceeds. If performing these tasks created no value, then, to a first approximation, these services would not exist. Users would simply download apps via their web browsers, and the most successful smartphones would be those that allowed users to directly install apps (“sideloading,” to use the more technical term). Forcing these platforms to “open up” and become neutral is antithetical to the value proposition they offer.
Calls for retail and travel platforms to stop offering house brands or displaying certain products more favorably are equally paradoxical. Consumers turn to these platforms because they want a selection of goods. If that was not the case, users could simply bypass the platforms and purchase directly from independent retailers or hotels. Critics sometimes retort that some commercial arrangements, such as “most favored nation” clauses, discourage consumers from doing exactly this. But that claim only reinforces the point that online platforms must create significant value, or they would not be able to obtain such arrangements in the first place.
All of this explains why characterizing these firms as imposing a “tax” on their respective ecosystems is so deeply misleading. The implication is that platforms are merely passive rent extractors that create no value. Yet, barring some market failure, both their existence and their success are proof to the contrary. To argue otherwise places no faith in the ability of firms and consumers to act in their own self-interest.
A Little Evolution
This last point is even more salient when seen from an evolutionary standpoint. Today’s most successful intermediaries—be they online platforms or more traditional brick-and-mortar firms like supermarkets—mostly had to outcompete the alternative represented by disintermediated bilateral contracts.
Critics of intermediaries rarely contemplate why the app-store model outpaced the more heavily disintermediated software distribution of the desktop era. Or why hotel-booking sites exist, despite consumers’ ability to use search engines, hotel websites, and other product-search methods that offer unadulterated product selections. Or why mortgage brokers are so common when borrowers can call local banks directly. The list is endless.
Digital markets could have taken a vast number of shapes, so why have they systematically gravitated toward the very characteristics that authorities condemn? For instance, if market tipping and consumer lock-in are so problematic, why is it that new corners of the digital economy continue to emerge via closed platforms, as opposed to collaborative ones? Indeed, if recent commentary is to be believed, it is the latter that should succeed, because they purportedly produce greater gains from trade. And if consumers and platforms cannot realize these gains by themselves, then we should see [other] intermediaries step into the breach (i.e., arbitrage). This does not seem to be happening in the digital economy. The naïve answer is to say that this is precisely the problem; the harder task is to actually understand why.
Fiat Versus Emergent Disintermediation
All of this is not to say that intermediaries are perfect, or that centralization always beats decentralization. Instead, the critical point is about the competitive process. There are vast differences between centralization that stems from government fiat and that which emerges organically.
(Dis)intermediation is an economic good. Markets thus play a critical role in deciding how much or how little of it is provided. Intermediaries must charge fees that cover their costs, while bilateral contracts entail transaction costs. In typically Hayekian fashion, suppliers and buyers will weigh the costs and benefits of these options.
Intermediaries are most likely to emerge in markets prone to excessive transaction costs, and competitive processes ensure that only valuable intermediaries survive. Accordingly, there is no guarantee that government-mandated disintermediation would generate net benefits in any given case.
Of course, the market does not always work perfectly. Sometimes, market failures give rise to excessive (or insufficient) centralization. And policymakers should certainly be attentive to these potential problems and address them on a case-by-case basis. But there is little reason to believe that today’s most successful intermediaries are the result of market failures, and it is thus critical that policymakers do not undermine the valuable role they perform.
For example, few believe that supermarkets exist merely because government failures (such as excessive regulation) or market failures (such as monopolization) prevent the emergence of smaller rivals. Likewise, the app-store model is widely perceived as an improvement over previous software platforms; few consumers appear favorably disposed toward its replacement with sideloading of apps (for example, few Android users choose to sideload apps rather than purchase them via the Google Play Store). In fact, markets appear to be moving in the opposite direction: even traditional software platforms such as Windows OS increasingly rely on closed stores to distribute software on their platforms.
More broadly, this same reasoning can (and has) been applied to other social institutions, such as the modern family. For example, the late Steven Horwitz observed that family structures have evolved in order to adapt to changing economic circumstances. Crucially, this process is driven by the same cost-benefit tradeoff that we see in markets. In both cases, agents effectively decide which functions are better performed within a given social structure, and which ones are more efficiently completed outside of it.
Returning to Tyler Cowen’s point about the future of Web3, the case can be made that whatever level of centralization ultimately emerges is most likely the best-case scenario. Sure, there may be some market failures and suboptimal outcomes along the way, but they ultimately pale in comparison to the most pervasive force: namely, economic agents’ ability to act in what they perceive to be their best interest. To put it differently, if Web3 spontaneously becomes as centralized as Web 2.0 has been, that would be a testament to the tremendous role that intermediaries play throughout the economy.
Antitrust policymakers around the world have taken a page out of the Silicon Valley playbook and decided to “move fast and break things.” While the slogan is certainly catchy, applying it to the policymaking world is unfortunate and, ultimately, threatens to harm consumers.
Several antitrust authorities in recent months have announced their intention to block (or, at least, challenge) a spate of mergers that, under normal circumstances, would warrant only limited scrutiny and face little prospect of outright prohibition. This is notably the case for several vertical mergers, as well as for mergers between firms that are only potential competitors (sometimes framed as “killer acquisitions”). These include Facebook’s acquisition of Giphy (U.K.), Nvidia’s ARM Ltd. deal (U.S., EU, and U.K.), and Illumina’s purchase of GRAIL (EU). It is also the case for horizontal mergers in non-concentrated markets, such as WarnerMedia’s proposed merger with Discovery, which has faced significant political backlash.
Some of these deals fail even to implicate “traditional” merger-notification thresholds. Facebook’s purchase of Giphy was only notifiable because of the U.K. Competition and Markets Authority’s broad interpretation of its “share of supply test” (which eschews traditional revenue thresholds). Likewise, the European Commission relied on a highly controversial interpretation of the so-called “Article 22 referral” procedure in order to review Illumina’s GRAIL purchase.
Some have praised these interventions, claiming antitrust authorities should take their chances and prosecute high-profile deals. It certainly appears that authorities are pressing their luck because they face few penalties for wrongful prosecutions. Overly aggressive merger enforcement might even reinforce their bargaining position in subsequent cases. In other words, enforcers risk imposing social costs on firms and consumers because their incentives to prosecute mergers are not aligned with those of society as a whole.
None of this should come as a surprise to anyone who has been following this space. As my ICLE colleagues and I have been arguing for quite a while, weakening the guardrails that surround merger-review proceedings opens the door to arbitrary interventions that are difficult (though certainly not impossible) to remediate before courts.
A Simplified Model of Legal Disputes
The negotiations that surround merger-review proceedings involve firms and authorities bargaining in the shadow of potential litigation. Whether and which concessions are made will depend chiefly on what the parties believe will be the outcome of litigation. If firms think courts will safeguard their merger, they will offer authorities few potential remedies. Conversely, if authorities believe courts will support their decision to block a merger, they are unlikely to accept concessions that stop short of the parties withdrawing their deal.
This simplified model suggests that neither enforcers nor merging parties are in a position to “exploit” the merger-review process, so long as courts review decisions effectively. Under this model, overly aggressive enforcement would merely lead to defeat in court (and, expecting this, merging parties would offer few concessions to authorities).
Put differently, court proceedings are both a dispute-resolution mechanism and a source of rulemaking. The result is that only marginal cases should lead to actual disputes. Most harmful mergers will be deterred, and clearly beneficial ones will be cleared rapidly. So long as courts apply the consumer welfare standard consistently, firms’ merger decisions—along with any rulings or remedies—all should primarily serve consumers’ interests.
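To make this bargaining logic concrete, here is a minimal sketch in Python of the kind of calculation a merging firm might run. The model and all of its numbers (deal value, legal fees, delay losses, win probabilities) are my own illustrative assumptions, not figures drawn from any actual case.

```python
def max_acceptable_remedy(deal_value, p_win, litigation_cost, delay_loss):
    """Largest remedy cost a merging firm should accept rather than litigate.

    Litigating yields the deal's value with probability p_win, minus legal
    fees and the value destroyed by delay; settling yields the deal's value
    minus the cost of the remedies offered to the authority.
    """
    expected_litigation_payoff = p_win * deal_value - litigation_cost - delay_loss
    return deal_value - expected_litigation_payoff

# Hypothetical figures, in millions: a 1,000 deal, 20 in legal fees,
# and 100 of deal value destroyed by the delay of litigating.
for p_win in (0.9, 0.5, 0.1):
    print(p_win, max_acceptable_remedy(1_000, p_win, litigation_cost=20, delay_loss=100))
# 0.9 -> 220 ; 0.5 -> 620 ; 0.1 -> 1020
# The stronger a firm expects to be in court, the fewer concessions it
# offers; a result above the deal's value means the firm would sooner
# abandon the transaction than fight for it.
```

The delay_loss term is what the next few paragraphs are about: the slower the courts, the larger it becomes, and the larger the concessions authorities can extract.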
At least, that is the theory. But there are factors that can serve to undermine this efficient outcome. In the field of merger control, this is notably the case with court delays that prevent parties from effectively challenging merger decisions.
While delays between when a legal claim is filed and a judgment is rendered aren’t always detrimental (as Richard Posner observes, speed can be costly), it is essential that these delays be accounted for in any subsequent damages and penalties. Parties that prevail in court might otherwise only obtain reparations that are below the market rate, reducing the incentive to seek judicial review in the first place.
The problem is particularly acute when it comes to merger reviews. Merger challenges might lead the parties to abandon a deal because they estimate the transaction will no longer be commercially viable by the time courts have decided the matter. This is a problem, insofar as neither U.S. nor EU antitrust law generally requires authorities to compensate parties for wrongful merger decisions. For example, courts in the EU have declined to fully compensate aggrieved companies (e.g., the CFI in Schneider) and have set an exceedingly high bar for such claims to succeed at all.
In short, parties have little incentive to challenge merger decisions if the only positive outcome is for their deals to be posthumously sanctified. This diminished incentive to litigate may mean that too few cases are brought to generate precedent that could help future merging firms. Ultimately, the balance of bargaining power is tilted in favor of competition authorities.
Some Data on Mergers
While not necessarily dispositive, there is qualitative evidence to suggest that parties often drop their deals when authorities either block them (as in the EU) or challenge them in court (in the United States).
U.S. merging parties nearly always either reach a settlement or scrap their deal when their merger is challenged. There were 43 transactions challenged by either the U.S. Justice Department (15) or the Federal Trade Commission (28) in 2020. Of these, 15 were abandoned and almost all the remaining cases led to settlements.
The EU picture is similar. The European Commission blocks, on average, about one merger every year (30 over the last 31 years). Most in-depth investigations are settled in exchange for remedies offered by the merging firms (141 out of 239). While the EU does not publish detailed statistics concerning abandoned mergers, it is rare for firms to appeal merger-prohibition decisions. The European Court of Justice’s database lists only six such appeals over a similar timespan. The vast majority of blocked mergers are scrapped, with the parties declining to appeal.
This proclivity to abandon mergers is surprising, given firms’ high success rate in court. Of the six merger-annulment appeals in the ECJ’s database (CK Hutchison Holdings Ltd.’s acquisition of Telefónica Europe Plc; Ryanair’s acquisition of a controlling stake in Aer Lingus; a proposed merger between Deutsche Börse and NYSE Euronext; Tetra Laval’s takeover of Sidel Group; a merger between Schneider Electric SA and Legrand SA; and Airtours’ acquisition of First Choice), the merging firms won four. While precise numbers are harder to come by in the United States, it is also reportedly rare for U.S. antitrust enforcers to win merger-challenge cases.
One explanation is that only marginal cases ever make it to court. In other words, firms with weak cases are, all else being equal, less likely to litigate. However, that is unlikely to explain all abandoned deals.
There are documented cases in which it was clearly delays, rather than self-selection, that caused firms to scrap planned mergers. In the EU’s Airtours proceedings, the merging parties dropped their transaction even though they went on to prevail in court (and First Choice, the target firm, was acquired by another rival). This is inconsistent with the notion that proposed mergers are abandoned only when the parties have a weak case to challenge (the Commission’s decision was widely seen as controversial).
Antitrust policymakers also generally acknowledge that mergers are often time-sensitive. That’s why merger rules on both sides of the Atlantic tend to impose strict timelines within which antitrust authorities must review deals.
In the end, if self-selection based on case strength were the only criterion merging firms used in deciding whether to appeal a merger challenge, one would not expect an equilibrium in which firms prevail in more than two-thirds of cases. If firms anticipated that a successful court case would preserve a multi-billion-dollar merger, the relatively small burden of legal fees should not dissuade them from litigating, even if their chances of success were tiny. We would expect to see more firms losing in court.
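A back-of-the-envelope calculation makes the same point; the figures below are purely hypothetical, chosen only to show how small the break-even probability of success is for any large deal.

```python
def break_even_win_probability(deal_value, legal_fees):
    """Win probability at which litigating a challenge just breaks even:
    the expected gain (p * deal_value) equals the legal fees."""
    return legal_fees / deal_value

# Hypothetical: a $5 billion merger defended at a cost of $20 million.
print(break_even_win_probability(5_000_000_000, 20_000_000))  # 0.004, i.e., 0.4%
```

If legal fees were the only cost of fighting, firms would rationally litigate even near-hopeless challenges, and enforcers would win most litigated cases. That we observe the opposite suggests delay, not fees, is doing the work.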
The upshot is that antitrust challenges and prohibition decisions likely cause at least some firms to abandon their deals because court proceedings are not seen as an effective remedy. This perception, in turn, reinforces authorities’ bargaining position and thus encourages firms to offer excessive remedies in hopes of staving off lengthy litigation.
Conclusion
A general rule of policymaking is that rules should seek to ensure that agents internalize both the positive and negative effects of their decisions. This, in turn, should ensure that they behave efficiently.
In the field of merger control, those incentives are misaligned. Given the prevailing political climate on both sides of the Atlantic, challenging large corporate acquisitions likely generates important political capital for antitrust authorities. But wrongful merger prohibitions are unlikely to elicit the kinds of judicial rebukes that would compel authorities to proceed more carefully.
Put differently, in the field of antitrust law, court proceedings ought to serve as a guardrail to ensure that enforcement decisions ultimately benefit consumers. When that shield is removed, it is no longer a given that authorities—who, in theory, act as agents of society—will act in the best interests of that society, rather than maximize their own preferences.
Ideally, we should ensure that antitrust authorities bear the social costs of faulty decisions, by compensating, at least, the direct victims of their actions (i.e., the merging firms). However, this would likely require new legislation to that effect, as there currently are too many obstacles to such cases. It is thus unlikely to represent a short-term solution.
In the meantime, regulatory restraint appears to be the only realistic solution. Or, one might say, authorities should “move carefully and avoid breaking stuff.”
The European Commission and its supporters were quick to claim victory following last week’s long-awaited General Court of the European Union ruling in the Google Shopping case. It’s hard to fault them. The judgment is ostensibly an unmitigated win for the Commission, with the court upholding nearly every aspect of its decision.
However, the broader picture is much less rosy for both the Commission and the plaintiffs. The General Court’s ruling notably provides strong support for maintaining the current remedy package, in which rivals can bid for shopping box placement. This makes the Commission’s earlier rejection of essentially the same remedy in 2014 look increasingly frivolous. It also pours cold water on rivals’ hopes that it might be replaced with something more far-reaching.
More fundamentally, the online world continues to move further from the idealistic conception of an “open internet” that regulators remain determined to foist on consumers. Indeed, users consistently choose convenience over openness, thus rejecting the vision of online markets upon which both the Commission’s decision and the General Court’s ruling are premised.
The Google Shopping case will ultimately prove to be both a pyrrhic victory and a monument to the pitfalls of myopic intervention in digital markets.
Google’s big remedy win
The main point of law addressed in the Google Shopping ruling concerns the distinction between self-preferencing and refusals to deal. Contrary to Google’s defense, the court ruled that self-preferencing can constitute a standalone abuse of Article 102 of the Treaty on the Functioning of the European Union (TFEU). The Commission was thus free to dispense with the stringent conditions laid out in the 1998 Bronner ruling.
This undoubtedly represents an important victory for the Commission, as it will enable the agency to launch new proceedings against both Google and other online platforms. However, the ruling will also constrain the Commission’s available remedies, and rightly so.
The origins of the Google Shopping decision are enlightening. Several rivals sought improved access to the top of the Google Search page. The Commission was receptive to those calls, but faced important legal constraints. The natural solution would have been to frame its case as a refusal to deal, which would call for a remedy in which a dominant firm grants rivals access to its infrastructure (be it physical or virtual). But going down this path would notably have required the Commission to show that effective access was “indispensable” for rivals to compete (one of the so-called Bronner conditions)—something that was most likely not the case here.
Sensing these difficulties, the Commission framed its case in terms of self-preferencing, surmising that this would entail a much softer legal test. The General Court’s ruling vindicates this assessment (at least barring a successful appeal by Google):
240 It must therefore be concluded that the Commission was not required to establish that the conditions set out in the judgment of 26 November 1998, Bronner (C‑7/97, EU:C:1998:569), were satisfied […]. [T]he practices at issue are an independent form of leveraging abuse which involve […] ‘active’ behaviour in the form of positive acts of discrimination in the treatment of the results of Google’s comparison shopping service, which are promoted within its general results pages, and the results of competing comparison shopping services, which are prone to being demoted.
This more expedient approach, however, entails significant limits that will undercut both the Commission and rivals’ future attempts to extract more far-reaching remedies from Google.
Because the underlying harm is no longer the denial of access, but rather that rivals are treated less favorably, the available remedies are much narrower. Google must merely ensure that it does not treat itself more favorably than rivals, regardless of whether those rivals ultimately access its infrastructure and manage to compete. The General Court says as much when it explains the theory of harm in the case at hand:
287. Conversely, even if the results from competing comparison shopping services would be particularly relevant for the internet user, they can never receive the same treatment as results from Google’s comparison shopping service, whether in terms of their positioning, since, owing to their inherent characteristics, they are prone to being demoted by the adjustment algorithms and the boxes are reserved for results from Google’s comparison shopping service, or in terms of their display, since rich characters and images are also reserved to Google’s comparison shopping service. […] they can never be shown in as visible and as eye-catching a way as the results displayed in Product Universals.
Regulation 1/2003 (Art. 7.1) ensures the European Commission can only impose remedies that are “proportionate to the infringement committed and necessary to bring the infringement effectively to an end.” This has obvious ramifications for the Google Shopping remedy.
Under the remedy accepted by the Commission, Google agreed to auction off access to the Google Shopping box. Google and rivals would thus compete on equal footing to display comparison shopping results.
Rivals and their consultants decried this outcome, and Margrethe Vestager intimated that the Commission might review the remedy package. Both camps essentially argued that the remedy did not meaningfully boost traffic to rival comparison shopping services (CSSs), because those services were not winning the best auction slots:
All comparison shopping services other than Google’s are hidden in plain sight, on a tab behind Google’s default comparison shopping page. Traffic cannot get to them, but instead goes to Google and on to merchants. As a result, traffic to comparison shopping services has fallen since the remedy—worsening the original abuse.
Or, as Margrethe Vestager put it:
We may see a show of rivals in the shopping box. We may see a pickup when it comes to clicks for merchants. But we still do not see much traffic for viable competitors when it comes to shopping comparison
But these arguments are entirely beside the point. If the infringement had been framed as a refusal to supply, it might be relevant that rivals cannot access the shopping box at what is, for them, a cost-effective price. Because the infringement was framed in terms of self-preferencing, all that matters is whether Google treats its own service and rivals’ services equally.
I am not aware of a credible claim that this is not the case. At best, critics have suggested the auction mechanism favors Google because it essentially pays itself:
The auction mechanism operated by Google to determine the price paid for PLA clicks also disproportionately benefits Google. CSSs are discriminated against per clickthrough, as they are forced to cede most of their profit margin in order to successfully bid […] Google, contrary to rival CSSs, does not, in reality, have to incur the auction costs and bid away a great part of its profit margins.
But this reasoning completely omits Google’s opportunity costs. Imagine a hypothetical (and oversimplified) setting in which retailers are willing to pay Google or rival CSSs 13 euros per click-through. Imagine further that rival CSSs can serve these clicks at a cost of 2 euros, compared to 3 euros for Google (excluding the auction fee); Google is the less efficient provider in this hypothetical. In this setting, rivals should be willing to bid up to 11 euros per click (the difference between what they expect to earn and their serving costs). Critics claim Google will be willing to bid even higher because the money it pays itself during the auction is not really a cost (it ultimately flows back into Google’s pockets). That is clearly false.
To understand why, readers need only consider Google’s point of view. On the one hand, it could bid 11 euros (plus some tiny increment) to win the auction and keep the slot for itself. Because the auction payment is money Google pays itself, its net revenue per click-through would be 10 euros (the 13 euros retailers pay, minus Google’s 3-euro serving cost). On the other hand, it could bid just below the rivals’ 11-euro valuation, let them win the slot, and collect roughly 11 euros per click-through in auction revenue. When critics argue that Google has an advantage because it pays itself, they are ultimately claiming that 10 is larger than 11.
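The arithmetic can be laid out explicitly. The sketch below simply recomputes the hypothetical example above in Python; the euro figures are the illustrative ones from the example, not actual auction data.

```python
# Hypothetical figures from the example above (not real auction data).
value_per_click = 13.0   # what retailers will pay per click-through
google_cost = 3.0        # Google's cost of serving a click
rival_cost = 2.0         # a rival CSS's cost of serving a click

# Rivals' maximum bid: what they earn per click minus their serving cost.
rival_max_bid = value_per_click - rival_cost            # 11 euros

# Option 1: Google outbids rivals and keeps the slot. The auction payment
# is a wash (Google pays itself), so its payoff is the click value minus
# its own serving cost.
payoff_if_google_wins = value_per_click - google_cost   # 10 euros

# Option 2: Google lets the more efficient rival win at (roughly) the
# rival's maximum bid, and pockets the auction revenue instead.
payoff_if_rival_wins = rival_max_bid                    # 11 euros

print(payoff_if_google_wins, payoff_if_rival_wins)
# 10.0 11.0 -> the "Google pays itself" objection amounts to claiming 10 > 11.
```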
Google’s remedy could hardly be more neutral. If it wins more auction slots than rival CSSs, the appropriate inference should be that it is simply more efficient. Nothing in the Commission’s decision or the General Court’s ruling precludes that outcome. In short, while Google has (for the time being, at least) lost its battle to appeal the Commission’s decision, the remedy package—the same one it put forward back in 2014—has never looked stronger.
Good news for whom?
The above is mostly good news for both Google and consumers, who will be relieved that the General Court’s ruling preserves Google’s ability to show specialized boxes (of which the shopping unit is but one example). But that should not mask the tremendous downsides of both the Commission’s case and the court’s ruling.
The Commission’s and rivals’ misapprehensions surrounding the Google Shopping remedy, as well as the General Court’s strong stance against self-preferencing, reveal a broader misunderstanding about online markets that also permeates other digital-regulation initiatives, such as the Digital Markets Act and the American Choice and Innovation Online Act.
Policymakers wrongly imply that platform neutrality is a good in and of itself. They assume that incumbent platforms generally have an incentive to favor their own services, and that preventing them from doing so is beneficial to both rivals and consumers. Yet neither of these statements is correct.
Economic research suggests self-preferencing is only harmful in exceptional circumstances. That is true of the traditional literature on platform threats (here and here), where harm is premised on the notion that rivals will use the downstream market, ultimately, to compete with an upstream incumbent. It is also true of more recent scholarship that compares dual-mode platforms to pure marketplaces and resellers, where harm hinges on a platform being able to immediately imitate rivals’ offerings. And even this ignores the significant efficiencies that might simultaneously arise from self-preferencing and, more broadly, from closed platforms. In short, rules that categorically prohibit self-preferencing by dominant platforms overshoot the mark, and the General Court’s Google Shopping ruling is a troubling development in that regard.
It is also naïve to think that prohibiting self-preferencing will automatically benefit rivals and consumers (as opposed to harming the latter and leaving the former no better off). If self-preferencing is not anticompetitive, then propping up inefficient firms will at best be a futile exercise in preserving failing businesses. At worst, it would impose significant burdens on consumers by destroying valuable synergies between the platform and its own downstream service.
Finally, if the past years teach us anything about online markets, it is that consumers place a much heavier premium on frictionless user interfaces than on open platforms. TikTok is arguably a much more “closed” experience than other sources of online entertainment, like YouTube or Reddit (i.e. users have less direct control over their experience). Yet many observers have pinned its success, among other things, on its highly intuitive and simple interface. The emergence of Vinted, a European pre-owned goods platform, is another example of competition through a frictionless user experience.
There is a significant risk that, by seeking to boost “choice,” intervention by competition regulators against self-preferencing will ultimately remove one of the benefits users value most. By increasing the amount of information users must process, non-discrimination remedies risk merely adding pain points to the underlying purchasing process. In short, while Google Shopping is nominally a victory for the Commission and rivals, it is also a testament to the futility and harmfulness of myopic competition intervention in digital markets. Consumer preferences cannot be changed by government fiat, nor can the fact that certain firms are more efficient than others (at least, not without creating significant harm in the process). It is time this simple conclusion made its way into European competition thinking.
Why do digital industries routinely lead to one company having a very large share of the market (at least if one defines markets narrowly)? To anyone familiar with competition-policy discussions, the answer might seem obvious: network effects, scale-related economies, and other barriers to entry lead to winner-take-all dynamics in platform industries. Accordingly, it is believed that the first platform to successfully unlock a given online market enjoys a decisive first-mover advantage.
This narrative has become ubiquitous in policymaking circles. Thinking of this sort notably underpins high-profile reports on competition in digital markets (here, here, and here), as well as ensuing attempts to regulate digital platforms, such as the draft American Innovation and Choice Online Act and the EU’s Digital Markets Act.
But are network effects and the like the only way to explain why these markets look the way they do? While there is no definitive answer, scholars routinely overlook an alternative explanation that tends to undercut the narrative that tech markets have become non-contestable.
The alternative model is simple: faced with zero prices and the almost complete absence of switching costs, users have every reason to join their preferred platform. If user preferences are relatively uniform and one platform has a meaningful quality advantage, there is every reason to expect that most consumers will join the same one—even though the market remains highly contestable. On the other side of the equation, because platforms face very few capacity constraints, there are few limits to a given platform’s growth. As will be explained throughout this piece, this intuition is as old as economics itself.
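A toy simulation helps illustrate the intuition. The model below is my own illustrative sketch (the quality levels and noise term are arbitrary assumptions, not estimates of any real platform): consumers pay nothing, face no switching costs, and simply pick whichever platform they perceive as best. A modest quality edge is enough to tip most of the market toward one platform, and a better entrant tips it right back.

```python
import random

def market_shares(platform_quality, n_users=100_000, noise=1.0, seed=0):
    """Each user joins the platform with the highest perceived quality.
    With zero prices and zero switching costs, nothing else matters."""
    rng = random.Random(seed)
    counts = {name: 0 for name in platform_quality}
    for _ in range(n_users):
        perceived = {name: q + rng.gauss(0, noise)
                     for name, q in platform_quality.items()}
        counts[max(perceived, key=perceived.get)] += 1
    return {name: c / n_users for name, c in counts.items()}

# Incumbent has a modest quality edge: the market "tips" toward it.
print(market_shares({"incumbent": 3.0, "rival": 1.0}))

# A superior entrant appears: with no switching costs, users re-sort
# immediately, and the market tips the other way.
print(market_shares({"incumbent": 3.0, "rival": 1.0, "entrant": 5.0}))
```

Nothing in this setup involves network effects or entry barriers; concentration emerges purely from similar preferences and the absence of frictions, which is the sense in which the market remains contestable.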
The Bertrand Paradox
In 1883, French mathematician Joseph Bertrand published a powerful critique of two of the most high-profile economic thinkers of his time: the late Antoine Augustin Cournot and Léon Walras (it would be another seven years before Alfred Marshall published his famous Principles of Economics).
Bertrand criticized several of Cournot and Walras’ widely accepted findings. This included Cournot’s conclusion that duopoly competition would lead to prices above marginal cost—or, in other words, that duopolies were imperfectly competitive.
By reformulating the problem slightly, Bertrand arrived at the opposite conclusion. He argued that each firm’s incentive to undercut its rival would ultimately lead to marginal-cost pricing, with one seller potentially capturing the entire market:
There is a decisive objection [to Cournot’s model]: According to his hypothesis, no [supracompetitive] equilibrium is possible. There is no limit to price decreases; whatever the joint price being charged by firms, a competitor could always undercut this price and, with few exceptions, attract all consumers. If the competitor is allowed to get away with this [i.e. the rival does not react], it will double its profits.
This result is mainly driven by the assumption that, unlike in Cournot’s model, firms can immediately respond to their rival’s chosen price/quantity. In other words, Bertrand implicitly framed the competitive process as price competition, rather than quantity competition (under price competition, firms do not face any capacity constraints and they cannot commit to producing given quantities of a good):
If Cournot’s calculations mask this result, it is because of a remarkable oversight. Referring to them as D and D’, Cournot deals with the quantities sold by each of the two competitors and treats them as independent variables. He assumes that if one were to change by the will of one of the two sellers, the other one could remain fixed. The opposite is evidently true.
This later came to be known as the “Bertrand paradox”—the notion that duopoly-market configurations can produce the same outcome as perfect competition (i.e., P=MC).
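For readers who prefer symbols, here is a minimal sketch of the undercutting logic in the standard textbook rendering of Bertrand competition (my notation, not Bertrand’s original formulation):

```latex
% Two firms with identical marginal cost c sell a homogeneous good and set prices.
% All demand flows to the cheaper firm (and is split evenly if prices are equal):
\[
  \pi_i(p_i, p_j) =
  \begin{cases}
    (p_i - c)\,D(p_i) & \text{if } p_i < p_j \\[2pt]
    \tfrac{1}{2}(p_i - c)\,D(p_i) & \text{if } p_i = p_j \\[2pt]
    0 & \text{if } p_i > p_j
  \end{cases}
\]
% At any common price p > c, either firm can undercut by an arbitrarily small amount,
% capture the whole market, and (almost) double its profit -- exactly the objection
% Bertrand raises in the passage quoted above. The only mutual best response is
% marginal-cost pricing:
\[
  p_1^{*} = p_2^{*} = c
\]
```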
But while Bertrand’s critique was ostensibly directed at Cournot’s model of duopoly competition, his underlying point was much broader. Above all, Bertrand seemed preoccupied with the notion that expressing economic problems mathematically merely gives them a veneer of accuracy. In that sense, he was one of the first economists (at least to my knowledge) to argue that the choice of assumptions has a tremendous influence on the predictions of economic models, potentially rendering them unreliable:
On other occasions, Cournot introduces assumptions that shield his reasoning from criticism—scholars can always present problems in a way that suits their reasoning.
All of this is not to say that Bertrand’s predictions regarding duopoly competition necessarily hold in real-world settings; evidence from experimental settings is mixed. Instead, the point is epistemological. Bertrand’s reasoning was groundbreaking because he ventured that market structures are not the sole determinants of consumer outcomes. More broadly, he argued that assumptions regarding the competitive process hold significant sway over the results that a given model may produce (and, as a result, over normative judgements concerning the desirability of given market configurations).
The Theory of Contestable Markets
Bertrand is certainly not the only economist to have suggested market structures alone do not determine competitive outcomes. In the early 1980s, William Baumol (and various co-authors) went one step further. Baumol argued that, under certain conditions, even monopoly market structures could deliver perfectly competitive outcomes. This thesis thus rejected the Structure-Conduct-Performance (“SCP”) Paradigm that dominated policy discussions of the time.
Baumol’s main point was that industry structure is not the main driver of market “contestability,” which is the key determinant of consumer outcomes. In his words:
In the limit, when entry and exit are completely free, efficient incumbent monopolists and oligopolists may in fact be able to prevent entry. But they can do so only by behaving virtuously, that is, by offering to consumers the benefits which competition would otherwise bring. For every deviation from good behavior instantly makes them vulnerable to hit-and-run entry.
For instance, it is widely accepted that “perfect competition” leads to low prices because firms are price-takers; if one does not sell at marginal cost, it will be undercut by rivals. Observers often assume this is due to the number of independent firms on the market. Baumol suggests this is wrong. Instead, the result is driven by the sanction that firms face for deviating from competitive pricing.
In other words, numerous competitors are a sufficient, but not a necessary, condition for competitive pricing. Monopolies can produce the same outcome when there is a credible threat of entry and an incumbent’s deviation from competitive pricing would be sanctioned. This is notably the case when barriers to entry are extremely low.
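The point can be stated compactly. What follows is a stylized rendering of Baumol’s sustainability condition, not his exact formalism:

```latex
% With free (costless and instantaneous) entry and exit, an incumbent charging a price
% above average cost invites "hit-and-run" entry: an entrant can undercut, serve the
% market profitably, and leave before the incumbent responds. Sustaining the incumbent's
% position therefore requires
\[
  p \;\le\; AC(q) \qquad \text{(at most zero economic profit)}
\]
% In other words, the threat of entry, not the number of incumbents, disciplines pricing.
```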
Take this hypothetical example from the world of cryptocurrencies. It is largely irrelevant to a user whether there are few or many crypto exchanges on which to trade coins, nonfungible tokens (NFTs), etc. What does matter is that there is at least one exchange that meets that user’s needs in terms of both price and quality of service. This could happen because there are many competing exchanges, or because a failure by the few (or even one) existing exchanges to meet those needs would attract the entry of others to which the user could readily switch—thus keeping the behavior of the existing exchanges in check.
This has far-reaching implications for antitrust policy, as Baumol was quick to point out:
This immediately offers what may be a new insight on antitrust policy. It tells us that a history of absence of entry in an industry and a high concentration index may be signs of virtue, not of vice. This will be true when entry costs in our sense are negligible.
Given the foregoing, Baumol surmised that industry structure must be driven by endogenous factors—such as firms’ cost structures—rather than by the intensity of competition that firms face. For instance, scale economies might make monopoly (or another concentrated structure) the most efficient configuration in some industries. But so long as rivals can sanction incumbents for failing to compete, the market remains contestable. Accordingly, at least in some industries, both the most efficient and the most contestable market configuration may entail some level of concentration.
To put this last point in even more concrete terms, online platform markets may have features that make scale (and large market shares) efficient. If so, there is every reason to believe that competition could lead to more, not less, concentration.
How Contestable Are Digital Markets?
The insights of Bertrand and Baumol have important ramifications for contemporary antitrust debates surrounding digital platforms. Indeed, it is critical to ascertain whether the (relatively) concentrated market structures we see in these industries are a sign of superior efficiency (and are consistent with potentially intense competition), or whether they are merely caused by barriers to entry.
The barrier-to-entry explanation has been repeated ad nauseam in recent scholarly reports, competition decisions, and pronouncements by legislators. There is thus little need to restate that thesis here. On the other hand, the contestability argument is almost systematically ignored.
Several factors suggest that online platform markets are far more contestable than critics routinely make them out to be.
First and foremost, consumer switching costs are extremely low for most online platforms. To cite but a few examples: Changing your default search engine requires at most a couple of clicks; joining a new social network can be done by downloading an app and importing your contacts to the app; and buying from an alternative online retailer is almost entirely frictionless, thanks to intermediaries such as PayPal.
These zero or near-zero switching costs are compounded by consumers’ ability to “multi-home.” In simple terms, joining TikTok does not require users to close their Facebook account. And the same applies to other online services. As a result, there is almost no opportunity cost to join a new platform. This further reduces the already tiny cost of switching.
Decades of app development have greatly improved the quality of applications’ graphical user interfaces (GUIs), to such an extent that costs to learn how to use a new app are mostly insignificant. Nowhere is this more apparent than for social media and sharing-economy apps (it may be less true for productivity suites that enable more complex operations). For instance, remembering a couple of intuitive swipe motions is almost all that is required to use TikTok. Likewise, ridesharing and food-delivery apps merely require users to be familiar with the general features of other map-based applications. It is almost unheard of for users to complain about usability—something that would have seemed impossible in the early 21st century, when complicated interfaces still plagued most software.
A second important argument in favor of contestability is that, by and large, online platforms face only limited capacity constraints. In other words, platforms can expand output rapidly (though not necessarily costlessly).
Perhaps the clearest example of this is the sudden rise of the Zoom service in early 2020. As a result of the COVID-19 pandemic, Zoom went from around 10 million daily meeting participants in early 2020 to more than 300 million by late April 2020. Despite being a relatively data-intensive service, Zoom did not struggle to meet the new demand generated by this more-than-30-fold increase in its user base. The service never had to turn down users, reduce call quality, or significantly increase its price. In short, capacity largely followed demand for its service. Online industries thus seem closer to the Bertrand model of competition, where the best platform can almost immediately serve any consumers that demand its services.
Conclusion
Of course, none of this is to say that online markets are perfectly contestable. The central point is, instead, that critics are too quick to assume they are not. Take the following examples.
Scholars routinely cite the putatively strong concentration of digital markets to argue that big tech firms do not face strong competition, but this is a non sequitur. As Bertrand and Baumol (and others) show, what matters is not whether digital markets are concentrated, but whether they are contestable. If a superior rival can rapidly gain user traction, that prospect alone will discipline the behavior of incumbents.
Markets where incumbents do not face significant entry from competitors are just as consistent with vigorous competition as they are with barriers to entry. Rivals could decline to enter either because incumbents have aggressively improved their product offerings or because incumbents are shielded by barriers to entry (as critics suppose). The former is consistent with competition, the latter with monopoly slack.
Similarly, it would be wrong to presume, as many do, that concentration in online markets is necessarily driven by network effects and other scale-related economies. As ICLE scholars have argued elsewhere (here, here and here), these forces are not nearly as decisive as critics assume (and it is debatable that they constitute barriers to entry).
Finally, and perhaps most importantly, this piece has argued that many factors could explain the relatively concentrated market structures that we see in digital industries. The absence of switching costs and capacity constraints are but two such examples. These explanations, overlooked by many observers, suggest digital markets are more contestable than is commonly perceived.
In short, critics’ failure to meaningfully grapple with these issues has shaped the prevailing zeitgeist in tech-policy debates. Cournot and Bertrand’s intuitions about oligopoly competition may be more than a century old, but they continue to be tested empirically. It is about time the same empirical standards were applied to tech-policy debates.
Recent commentary on the proposed merger between WarnerMedia and Discovery, as well as Amazon’s acquisition of MGM, often has included the suggestion that the online content-creation and video-streaming markets are excessively consolidated, or that they will become so absent regulatory intervention. For example, in a recent letter to the U.S. Justice Department (DOJ), the American Antitrust Institute and Public Knowledge opine that:
Slow and inadequate oversight risks the streaming market going the same route as cable—where consumers have little power, few options, and where consolidation and concentration reign supreme. A number of threats to competition are clear, as discussed in this section, including: (1) market power issues surrounding content and (2) the role of platforms in “gatekeeping” to limit competition.
But the AAI/PK assessment overlooks key facts about the video-streaming industry, some of which suggest that, if anything, these markets currently suffer from too much fragmentation.
The problem is well-known: any individual video-streaming service will offer only a fraction of the content that viewers want, but budget constraints limit the number of services that a household can afford to subscribe to. It may be counterintuitive, but consolidation in the market for video-streaming can solve both problems at once.
One subscription is not enough
Surveys find that U.S. households currently maintain, on average, four video-streaming subscriptions. This explains why even critics concede that a plethora of streaming services compete for consumer eyeballs. For instance, the AAI and PK point out that:
Today, every major media company realizes the value of streaming and a bevy of services have sprung up to offer different catalogues of content.
These companies have challenged the market leader, Netflix and include: Prime Video (2006), Hulu (2007), Paramount+ (2014), ESPN+ (2018), Disney+ (2019), Apple TV+ (2019), HBO Max (2020), Peacock (2020), and Discovery+ (2021).
With content scattered across several platforms, multiple subscriptions are the only way for households to access all (or most) of the programs they desire. Indeed, other than price, library sizes and the availability of exclusive content are reportedly the main drivers of consumer purchase decisions.
Of course, there is nothing inherently wrong with the current equilibrium in which consumers multi-home across multiple platforms. One potential explanation is demand for high-quality exclusive content, which requires tremendous investment to develop and promote. Production costs for TV series routinely run in the tens of millions of dollars per episode (see here and here). Economic theory predicts these relationship-specific investments made by both producers and distributors will cause producers to opt for exclusive distribution or vertical integration. The most sought-after content is thus exclusive to each platform. In other words, exclusivity is likely the price that users must pay to ensure that high-quality entertainment continues to be produced.
But while this paradigm has many strengths, the ensuing fragmentation can be detrimental to consumers, as this may lead to double marginalization or mundane issues like subscription fatigue. Consolidation can be a solution to both.
Substitutes, complements, or unrelated?
As Hal Varian explains in his seminal book, the relationship between two goods can be characterized by reference to three polar cases: perfect substitutes (i.e., two goods are perfectly interchangeable); perfect complements (i.e., there is no value to owning one good without the other); or goods in independent markets (i.e., the price of one good does not affect demand for the other).
These distinctions are critical when it comes to market concentration. All else equal—which is obviously not the case in reality—increased concentration leads to lower prices for complements, and higher prices for substitutes. And if demand for two goods is unrelated, then bringing them under common ownership should not affect their prices.
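In more formal terms, the distinction turns on the sign of the cross-price effect on demand. These are standard textbook definitions, added here only for clarity:

```latex
% Classifying goods i and j by how a price change for one affects demand for the other:
\[
  \frac{\partial q_i}{\partial p_j} > 0 \;\Rightarrow\; \text{substitutes}, \qquad
  \frac{\partial q_i}{\partial p_j} < 0 \;\Rightarrow\; \text{complements}, \qquad
  \frac{\partial q_i}{\partial p_j} = 0 \;\Rightarrow\; \text{independent demand}
\]
% A merged owner of two complements internalizes the fact that cutting the price of one
% raises demand for the other (the Cournot-complements logic), which pushes prices down.
% For substitutes, the internalized effect runs in the opposite direction.
```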
To at least some extent, streaming services should be seen as complements rather than substitutes—or, failing that, as services with unrelated demand. If they were perfect substitutes, consumers would be indifferent between two Netflix subscriptions or one Netflix plan and one Amazon Prime plan. That is obviously not the case. Nor are they perfect complements, which would mean that Netflix is worthless without Amazon Prime, Disney+, and other services.
There is nevertheless reason to believe there exists some complementarity between streaming services, or at least that demand for them is independent: most consumers subscribe to multiple services, and almost no one subscribes to the same service twice.
This assertion is also supported by the ubiquitous bundling of subscriptions in the cable distribution industry, which also has recently been seen in video-streaming markets. For example, in the United States, Disney+ can be purchased in a bundle with Hulu and ESPN+.
The key question is: is each service more valuable, less valuable, or equally valuable in isolation as it is when bundled? If households place some additional value on having a complete video offering (one that includes child entertainment, sports, more mature content, etc.), and if they value the convenience of accessing more of their content via a single app, then we can infer these services are to some extent complementary.
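In symbols, using hypothetical willingness-to-pay values v(·):

```latex
% Let v(A) and v(B) be a household's willingness to pay for each service on its own,
% and v(A,B) its willingness to pay for both together. For that household the services
% are complements whenever bundling creates extra value,
\[
  v(A,B) > v(A) + v(B),
\]
% substitutes when the inequality is reversed, and independent when it holds with equality.
```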
Finally, it is worth noting that any complementarity between these services would be largely endogenous. If the industry suddenly switched to a paradigm of non-exclusive content—as is broadly the case for audio streaming—the above analysis would be altered (though, as explained above, such a move would likely be detrimental to users). Streaming services would become substitutes if they offered identical catalogues.
In short, the extent to which streaming services are complements ultimately boils down to an empirical question that may fluctuate with industry practices. As things stand, there is reason to believe that these services feature some complementarities, or at least that demand for them is independent. In turn, this suggests that further consolidation within the industry would not lead to price increases and may even reduce them.
Consolidation can enable price discrimination
It is well-established that bundling entertainment goods can enable firms to better engage in price discrimination, often increasing output and reducing deadweight loss in the process.
Take George Stigler’s famous explanation for the practice of “block booking,” in which movie studios sold multiple films to independent movie theaters as a unit. Stigler assumed that the underlying goods were neither substitutes nor complements.
Stigler, George J. (1963), “United States v. Loew’s Inc.: A Note on Block-Booking,” Supreme Court Review, Vol. 1963, No. 1, Article 2.
The upshot is that, when consumer tastes for content are idiosyncratic—as is almost certainly the case for movies and television series—it can counterintuitively make sense to sell differing content as a bundle. In doing so, the distributor avoids pricing consumers out of the content upon which they place a lower value. Moreover, this solution is more efficient than price discriminating on an unbundled basis, as doing so would require far more information on the seller’s part and would be vulnerable to arbitrage.
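To see the logic, consider an illustrative numerical example in the spirit of Stigler’s argument. The figures are hypothetical, not Stigler’s own:

```latex
% Two theaters, two films, with negatively correlated valuations (all figures hypothetical):
%                Film X     Film Y
%   Theater A    $8,000     $2,500
%   Theater B    $7,000     $3,000
%
% Selling the films separately at uniform prices, the studio's best prices are $7,000 for X
% (both theaters buy) and $2,500 for Y (both buy), earning 14,000 + 5,000 = $19,000.
% Selling only the bundle, it charges the lower of the two bundle valuations,
% min(10,500, 10,000) = $10,000, both theaters buy, and revenue rises to $20,000:
\[
  \underbrace{2(7{,}000) + 2(2{,}500)}_{\text{separate pricing}} = 19{,}000
  \;<\;
  \underbrace{2 \times \min\{8{,}000 + 2{,}500,\; 7{,}000 + 3{,}000\}}_{\text{bundle pricing}} = 20{,}000
\]
% The bundle extracts more of each buyer's total valuation without pricing either
% theater out of the film it values less.
```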
In short, bundling enables each consumer to access a much wider variety of content. This, in turn, provides a powerful rationale for mergers in the video-streaming space—particularly where they can bring together varied content libraries. Put differently, it cuts in favor of more, not less, concentration in video-streaming markets (at least, up to a certain point).
Scale-related economies
Finally, a wide array of scale-related economies further support the case for concentration in video-streaming markets. These include potential economies of scale, network effects, and reduced transaction costs.
The simplest of these ideas is that the cost of video streaming may decrease at the margin (i.e., serving each marginal viewer might be cheaper than the previous one). In other words, mergers of video-streaming services may enable platforms to operate at a more efficient scale. There has notably been some discussion of whether Netflix benefits from scale economies of this sort. But this is, of course, ultimately an empirical question. As I have written with Geoffrey Manne, we should not assume that this is the case for all digital platforms, or that these increasing returns are present at all ranges of output.
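One simple way to express the idea, using a generic cost function rather than an estimate of any platform’s actual costs:

```latex
% With a large fixed cost F (content, infrastructure) and a small per-viewer cost c,
% average cost per subscriber falls as the subscriber base q grows:
\[
  AC(q) = \frac{F}{q} + c, \qquad \frac{d\,AC(q)}{dq} = -\frac{F}{q^{2}} < 0
\]
% Whether marginal cost itself also declines, and over what range of output scale
% economies persist, is the empirical question flagged in the text.
```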
Likewise, the fact that content can earn greater revenues by reaching a wider audience (or a greater number of small niches) may increase a producer’s incentive to create high-quality content. For example, Netflix’s recent hit series Squid Game reportedly cost $16.8 million to produce a total of nine episodes. This is significant for a Korean-language thriller. These expenditures were likely only possible because of Netflix’s vast network of viewers. Video-streaming mergers can jump-start these effects by bringing previously fragmented audiences onto a single platform.
Finally, operating at a larger scale may enable firms and consumers to economize on various transaction and search costs. For instance, consumers don’t need to manage several subscriptions, and searching for content is easier within a single ecosystem.
Conclusion
In short, critics could hardly be more wrong in assuming that consolidation in the video-streaming industry will necessarily harm consumers. To the contrary, these mergers should be presumptively welcomed because, to a first approximation, they are likely to engender lower prices and reduce deadweight loss.
Critics routinely draw parallels between video streaming and the consolidation that previously swept through the cable industry. They cite these events as evidence that consolidation was (and still is) inefficient and exploitative of consumers. As AAI and PK frame it:
Moreover, given the broader competition challenges that reside in those markets, and the lessons learned from a failure to ensure competition in the traditional MVPD markets, enforcers should be particularly vigilant.
But while it might not have been ideal for all consumers, the comparatively laissez-faire approach to competition in the cable industry arguably facilitated the United States’ emergence as a global leader for TV programming. We are now witnessing what appears to be a similar trend in the online video-streaming market.
This is mostly a good thing. While a single streaming service might not be the optimal industry configuration from a welfare standpoint, it would be equally misguided to assume that fragmentation necessarily benefits consumers. In fact, as argued throughout this piece, there are important reasons to believe that the status quo—with at least 10 significant players—is too fragmented and that consumers would benefit from additional consolidation.
Questions concerning the role that economic theory should play in policy decisions are nothing new. Milton Friedman famously drew a distinction between “positive” and “normative” economics, notably arguing that theoretical models were valuable despite their unrealistic assumptions. Kenneth Arrow and Gerard Debreu’s highly theoretical work on General Equilibrium Theory is widely acknowledged as one of the most important achievements of modern economics.
But for all their intellectual value and academic merit, the use of models to inform policy decisions is not uncontroversial. There is indeed a long and unfortunate history of influential economic models turning out to be poor depictions (and predictors) of real-world outcomes.
This raises a key question: should policymakers use economic models to inform their decisions and, if so, how? This post uses the economics of externalities to illustrate both the virtues and pitfalls of economic modeling. Throughout economic history, externalities have routinely been cited to support claims of market failure and calls for government intervention. However, as explained below, these fears have frequently failed to withstand empirical scrutiny.
Today, similar models are touted to support government intervention in digital industries. Externalities are notably said to prevent consumers from switching between platforms, allegedly leading to unassailable barriers to entry and deficient venture-capital investment. Unfortunately, as explained below, the models that underpin these fears are highly abstracted and far removed from underlying market realities.
Ultimately, this post argues that, while models provide a powerful way of thinking about the world, naïvely transposing them to real-world settings is misguided. This is not to say that models are useless—quite the contrary. Indeed, “falsified” models can shed powerful light on economic behavior that would otherwise prove hard to understand.
Bees
Fears surrounding economic externalities are as old as modern economics. For example, in the 1950s, economists routinely cited bee pollination as a source of externalities and, ultimately, market failure.
The basic argument was straightforward: Bees and orchards provide each other with positive externalities. Bees cross-pollinate flowers, and orchards contain vast amounts of nectar upon which bees feed, thus improving honey yields. Accordingly, several famous economists argued that there was a market failure: bees fly where they please, and farmers cannot prevent bees from feeding on their blossoming flowers—allegedly causing underinvestment in both orchards and beekeeping. This led James Meade to conclude:
[T]he apple-farmer provides to the beekeeper some of his factors free of charge. The apple-farmer is paid less than the value of his marginal social net product, and the beekeeper receives more than the value of his marginal social net product.
Francis Bator reached a similar conclusion:
If, then, apple producers are unable to protect their equity in apple-nectar and markets do not impute to apple blossoms their correct shadow value, profit-maximizing decisions will fail correctly to allocate resources at the margin. There will be failure “by enforcement.” This is what I would call an ownership externality. It is essentially Meade’s “unpaid factor” case.
It took more than 20 years and painstaking research by Steven Cheung to conclusively debunk these assertions. So how did economic agents overcome this “insurmountable” market failure?
The answer, it turns out, was extremely simple. While bees do fly where they please, the relative placement of beehives and orchards has a tremendous impact on both fruit and honey yields. This is partly because bees have a very limited mean foraging range (roughly 2-3km). This left economic agents with ample scope to prevent free-riding.
Using these natural sources of excludability, they built a web of complex agreements that internalize the symbiotic virtues of beehives and fruit orchards. To cite Steven Cheung’s research:
Pollination contracts usually include stipulations regarding the number and strength of the colonies, the rental fee per hive, the time of delivery and removal of hives, the protection of bees from pesticide sprays, and the strategic placing of hives. Apiary lease contracts differ from pollination contracts in two essential aspects. One is, predictably, that the amount of apiary rent seldom depends on the number of colonies, since the farmer is interested only in obtaining the rent per apiary offered by the highest bidder. Second, the amount of apiary rent is not necessarily fixed. Paid mostly in honey, it may vary according to either the current honey yield or the honey yield of the preceding year.
But what of neighboring orchards? Wouldn’t these entail a more complex externality (i.e., could one orchard free-ride on agreements concluded between other orchards and neighboring apiaries)? Apparently not:
Acknowledging the complication, beekeepers and farmers are quick to point out that a social rule, or custom of the orchards, takes the place of explicit contracting: during the pollination period the owner of an orchard either keeps bees himself or hires as many hives per area as are employed in neighboring orchards of the same type. One failing to comply would be rated as a “bad neighbor,” it is said, and could expect a number of inconveniences imposed on him by other orchard owners. This customary matching of hive densities involves the exchange of gifts of the same kind, which apparently entails lower transaction costs than would be incurred under explicit contracting, where farmers would have to negotiate and make money payments to one another for the bee spillover.
In short, not only did the bee/orchard externality model fail, but it failed to account for extremely obvious counter-evidence. Even a rapid flip through the Yellow Pages (or, today, a search on Google) would have revealed a vibrant market for bee pollination. Ultimately, the bee externalities, at least as presented in economic textbooks, were merely an economic “fable.” Unfortunately, they would not be the last.
The Lighthouse
Lighthouses provide another cautionary tale. Indeed, Henry Sidgwick, A.C. Pigou, John Stuart Mill, and Paul Samuelson all cited the externalities involved in the provision of lighthouse services as a source of market failure.
Here, too, the problem was allegedly straightforward. A lighthouse cannot prevent ships from free-riding on its services when they sail by it (i.e., it is mostly impossible to determine whether a ship has paid fees and to turn off the lighthouse if that is not the case). Hence there can be no efficient market for light dues (lighthouses were seen as a “public good”). As Paul Samuelson famously put it:
Take our earlier case of a lighthouse to warn against rocks. Its beam helps everyone in sight. A businessman could not build it for a profit, since he cannot claim a price from each user. This certainly is the kind of activity that governments would naturally undertake.
He added that:
[E]ven if the operators were able—say, by radar reconnaissance—to claim a toll from every nearby user, that fact would not necessarily make it socially optimal for this service to be provided like a private good at a market-determined individual price. Why not? Because it costs society zero extra cost to let one extra ship use the service; hence any ships discouraged from those waters by the requirement to pay a positive price will represent a social economic loss—even if the price charged to all is no more than enough to pay the long-run expenses of the lighthouse.
More than a century after it was first mentioned in economics textbooks, Ronald Coase finally laid the lighthouse myth to rest—rebutting Samuelson’s second claim in the process.
What piece of evidence had eluded economists for all those years? As Coase observed, contemporary economists had somehow overlooked the fact that large parts of the British lighthouse system were privately operated, and had been for centuries:
[T]he right to operate a lighthouse and to levy tolls was granted to individuals by Acts of Parliament. The tolls were collected at the ports by agents (who might act for several lighthouses), who might be private individuals but were commonly customs officials. The toll varied with the lighthouse and ships paid a toll, varying with the size of the vessel, for each lighthouse passed. It was normally a rate per ton (say 1/4d or 1/2d) for each voyage. Later, books were published setting out the lighthouses passed on different voyages and the charges that would be made.
In other words, lighthouses used a simple physical feature to create “excludability” and prevent free-riding. The main reason ships require lighthouses is to avoid hitting rocks when they make their way to a port. By tying port fees and light dues, lighthouse owners—aided by mild government-enforced property rights—could easily earn a return on their investments, thus disproving the lighthouse free-riding myth.
Ultimately, this meant that a large share of the British lighthouse system was privately operated throughout the 19th century, and this share would presumably have been more pronounced if government-run “Trinity House” lighthouses had not crowded out private investment:
The position in 1820 was that there were 24 lighthouses operated by Trinity House and 22 by private individuals or organizations. But many of the Trinity House lighthouses had not been built originally by them but had been acquired by purchase or as the result of the expiration of a lease.
Of course, this system was not perfect. Some ships (notably foreign ones that did not dock in the United Kingdom) might free-ride on this arrangement. It also entailed some level of market power. The ability to charge light dues meant that prices were higher than the “socially optimal” baseline of zero (the marginal cost of providing light is close to zero). Though it is worth noting that tying port fees and light dues might also have decreased double marginalization, to the benefit of sailors.
Samuelson was particularly wary of the market power that went hand in hand with the private provision of public goods, including lighthouses:
Being able to limit a public good’s consumption does not make it a true-blue private good. For what, after all, are the true marginal costs of having one extra family tune in on the program? They are literally zero. Why then prevent any family which would receive positive pleasure from tuning in on the program from doing so?
However, as Coase explained, light fees represented only a tiny fraction of a ship’s costs. In practice, they were thus unlikely to affect market output meaningfully:
[W]hat is the gain which Samuelson sees as coming from this change in the way in which the lighthouse service is financed? It is that some ships which are now discouraged from making a voyage to Britain because of the light dues would in future do so. As it happens, the form of the toll and the exemptions mean that for most ships the number of voyages will not be affected by the fact that light dues are paid. There may be some ships somewhere which are laid up or broken up because of the light dues, but the number cannot be great, if indeed there are any ships in this category.
Samuelson’s critique also falls prey to the Nirvana Fallacy pointed out by Harold Demsetz: markets might not be perfect, but neither is government intervention. Market power and imperfect appropriability are the two (paradoxical) pitfalls of the former; “white elephants,” underinvestment, and a lack of competition (and the information it generates) tend to stem from the latter.
Which of these solutions is superior, in each case, is an empirical question that early economists had simply failed to consider—assuming instead that market failure was systematic in markets that present prima facie externalities. In other words, models were taken as gospel without any circumspection about their relevance to real-world settings.
The Tragedy of the Commons
Externalities were also said to undermine the efficient use of “common pool resources,” such as grazing lands, common irrigation systems, and fisheries—resources where one agent’s use diminishes that of others, and where exclusion is either difficult or impossible.
The most famous formulation of this problem is Garrett Hardin’s highly influential (over 47,000 citations) “tragedy of the commons.” Hardin cited the example of multiple herdsmen occupying the same grazing ground:
The rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another … But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.
In more technical terms, each economic agent purportedly exerts an unpriced negative externality on the others, thus leading to the premature depletion of common pool resources. Hardin extended this reasoning to other problems, such as pollution and allegations of global overpopulation.
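In stylized form, the standard textbook rendering of the commons problem (not Hardin’s own notation) runs as follows:

```latex
% N herdsmen graze herds of size x_i on a shared pasture. Total output is f(X), with
% X = sum_i x_i, f(0) = 0, f' > 0 and f'' < 0 (diminishing returns); c is the cost per animal.
% Each herdsman earns the average product of the animals he contributes:
\[
  \max_{x_i} \;\; \frac{x_i}{X}\, f(X) \;-\; c\,x_i
\]
% With open access, grazing expands until rents are (roughly) dissipated, i.e. average
% product equals cost, f(X)/X = c, whereas efficiency requires marginal product equal
% to cost, f'(X*) = c. Since f'(X) < f(X)/X under diminishing returns, X > X*:
% the pasture is overgrazed because each user ignores the cost he imposes on the others.
```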
Although Hardin hardly documented any real-world occurrences of this so-called tragedy, his policy prescriptions were unequivocal:
The most important aspect of necessity that we must now recognize, is the necessity of abandoning the commons in breeding. No technical solution can rescue us from the misery of overpopulation. Freedom to breed will bring ruin to all.
As with many other theoretical externalities, empirical scrutiny revealed that these fears were greatly overblown. In her Nobel-winning work, Elinor Ostrom showed that economic agents often found ways to mitigate these potential externalities markedly. For example, mountain villages often implement rules and norms that limit the use of grazing grounds and wooded areas. Likewise, landowners across the world often set up “irrigation communities” that prevent agents from overusing water.
Along similar lines, Julian Morris and I conjecture that informal arrangements and reputational effects might mitigate opportunistic behavior in the standard essential patent industry.
These bottom-up solutions are certainly not perfect. Many common institutions fail—for example, Elinor Ostrom documents several problematic fisheries, groundwater basins and forests, although it is worth noting that government intervention was sometimes behind these failures. To cite but one example:
Several scholars have documented what occurred when the Government of Nepal passed the “Private Forest Nationalization Act” […]. Whereas the law was officially proclaimed to “protect, manage and conserve the forest for the benefit of the entire country”, it actually disrupted previously established communal control over the local forests. Messerschmidt (1986, p.458) reports what happened immediately after the law came into effect:
Nepalese villagers began freeriding — systematically overexploiting their forest resources on a large scale.
In any case, the question is not so much whether private institutions fail, but whether they do so more often than government intervention, be it regulation or property rights. In short, the “tragedy of the commons” is ultimately an empirical question: what works better in each case, government intervention, propertization, or emergent rules and norms?
More broadly, the key lesson is that it is wrong to blindly apply models while ignoring real-world outcomes. As Elinor Ostrom herself put it:
The intellectual trap in relying entirely on models to provide the foundation for policy analysis is that scholars then presume that they are omniscient observers able to comprehend the essentials of how complex, dynamic systems work by creating stylized descriptions of some aspects of those systems.
Dvorak Keyboards
In 1985, Paul David published an influential paper arguing that market failures had undermined competition between the QWERTY and Dvorak keyboard layouts. This version of history then became a dominant narrative in the field of network economics, including in works by Joseph Farrell and Garth Saloner, and by Jean Tirole.
The basic claim was that QWERTY users’ reluctance to switch toward the putatively superior Dvorak layout exerted a negative externality on the rest of the ecosystem (and a positive externality on other QWERTY users), thus preventing the adoption of a more efficient standard. As Paul David put it:
Although the initial lead acquired by QWERTY through its association with the Remington was quantitatively very slender, when magnified by expectations it may well have been quite sufficient to guarantee that the industry eventually would lock in to a de facto QWERTY standard. […]
Competition in the absence of perfect futures markets drove the industry prematurely into standardization on the wrong system — where decentralized decision making subsequently has sufficed to hold it.
Unfortunately, many of the above papers paid little to no attention to actual market conditions in the typewriter and keyboard layout industries. Years later, Stan Liebowitz and Stephen Margolis undertook a detailed analysis of the keyboard layout market. They almost entirely rejected any notion that QWERTY prevailed despite it being the inferior standard:
Yet there are many aspects of the QWERTY-versus-Dvorak fable that do not survive scrutiny. First, the claim that Dvorak is a better keyboard is supported only by evidence that is both scant and suspect. Second, studies in the ergonomics literature find no significant advantage for Dvorak that can be deemed scientifically reliable. Third, the competition among producers of typewriters, out of which the standard emerged, was far more vigorous than is commonly reported. Fourth, there were far more typing contests than just the single Cincinnati contest. These contests provided ample opportunity to demonstrate the superiority of alternative keyboard arrangements. That QWERTY survived significant challenges early in the history of typewriting demonstrates that it is at least among the reasonably fit, even if not the fittest that can be imagined.
In short, there was little to no evidence supporting the view that QWERTY inefficiently prevailed because of network effects. The falsification of this narrative also weakens broader claims that network effects systematically lead to either excess momentum or excess inertia in standardization. Indeed, it is tempting to characterize all network industries with heavily skewed market shares as resulting from market failure. Yet the QWERTY/Dvorak story suggests that such a conclusion would be premature.
Killzones, Zoom, and TikTok
If you are still reading at this point, you might think that contemporary scholars would know better than to base calls for policy intervention on theoretical externalities. Alas, nothing could be further from the truth.
For instance, a recent paper by Sai Kamepalli, Raghuram Rajan and Luigi Zingales conjectures that the interplay between mergers and network externalities discourages the adoption of superior independent platforms:
If techies expect two platforms to merge, they will be reluctant to pay the switching costs and adopt the new platform early on, unless the new platform significantly outperforms the incumbent one. After all, they know that if the entering platform’s technology is a net improvement over the existing technology, it will be adopted by the incumbent after merger, with new features melded with old features so that the techies’ adjustment costs are minimized. Thus, the prospect of a merger will dissuade many techies from trying the new technology.
Although this key behavioral assumption drives the results of the theoretical model, the paper presents no evidence that it holds in real-world settings. Admittedly, the paper does present evidence of reduced venture-capital investments after mergers involving large tech firms. But even taken on its own terms, this evidence simply does not support the authors’ behavioral assumption.
And this is no isolated example. Over the past couple of years, several scholars have called for more muscular antitrust intervention in networked industries. A common theme is that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in, thus raising barriers to entry for potential rivals (here, here, here).
But there are also countless counterexamples, where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.
Zoom is one of the most salient instances. As I have written previously:
To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.
Along similar lines, Geoffrey Manne and Alec Stapp have put forward a multitude of other examples, including the demise of Yahoo, the disruption of early instant-messaging applications and websites, and MySpace’s rapid decline. In all these cases, outcomes do not match the predictions of theoretical models.
More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and its powerful algorithm are the most likely explanations for its success.
While these developments certainly do not disprove network effects theory, they eviscerate the common belief in antitrust circles that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. As in the previous examples, the question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet this question is systematically omitted from most policy discussions.
In Conclusion
My argument is not that models are without value. To the contrary, framing problems in economic terms—and simplifying them in ways that make them cognizable—enables scholars and policymakers to better understand where market failures might arise, and how these problems can be anticipated and solved by private actors. In other words, models alone cannot tell us that markets will fail, but they can direct inquiries and help us to understand why firms behave the way they do, and why markets (including digital ones) are organized in a given way.
In that respect, both the theoretical and empirical research cited throughout this post offer valuable insights for today’s policymakers.
For a start, as Ronald Coase argued in what is perhaps his most famous work, externalities (and market failure more generally) are a function of transaction costs. When these are low (relative to the value of a good), market failures are unlikely. This is perhaps clearest in the “Fable of the Bees” example. Given bees’ short foraging range, there were ultimately few real-world obstacles to writing contracts that internalized the mutual benefits of bees and orchards.
Perhaps more importantly, economic research sheds light on behavior that might otherwise be seen as anticompetitive. The rules and norms that bind farming/beekeeping communities, as well as users of common pool resources, could easily be analyzed as a cartel by naïve antitrust authorities. Yet externality theory suggests they play a key role in preventing market failure.
Along similar lines, mergers and acquisitions (as well as vertical integration, more generally) can reduce opportunism and other externalities that might otherwise undermine collaboration between firms (here, here and here). And much of the same is true for certain types of unilateral behavior. Tying video games to consoles (and pricing the console below cost) can help entrants overcome network externalities that might otherwise shield incumbents. Likewise, Google tying its proprietary apps to the open source Android operating system arguably enabled it to earn a return on its investments, thus overcoming the externality problem that plagues open source software.
All of this raises a tantalizing prospect that deserves far more attention than it is currently given in policy circles: authorities around the world are seeking to regulate the tech space. Draft legislation has notably been tabled in the United States, European Union and the United Kingdom. These draft bills would all make it harder for large tech firms to implement various economic hierarchies, including mergers and certain contractual arrangements.
This is highly paradoxical. If digital markets are indeed plagued by network externalities and high transaction costs, as critics allege, then preventing firms from adopting complex hierarchies—which have traditionally been seen as a way to solve externalities—is just as likely to exacerbate problems. In other words, like the economists of old cited above, today’s policymakers appear to be focusing too heavily on simple models that predict market failure, and far too little on the mechanisms that firms have put in place to thrive within this complex environment.
The bigger picture is that far more circumspection is required when using theoretical models in real-world policy settings. Indeed, as Harold Demsetz famously put it, the purpose of normative economics is not so much to identify market failures, but to help policymakers determine which of several alternative institutions will deliver the best outcomes for consumers:
This nirvana approach differs considerably from a comparative institution approach in which the relevant choice is between alternative real institutional arrangements. In practice, those who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient. Users of the comparative institution approach attempt to assess which alternative real institutional arrangement seems best able to cope with the economic problem […].