
For a potential entrepreneur, just how much time it will take to compete, and the barrier to entry that time represents, will vary greatly depending on the market he or she wishes to enter. A would-be competitor to the likes of Subway, for example, might not find the time needed to open a sandwich shop to be a substantial hurdle. Even where it does take a long time to bring a product to market, it may be possible to accelerate the timeline if the potential profits are sufficiently high. 

As Steven Salop notes in a recent paper, however, there may be cases where long periods of production time are intrinsic to a product: 

If entry takes a long time, then the fear of entry may not provide a substantial constraint on conduct. The firm can enjoy higher prices and profits until the entry occurs. Even if a strong entrant into the 12-year-old scotch market begins the entry process immediately upon announcement of the merger of its rivals, it will not be able to constrain prices for a long time. [emphasis added]

Salop’s point concerns the supply-side substitutability of Scotch whisky (which, unlike American or Irish whiskey, is spelled without an “e”). That is, to borrow from the European Commission’s definition, whether “suppliers are able to switch production to the relevant products and market them in the short term.” Scotch is aged in wooden barrels for a number of years (at least three, but often longer) before being bottled and sold, and its value usually increases with age. 

Due to this protracted manufacturing process, Salop argues, an entrant cannot compete with an incumbent dominant firm for however many years it would take to age the Scotch; it cannot produce the relevant product in the short term, no matter how high a monopolist’s profits, and hence no matter how strong the incentive to enter the market. If I wanted to sell 12-year-old Scotch, to use Salop’s example, it would take me 12 years to enter the market. In the meantime, a dominant firm could extract monopoly rents, leading to higher prices for consumers. 

But can a whisky producer “enjoy higher prices and profits until … entry occurs”? A dominant firm in the 12-year-old Scotch market will not necessarily be immune to competition for the entire 12-year period it would take to produce a Scotch of the same vintage. There are various ways, on both the demand and supply sides, that pressure could be brought to bear on a monopolist in the Scotch market.

One option would be to bring whiskies being matured for longer-maturity bottles (like 16- or 18-year-old Scotches) into service at the 12-year point, shifting this supply to a market where profits are now relatively higher. 

Alternatively, distilleries may use younger batches to produce whiskies that resemble 12-year-old whiskies in flavor. A 2013 article from The Scotsman discusses this possibility in relation to major Scottish whisky brand Macallan’s decision to switch to selling exclusively no-age-statement (NAS) whiskies, which bear no age on the bottle: 

Experts explained that, for example, nine and 11-year-old whiskies—not yet ready for release under the ten and 12-year brands—could now be blended together to produce the “entry-level” Gold whisky immediately.

An aged Scotch cannot contain any whisky younger than the age stated on the bottle, but an NAS alternative can contain any whisky aged three years or more (though older whiskies are often used to capture a flavor more akin to a 12-year dram). For many drinkers, NAS whiskies are a close substitute for 12-year-old whiskies. They often compete with aged equivalents on quality and flavor, and can command prices similar to bottles in the 12-year category. More than 80% of bottles sold bear no age statement. While this figure includes non-premium bottles, the share of NAS whiskies traded at auction on the secondary market—presumably more likely to be premium—increased from 20% to 30% between 2013 and 2018.

There are also whiskies matured outside of Scotland, in regions such as Taiwan and India, that can achieve flavor profiles akin to older whiskies more quickly, thanks to warmer climates and the faster chemical reactions they cause inside the barrel. Maturation can be accelerated further by using smaller barrels with a higher surface-area-to-volume ratio. Whiskies matured in hotter climates and smaller barrels can be brought to market even more quickly than NAS Scotch matured in the cooler Scottish climate, and may well represent a more authentic replication of an older barrel. 
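The barrel-geometry point is simple arithmetic. Here is a minimal sketch, treating casks as cylinders and using purely illustrative dimensions (real cask shapes and sizes vary):

```python
import math

def sa_to_vol(radius_m: float, height_m: float) -> float:
    """Surface-area-to-volume ratio of a cylindrical cask, in 1/m."""
    volume = math.pi * radius_m ** 2 * height_m
    surface = 2 * math.pi * radius_m ** 2 + 2 * math.pi * radius_m * height_m
    return surface / volume

# Hypothetical dimensions: a ~250-liter cask vs. a ~50-liter cask with the
# same proportions (every dimension scaled down by a factor of 5 ** (1/3)).
big = sa_to_vol(radius_m=0.35, height_m=0.65)     # ~8.8 per meter
small = sa_to_vol(radius_m=0.205, height_m=0.38)  # ~15.0 per meter

print(f"Extra wood contact per liter in the small cask: {small / big - 1:.0%}")
# -> roughly +70%
```

More wood surface per liter of spirit means faster extraction of the flavor and color compounds that maturation imparts, which is the mechanism behind the small-cask strategy described above.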

“Whiskies” that can be manufactured even more quickly may also be on the horizon. Some startups in the United States are experimenting with rapid-aging technology that allows them to produce a whisky-like spirit in a very short amount of time. As detailed in a recent article in The Economist, Endless West in California is using technology that ages spirits within 24 hours, with the resulting bottles selling for $40—a bit less than many 12-year-old Scotches. Although attempts to bypass the conventional maturation process are nothing new, recent attempts have won awards in blind taste-test competitions.

None of this is to dismiss Salop’s underlying point. But it may suggest that, even for a product where time appears to be an insurmountable barrier to entry, there may be more ways to compete than we initially assume.

Earlier this year, the International Center for Law & Economics (ICLE) hosted a conference with the Oxford Union on the themes of innovation, competition, and economic growth with some of our favorite scholars. Though attendance at the event itself was reserved for Oxford Union members, videos from that day are now available for everyone to watch.

Charles Goodhart and Manoj Pradhan on demographics and growth

Charles Goodhart, of Goodhart’s Law fame, and Manoj Pradhan discussed the relationship between demographics and growth, and argued that an aging global population could mean higher inflation and interest rates sooner than many imagine.

Catherine Tucker on privacy and innovation — is there a trade-off?

Catherine Tucker of the Massachusetts Institute of Technology discussed the costs and benefits of privacy regulation with ICLE’s Sam Bowman, and considered whether we face a trade-off between privacy and innovation online and in the fight against COVID-19.

Don Rosenberg on the political and economic challenges facing a global tech company in 2021

Qualcomm’s General Counsel Don Rosenberg, formerly of Apple and IBM, discussed the political and economic challenges facing a global tech company in 2021, as well as the challenges of dealing with China while working in one of the most strategically vital industries in the world.

David Teece on the dynamic capabilities framework

David Teece explained the dynamic capabilities framework, a way of understanding business strategy and behavior in an uncertain world.

Vernon Smith in conversation with Shruti Rajagopalan on what we still have to learn from Adam Smith

Nobel laureate Vernon Smith discussed the enduring insights of Adam Smith with the Mercatus Center’s Shruti Rajagopalan.

Samantha Hoffman, Robert Atkinson and Jennifer Huddleston on American and Chinese approaches to tech policy in the 2020s

The final panel, with the Information Technology and Innovation Foundation’s President Robert Atkinson, the Australian Strategic Policy Institute’s Samantha Hoffman, and the American Action Forum’s Jennifer Huddleston, discussed the role that tech policy in the U.S. and China plays in the geopolitics of the 2020s.

The Biden Administration’s July 9 Executive Order on Promoting Competition in the American Economy is very much a mixed bag—some positive aspects, but many negative ones.

It will have some positive effects on economic welfare, to the extent it succeeds in lifting artificial barriers to competition that harm consumers and workers—such as by allowing direct sales of hearing aids in drug stores and by helping to eliminate unnecessary occupational licensing restrictions, to name just two examples.

But it will likely have substantial negative effects on economic welfare as well. Many aspects of the order appear to emphasize new regulation—such as net-neutrality requirements that may reduce investment in broadband by internet service providers—and to impose new regulatory requirements on airlines, pharmaceutical companies, digital platforms, banks, railways, shipping, and meat packers, among others. Arbitrarily imposing new rules in these areas, without a cost-benefit appraisal and a showing of market failure, threatens to reduce innovation and slow economic growth, hurting producers and consumers alike. (A careful review of specific regulatory proposals may shed greater light on the justifications for particular regulations.)

Antitrust-related proposals to challenge previously cleared mergers, and to impose new antitrust rulemaking, are likely to raise costly business uncertainty, to the detriment of businesses and consumers. They are a recipe for slower economic growth, not for vibrant competition.

An underlying problem with the order is that it is based on the false premise that competition has diminished significantly in recent decades and that “big is bad.” Economic analysis found in the February 2020 Economic Report of the President, and in other economic studies, debunks this flawed assumption.

In short, the order commits the fundamental mistake of proposing intrusive regulatory solutions for a largely nonexistent problem. Competitive issues are best handled through traditional well-accepted antitrust analysis, which centers on promoting consumer welfare and on weighing procompetitive efficiencies against anticompetitive harm on a case-by-case basis. This approach:

  1. Deals effectively with serious competitive problems; while at the same time
  2. Cabins error costs by taking into account all economically relevant considerations on a case-specific basis.

Rather than using an executive order to direct very specific regulatory approaches without a strong economic and factual basis, the Biden administration would have been better served by raising a host of competitive issues that merit possible study and investigation by expert agencies. Such an approach would have avoided imposing the costs of unwarranted regulation that unfortunately are likely to stem from the new order.

Finally, the order’s call for new regulations and the elimination of various existing legal policies will spawn matter-specific legal challenges and may, in many cases, not succeed in court. This will create unnecessary business uncertainty, on top of the public and private resources wasted on litigation.

President Joe Biden named his post-COVID-19 agenda “Build Back Better,” but his proposals to prioritize support for government-run broadband service “with less pressure to turn profits” and to “reduce Internet prices for all Americans” will slow broadband deployment and leave taxpayers with an enormous bill.

Policymakers should pay particular heed to this danger, amid news that the Senate is moving forward with considering a $1.2 trillion bipartisan infrastructure package, and that the Federal Communications Commission, the U.S. Commerce Department’s National Telecommunications and Information Administration, and the U.S. Agriculture Department’s Rural Utilities Service will coordinate on spending broadband subsidy dollars.

In order to ensure that broadband subsidies lead to greater buildout and adoption, policymakers must correctly understand the state of competition in broadband and not assume that increasing the number of firms in a market will necessarily lead to better outcomes for consumers or the public.

A recent white paper published by us here at the International Center for Law & Economics makes the case that concentration is a poor predictor of competitiveness, while offering alternative policies for reaching Americans who don’t have access to high-speed Internet service.

The data show that the state of competition in broadband is generally healthy. ISPs routinely invest billions of dollars per year in building, maintaining, and upgrading their networks to be faster, more reliable, and more available to consumers. FCC data show that average speeds available to consumers, as well as the number of competitors providing higher-speed tiers, have increased each year. And prices for broadband, as measured by price-per-Mbps, have fallen precipitously, dropping 98% over the last 20 years. None of this would make sense if the facile narrative about the absence of competition were true.
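To put that figure in annualized terms, here is a back-of-the-envelope calculation based only on the number cited above:

```python
# A 98% fall in price-per-Mbps over 20 years implies, compounded,
# an annual price decline of almost 18% every year for two decades.
total_decline = 0.98
years = 20

annual_factor = (1 - total_decline) ** (1 / years)
print(f"Implied annual change in price-per-Mbps: {annual_factor - 1:.1%}")
# -> about -17.8% per year
```

A sustained double-digit annual price decline sits awkwardly with the narrative of an uncompetitive market.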

In our paper, we argue that the real public policy issue for broadband isn’t curbing the pursuit of profits or adopting price controls, but making sure Americans have broadband access and encouraging adoption. In areas where it is very costly to build out broadband networks, like rural areas, there tend to be fewer firms in the market. But having only one or two ISPs available is far less of a problem than having none at all. Understanding the underlying market conditions and how subsidies can both help and hurt the availability and adoption of broadband is an important prerequisite to good policy.

The basic problem is that those who have decried the lack of competition in broadband often look at the number of ISPs in a given market to determine whether a market is competitive. But this is not how economists think of competition. Instead, economists look at competition as a dynamic process where changes in supply and demand factors are constantly pushing the market toward new equilibria.

In general, where a market is “contestable”—that is, where existing firms face potential competition from the threat of new entry—even just a single existing firm may have to act as if it faces vigorous competition. Such markets often have characteristics (e.g., price, quality, and level of innovation) similar or even identical to those with multiple existing competitors. This dynamic competition, driven by changes in technology or consumer preferences, ensures that such markets are regularly disrupted by innovative products and services—a process that does not always favor incumbents.

Proposals focused on increasing the number of firms providing broadband can actually reduce consumer welfare. Whether through overbuilding—by allowing new private entrants to free-ride on the initial investment by incumbent companies—or by going into the Internet business itself through municipal broadband, government subsidies can increase the number of firms providing broadband. But they can’t do so without costs—which include not just the cost of the subsidies themselves, which ultimately come from taxpayers, but also the reduced incentives for unsubsidized private firms to build out broadband in the first place.

If underlying supply and demand conditions in rural areas lead to a situation where only one provider can profitably exist, artificially adding another that is completely reliant on subsidies will likely just lead to the exit of the unsubsidized provider. Or, where a community already has municipal broadband, it is unlikely that a private ISP will want to enter and compete with a firm that doesn’t have to turn a profit.

A much better alternative for policymakers is to increase the demand for buildout through targeted user subsidies, while reducing regulatory barriers to entry that limit supply.

For instance, policymakers should consider offering connectivity vouchers to unserved households in order to stimulate broadband deployment and consumption. Current subsidy programs rely largely on subsidizing the supply side, but this requires the government to determine the who and where of entry. Connectivity vouchers would put the choice in the hands of consumers, while encouraging more buildout to areas that may currently be uneconomic to reach, whether because of low population density or because low adoption rates keep demand insufficient.

Local governments could also facilitate broadband buildout by reducing unnecessary regulatory barriers. Local building codes could adopt more connection-friendly standards. Local governments could also reduce the cost of access to existing poles and other infrastructure. Eligible Telecommunications Carrier (ETC) requirements could also be eliminated, because they deter potential providers from seeking funds for buildout (and don’t offer countervailing benefits).

Albert Einstein reportedly said: “If I were given one hour to save the planet, I would spend 59 minutes defining the problem, and one minute resolving it.” When it comes to encouraging broadband buildout, policymakers should make sure they are solving the right problem. The problem is that the cost of building out broadband to unserved areas is too high, or the demand too low—not that there are too few competitors.

Questions concerning the role that economic theory should play in policy decisions are nothing new. Milton Friedman famously drew a distinction between “positive” and “normative” economics, notably arguing that theoretical models were valuable despite their unrealistic assumptions. Kenneth Arrow and Gerard Debreu’s highly theoretical work on general equilibrium theory is widely acknowledged as one of the most important achievements of modern economics.

But for all their intellectual value and academic merit, the use of models to inform policy decisions is not uncontroversial. There is indeed a long and unfortunate history of influential economic models turning out to be poor depictions (and predictors) of real-world outcomes.

This raises a key question: should policymakers use economic models to inform their decisions and, if so, how? This post uses the economics of externalities to illustrate both the virtues and pitfalls of economic modeling. Throughout economic history, externalities have routinely been cited to support claims of market failure and calls for government intervention. However, as explained below, these fears have frequently failed to withstand empirical scrutiny.

Today, similar models are touted to support government intervention in digital industries. Externalities are notably said to prevent consumers from switching between platforms, allegedly leading to unassailable barriers to entry and deficient venture-capital investment. Unfortunately, as explained below, the models that underpin these fears are highly abstracted and far removed from underlying market realities.

Ultimately, this post argues that, while models provide a powerful way of thinking about the world, naïvely transposing them to real-world settings is misguided. This is not to say that models are useless—quite the contrary. Indeed, “falsified” models can shed powerful light on economic behavior that would otherwise prove hard to understand.

Bees

Fears surrounding economic externalities are as old as modern economics. For example, in the 1950s, economists routinely cited bee pollination as a source of externalities and, ultimately, market failure.

The basic argument was straightforward: bees and orchards provide each other with positive externalities. Bees cross-pollinate flowers, and orchards contain vast amounts of nectar upon which bees feed, thus improving honey yields. Accordingly, several famous economists argued that there was a market failure: bees fly where they please, and farmers cannot prevent bees from feeding on their blossoming flowers—allegedly causing underinvestment in both orchards and apiaries. This led James Meade to conclude:

[T]he apple-farmer provides to the beekeeper some of his factors free of charge. The apple-farmer is paid less than the value of his marginal social net product, and the beekeeper receives more than the value of his marginal social net product.

A finding echoed by Francis Bator:

If, then, apple producers are unable to protect their equity in apple-nectar and markets do not impute to apple blossoms their correct shadow value, profit-maximizing decisions will fail correctly to allocate resources at the margin. There will be failure “by enforcement.” This is what I would call an ownership externality. It is essentially Meade’s “unpaid factor” case.

It took more than 20 years and painstaking research by Steven Cheung to conclusively debunk these assertions. So how did economic agents overcome this “insurmountable” market failure?

The answer, it turns out, was extremely simple. While bees do fly where they please, the relative placement of beehives and orchards has a tremendous impact on both fruit and honey yields. This is partly because bees have a very limited mean foraging range (roughly 2-3 km). This left economic agents with ample scope to prevent free-riding.

Using these natural sources of excludability, they built a web of complex agreements that internalize the symbiotic virtues of beehives and fruit orchards. To cite Steven Cheung’s research:

Pollination contracts usually include stipulations regarding the number and strength of the colonies, the rental fee per hive, the time of delivery and removal of hives, the protection of bees from pesticide sprays, and the strategic placing of hives. Apiary lease contracts differ from pollination contracts in two essential aspects. One is, predictably, that the amount of apiary rent seldom depends on the number of colonies, since the farmer is interested only in obtaining the rent per apiary offered by the highest bidder. Second, the amount of apiary rent is not necessarily fixed. Paid mostly in honey, it may vary according to either the current honey yield or the honey yield of the preceding year.

But what of neighboring orchards? Wouldn’t these entail a more complex externality (i.e., could one orchard free-ride on agreements concluded between other orchards and neighboring apiaries)? Apparently not:

Acknowledging the complication, beekeepers and farmers are quick to point out that a social rule, or custom of the orchards, takes the place of explicit contracting: during the pollination period the owner of an orchard either keeps bees himself or hires as many hives per area as are employed in neighboring orchards of the same type. One failing to comply would be rated as a “bad neighbor,” it is said, and could expect a number of inconveniences imposed on him by other orchard owners. This customary matching of hive densities involves the exchange of gifts of the same kind, which apparently entails lower transaction costs than would be incurred under explicit contracting, where farmers would have to negotiate and make money payments to one another for the bee spillover.

In short, not only did the bee/orchard externality model fail, but it failed to account for extremely obvious counter-evidence. Even a rapid flip through the Yellow Pages (or, today, a search on Google) would have revealed a vibrant market for bee pollination. In short, the bee externalities, at least as presented in economic textbooks, were merely an economic “fable.” Unfortunately, they would not be the last.

The Lighthouse

Lighthouses provide another cautionary tale. Indeed, Henry Sidgwick, A.C. Pigou, John Stuart Mill, and Paul Samuelson all cited the externalities involved in the provision of lighthouse services as a source of market failure.

Here, too, the problem was allegedly straightforward. A lighthouse cannot prevent ships from free-riding on its services when they sail by it (i.e., it is all but impossible to determine whether a passing ship has paid its fees, and to turn off the lighthouse if it has not). Hence, there could be no efficient market for light dues (lighthouses were seen as a “public good”). As Paul Samuelson famously put it:

Take our earlier case of a lighthouse to warn against rocks. Its beam helps everyone in sight. A businessman could not build it for a profit, since he cannot claim a price from each user. This certainly is the kind of activity that governments would naturally undertake.

He added that:

[E]ven if the operators were able—say, by radar reconnaissance—to claim a toll from every nearby user, that fact would not necessarily make it socially optimal for this service to be provided like a private good at a market-determined individual price. Why not? Because it costs society zero extra cost to let one extra ship use the service; hence any ships discouraged from those waters by the requirement to pay a positive price will represent a social economic loss—even if the price charged to all is no more than enough to pay the long-run expenses of the lighthouse.

More than a century after it was first mentioned in economics textbooks, Ronald Coase finally laid the lighthouse myth to rest—rebutting Samuelson’s second claim in the process.

What piece of evidence had eluded economists for all those years? As Coase observed, contemporary economists had somehow overlooked the fact that large parts of the British lighthouse system were privately operated, and had been for centuries:

[T]he right to operate a lighthouse and to levy tolls was granted to individuals by Acts of Parliament. The tolls were collected at the ports by agents (who might act for several lighthouses), who might be private individuals but were commonly customs officials. The toll varied with the lighthouse and ships paid a toll, varying with the size of the vessel, for each lighthouse passed. It was normally a rate per ton (say 1/4d or 1/2d) for each voyage. Later, books were published setting out the lighthouses passed on different voyages and the charges that would be made.

In other words, lighthouses used a simple physical feature to create “excludability” and prevent free-riding. The main reason ships require lighthouses is to avoid hitting rocks when they make their way to a port. By tying port fees and light dues, lighthouse owners—aided by mild government-enforced property rights—could easily earn a return on their investments, thus disproving the lighthouse free-riding myth.

Ultimately, this meant that a large share of the British lighthouse system was privately operated throughout the 19th century, and this share would presumably have been more pronounced if government-run “Trinity House” lighthouses had not crowded out private investment:

The position in 1820 was that there were 24 lighthouses operated by Trinity House and 22 by private individuals or organizations. But many of the Trinity House lighthouses had not been built originally by them but had been acquired by purchase or as the result of the expiration of a lease.

Of course, this system was not perfect. Some ships (notably foreign ones that did not dock in the United Kingdom) might free-ride on this arrangement. It also entailed some level of market power. The ability to charge light dues meant that prices were higher than the “socially optimal” baseline of zero (the marginal cost of providing light is close to zero). It is worth noting, though, that tying port fees and light dues might also have decreased double marginalization, to the benefit of sailors.
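For readers unfamiliar with double marginalization, here is a minimal, stylized sketch of the idea, using entirely hypothetical numbers (a linear demand curve and a made-up upstream cost):

```python
# Stylized setup: linear demand P = 100 - Q, upstream marginal cost of 20.
A, C = 100.0, 20.0

# One integrated monopolist: a single markup over cost.
q_int = (A - C) / 2      # quantity: 40
p_int = A - q_int        # price: 60

# Two successive monopolists: the upstream firm picks its profit-maximizing
# wholesale price first, then the downstream firm adds a markup on top.
w = (A + C) / 2          # profit-maximizing wholesale price: 60
q_chain = (A - w) / 2    # quantity: 20
p_chain = A - q_chain    # price: 80

print(p_int, p_chain)    # 60.0 vs. 80.0: two markups, higher final price
```

Bundling the two charges into one, as tying port fees and light dues effectively does, removes the second markup and lowers the final price paid.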

Samuelson was particularly wary of the market power that went hand in hand with the private provision of public goods, including lighthouses:

Being able to limit a public good’s consumption does not make it a true-blue private good. For what, after all, are the true marginal costs of having one extra family tune in on the program? They are literally zero. Why then prevent any family which would receive positive pleasure from tuning in on the program from doing so?

However, as Coase explained, light dues represented only a tiny fraction of a ship’s costs. In practice, they were thus unlikely to affect market output meaningfully:

[W]hat is the gain which Samuelson sees as coming from this change in the way in which the lighthouse service is financed? It is that some ships which are now discouraged from making a voyage to Britain because of the light dues would in future do so. As it happens, the form of the toll and the exemptions mean that for most ships the number of voyages will not be affected by the fact that light dues are paid. There may be some ships somewhere which are laid up or broken up because of the light dues, but the number cannot be great, if indeed there are any ships in this category.

Samuelson’s critique also falls prey to the Nirvana fallacy pointed out by Harold Demsetz: markets might not be perfect, but neither is government intervention. Market power and imperfect appropriability are the two (paradoxical) pitfalls of the former; “white elephants,” underinvestment, and lack of competition (and the information it generates) tend to stem from the latter.

Which of these solutions is superior, in each case, is an empirical question that early economists had simply failed to consider—assuming instead that market failure was systematic in markets that present prima facie externalities. In other words, models were taken as gospel without any circumspection about their relevance to real-world settings.

The Tragedy of the Commons

Externalities were also said to undermine the efficient use of “common pool resources,” such as grazing lands, common irrigation systems, and fisheries—resources where one agent’s use diminishes that of others, and where exclusion is either difficult or impossible.

The most famous formulation of this problem is Garrett Hardin’s highly influential (over 47,000 citations) “tragedy of the commons.” Hardin cited the example of multiple herdsmen occupying the same grazing ground:

The rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another … But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.

In more technical terms, each economic agent purportedly exerts an unpriced negative externality on the others, thus leading to the premature depletion of common pool resources. Hardin extended this reasoning to other problems, such as pollution and allegations of global overpopulation.
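The incentive problem Hardin describes reduces to simple arithmetic: each herdsman captures the full value of an extra animal while bearing only a fraction of the grazing damage. A toy sketch with made-up numbers:

```python
# Toy numbers, chosen only to illustrate the divergence of incentives.
n = 10          # herdsmen sharing the pasture
value = 1.0     # private value of adding one more animal
damage = 3.0    # total grazing damage that extra animal causes

private_payoff = value - damage / n   # 1.0 - 0.3 = +0.7 -> add the animal
social_payoff = value - damage        # 1.0 - 3.0 = -2.0 -> the commons loses

# Every herdsman faces the same +0.7, so each keeps adding animals even
# though every addition leaves the group as a whole worse off.
```

As discussed below, Ostrom’s insight was that real communities often change these payoffs through rules, norms, and monitoring, rather than marching inevitably into ruin.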

Although Hardin hardly documented any real-world occurrences of this so-called tragedy, his policy prescriptions were unequivocal:

The most important aspect of necessity that we must now recognize, is the necessity of abandoning the commons in breeding. No technical solution can rescue us from the misery of overpopulation. Freedom to breed will bring ruin to all.

As with many other theoretical externalities, empirical scrutiny revealed that these fears were greatly overblown. In her Nobel Prize-winning work, Elinor Ostrom showed that economic agents often found ways to markedly mitigate these potential externalities. For example, mountain villages often implement rules and norms that limit the use of grazing grounds and wooded areas. Likewise, landowners across the world often set up “irrigation communities” that prevent agents from overusing water.

Along similar lines, Julian Morris and I conjecture that informal arrangements and reputational effects might mitigate opportunistic behavior in the standard essential patent industry.

These bottom-up solutions are certainly not perfect. Many commons institutions fail—for example, Elinor Ostrom documents several problematic fisheries, groundwater basins, and forests—although it is worth noting that government intervention was sometimes behind these failures. To cite but one example:

Several scholars have documented what occurred when the Government of Nepal passed the “Private Forest Nationalization Act” […]. Whereas the law was officially proclaimed to “protect, manage and conserve the forest for the benefit of the entire country”, it actually disrupted previously established communal control over the local forests. Messerschmidt (1986, p.458) reports what happened immediately after the law came into effect:

Nepalese villagers began freeriding — systematically overexploiting their forest resources on a large scale.

In any case, the question is not so much whether private institutions fail, but whether they do so more often than government intervention, be it regulation or property rights. In short, the “tragedy of the commons” is ultimately an empirical question: what works better in each case—government intervention, propertization, or emergent rules and norms?

More broadly, the key lesson is that it is wrong to blindly apply models while ignoring real-world outcomes. As Elinor Ostrom herself put it:

The intellectual trap in relying entirely on models to provide the foundation for policy analysis is that scholars then presume that they are omniscient observers able to comprehend the essentials of how complex, dynamic systems work by creating stylized descriptions of some aspects of those systems.

Dvorak Keyboards

In 1985, Paul David published an influential paper arguing that market failures undermined competition between the QWERTY and Dvorak keyboard layouts. This version of history then became a dominant narrative in the field of network economics, informing works by Joseph Farrell & Garth Saloner, as well as Jean Tirole.

The basic claim was that QWERTY users’ reluctance to switch toward the putatively superior Dvorak layout exerted a negative externality on the rest of the ecosystem (and a positive externality on other QWERTY users), thus preventing the adoption of a more efficient standard. As Paul David put it:

Although the initial lead acquired by QWERTY through its association with the Remington was quantitatively very slender, when magnified by expectations it may well have been quite sufficient to guarantee that the industry eventually would lock in to a de facto QWERTY standard. […]

Competition in the absence of perfect futures markets drove the industry prematurely into standardization on the wrong system — where decentralized decision making subsequently has sufficed to hold it.
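The mechanics behind this claim are easy to capture in a toy model of the kind the network-economics literature assumed (all numbers here are hypothetical and purely illustrative): a user’s payoff from a keyboard standard is its intrinsic quality plus a network benefit that grows with the standard’s installed share.

```python
def payoff(quality: float, share: float, network_weight: float = 2.0) -> float:
    """A user's payoff: intrinsic quality plus a share-based network benefit."""
    return quality + network_weight * share

qwerty = payoff(quality=1.0, share=0.95)  # 2.9 -> inferior but entrenched
dvorak = payoff(quality=1.2, share=0.05)  # 1.3 -> "superior" but little-used

# Taking everyone else's choice as given, each user does better sticking
# with QWERTY, so the allegedly inferior standard locks in.
```

On these assumptions, every user rationally sticks with the entrenched standard, and the allegedly superior one never gains traction. The trouble, as the next paragraphs explain, lay not in the internal logic of such models but in their empirical premises.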

Unfortunately, many of the above papers paid little to no attention to actual market conditions in the typewriter and keyboard layout industries. Years later, Stan Liebowitz and Stephen Margolis undertook a detailed analysis of the keyboard layout market. They almost entirely rejected any notion that QWERTY prevailed despite it being the inferior standard:

Yet there are many aspects of the QWERTY-versus-Dvorak fable that do not survive scrutiny. First, the claim that Dvorak is a better keyboard is supported only by evidence that is both scant and suspect. Second, studies in the ergonomics literature find no significant advantage for Dvorak that can be deemed scientifically reliable. Third, the competition among producers of typewriters, out of which the standard emerged, was far more vigorous than is commonly reported. Fourth, there were far more typing contests than just the single Cincinnati contest. These contests provided ample opportunity to demonstrate the superiority of alternative keyboard arrangements. That QWERTY survived significant challenges early in the history of typewriting demonstrates that it is at least among the reasonably fit, even if not the fittest that can be imagined.

In short, there was little to no evidence supporting the view that QWERTY inefficiently prevailed because of network effects. The falsification of this narrative also weakens broader claims that network effects systematically lead to either excess momentum or excess inertia in standardization. Indeed, it is tempting to characterize all network industries with heavily skewed market shares as resulting from market failure. Yet the QWERTY/Dvorak story suggests that such a conclusion would be premature.

Killzones, Zoom, and TikTok

If you are still reading at this point, you might think that contemporary scholars would know better than to base calls for policy intervention on theoretical externalities. Alas, nothing could be further from the truth.

For instance, a recent paper by Sai Kamepalli, Raghuram Rajan and Luigi Zingales conjectures that the interplay between mergers and network externalities discourages the adoption of superior independent platforms:

If techies expect two platforms to merge, they will be reluctant to pay the switching costs and adopt the new platform early on, unless the new platform significantly outperforms the incumbent one. After all, they know that if the entering platform’s technology is a net improvement over the existing technology, it will be adopted by the incumbent after merger, with new features melded with old features so that the techies’ adjustment costs are minimized. Thus, the prospect of a merger will dissuade many techies from trying the new technology.

Although this key behavioral assumption drives the results of the theoretical model, the paper presents no evidence that it holds in real-world settings. Admittedly, the paper does present evidence of reduced venture-capital investment after mergers involving large tech firms. But even on its own terms, this evidence simply does not support the authors’ behavioral assumption.

And this is no isolated example. Over the past couple of years, several scholars have called for more muscular antitrust intervention in networked industries. A common theme is that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in, thus raising barriers to entry for potential rivals (here, here, here).

But there are also countless counterexamples, where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.

Zoom is one of the most salient instances. As I have written previously:

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.

Along similar lines, Geoffrey Manne and Alec Stapp have put forward a multitude of other examples, including the demise of Yahoo, the disruption of early instant-messaging applications and websites, and MySpace’s rapid decline. In all these cases, outcomes do not match the predictions of theoretical models.

More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and its powerful algorithm are the most likely explanations for its success.

While these developments certainly do not disprove network effects theory, they eviscerate the common belief in antitrust circles that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. As in the previous examples, the question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet this question is systematically omitted from most policy discussions.

In Conclusion

My argument is not that models are without value. To the contrary, framing problems in economic terms—and simplifying them in ways that make them cognizable—enables scholars and policymakers to better understand where market failures might arise, and how these problems can be anticipated and solved by private actors. In other words, models alone cannot tell us that markets will fail, but they can direct inquiries and help us to understand why firms behave the way they do, and why markets (including digital ones) are organized in a given way.

In that respect, both the theoretical and empirical research cited throughout this post offer valuable insights for today’s policymakers.

For a start, as Ronald Coase argued in what is perhaps his most famous work, externalities (and market failure more generally) are a function of transaction costs. When these are low (relative to the value of a good), market failures are unlikely. This is perhaps clearest in the “Fable of the Bees” example. Given bees’ short foraging range, there were ultimately few real-world obstacles to writing contracts that internalized the mutual benefits of bees and orchards.

Perhaps more importantly, economic research sheds light on behavior that might otherwise be seen as anticompetitive. The rules and norms that bind farming/beekeeping communities, as well as users of common pool resources, could easily be analyzed as a cartel by naïve antitrust authorities. Yet externality theory suggests they play a key role in preventing market failure.

Along similar lines, mergers and acquisitions (as well as vertical integration, more generally) can reduce opportunism and other externalities that might otherwise undermine collaboration between firms (here, here and here). And much of the same is true for certain types of unilateral behavior. Tying video games to consoles (and pricing the console below cost) can help entrants overcome network externalities that might otherwise shield incumbents. Likewise, Google tying its proprietary apps to the open source Android operating system arguably enabled it to earn a return on its investments, thus overcoming the externality problem that plagues open source software.

All of this raises a tantalizing prospect that deserves far more attention than it is currently given in policy circles: authorities around the world are seeking to regulate the tech space. Draft legislation has notably been tabled in the United States, European Union and the United Kingdom. These draft bills would all make it harder for large tech firms to implement various economic hierarchies, including mergers and certain contractual arrangements.

This is highly paradoxical. If digital markets are indeed plagued by network externalities and high transaction costs, as critics allege, then preventing firms from adopting complex hierarchies—which have traditionally been seen as a way to solve externalities—is just as likely to exacerbate problems. In other words, like the economists of old cited above, today’s policymakers appear to be focusing too heavily on simple models that predict market failure, and far too little on the mechanisms that firms have put in place to thrive within this complex environment.

The bigger picture is that far more circumspection is required when using theoretical models in real-world policy settings. Indeed, as Harold Demsetz famously put it, the purpose of normative economics is not so much to identify market failures, but to help policymakers determine which of several alternative institutions will deliver the best outcomes for consumers:

This nirvana approach differs considerably from a comparative institution approach in which the relevant choice is between alternative real institutional arrangements. In practice, those who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient. Users of the comparative institution approach attempt to assess which alternative real institutional arrangement seems best able to cope with the economic problem […].


Lina Khan’s appointment as chair of the Federal Trade Commission (FTC) is a remarkable accomplishment. At 32 years old, she is the youngest chair ever. Her longstanding criticisms of the Consumer Welfare Standard and alignment with the neo-Brandeisean school of thought make her appointment a significant achievement for proponents of those viewpoints. 

Her appointment also comes as House Democrats prepare to mark up five bills designed to regulate Big Tech and, in the process, vastly expand the FTC’s powers. This expansion may combine with Khan’s appointment in ways that lawmakers weighing the bills have not yet anticipated.

This is a critical time for the FTC. It has lost a number of high-profile lawsuits and is preparing to expand its rulemaking powers to regulate things like employment contracts and businesses’ use of data. Khan has also argued in favor of additional rulemaking powers around “unfair methods of competition.”

As things stand, the FTC under Khan’s leadership is likely to push for more extensive regulatory powers, akin to those held by the Federal Communications Commission (FCC). But these expansions would be trivial compared to what is proposed by many of the bills currently being prepared for a June 23 mark-up in the House Judiciary Committee. 

The flagship bill—Rep. David Cicilline’s (D-R.I.) American Innovation and Choice Online Act—is described as a platform “non-discrimination” bill. I have already discussed what the real-world effects of this bill would likely be. Briefly, it would restrict platforms’ ability to offer richer, more integrated services at all, since those integrations could be challenged as “discrimination” that comes at the cost of would-be competitors’ offerings. Things like free shipping on Amazon Prime, pre-installed apps on iPhones, or even including links to Gmail and Google Calendar at the top of a Google Search page could be precluded under the bill’s terms; in each case, there is a potential competitor being undermined. 

In fact, the bill’s scope is so broad that some have argued that the FTC simply would not challenge “innocuous self-preferencing” like, say, Apple pre-installing Apple Music on iPhones. Economist Hal Singer has defended the proposals on the grounds that, “Due to limited resources, not all platform integration will be challenged.” 

But this shifts the focus to the FTC itself, and implies that it would have potentially enormous discretionary power under these proposals to enforce the law selectively. 

Companies found guilty of breaching the bill’s terms would be liable for civil penalties of up to 15 percent of annual U.S. revenue, a potentially significant sum. And though the Supreme Court recently ruled unanimously against the FTC’s power to levy civil fines unilaterally—a ruling the FTC opposed vociferously, and a power that may yet be restored by other means—there are two scenarios through which the agency could end up with extraordinarily extensive control over the platforms covered by the bill.

The first course is through selective enforcement. What Singer above describes as a positive—the fact that enforcers would just let “benign” violations of the law be—would mean that the FTC itself would have tremendous scope to choose which cases it brings, and might do so for idiosyncratic, politicized reasons.

This approach is common in countries with weak rule of law. Anti-corruption laws are frequently used to punish opponents of the regime in China, who probably are also corrupt, but are prosecuted because they have challenged the regime in some way. Hong Kong’s National Security law has also been used to target peaceful protestors and critical media thanks to its vague and overly broad drafting. 

Obviously, that’s far more sinister than what we’re talking about here. But these examples highlight how excessively broad laws applied at the enforcer’s discretion give the enforcer broad power to penalize defendants for other, unrelated things. Or, to quote Jay-Z: “Am I under arrest or should I guess some more? / ‘Well, you was doing 55 in a 54.’”

The second path would be to use these powers as leverage to get broad consent decrees to govern the conduct of covered platforms. These occur when a lawsuit is settled, with the defendant company agreeing to change its business practices under supervision of the plaintiff agency (in this case, the FTC). The Cambridge Analytica lawsuit ended this way, with Facebook agreeing to change its data-sharing practices under the supervision of the FTC. 

This path would mean the FTC creating bespoke, open-ended regulation for each covered platform. Like the first path, this could create significant scope for discretionary decision-making by the FTC and potentially allow FTC officials to impose their own, non-economic goals on these firms. And it would require costly monitoring of each firm subject to bespoke regulation to ensure that no breaches of that regulation occurred.

Khan, as a critic of the Consumer Welfare Standard, believes that antitrust ought to be used to pursue non-economic objectives, including “the dispersion of political and economic control.” She, and the FTC under her, may wish to use this discretionary power to prosecute firms that she feels are hurting society for unrelated reasons, such as because of political stances they have (or have not) taken.

Khan’s fellow commissioner, Rebecca Kelly Slaughter, has argued that antitrust should be “antiracist”; that “as long as Black-owned businesses and Black consumers are systematically underrepresented and disadvantaged, we know our markets are not fair”; and that the FTC should consider using its existing rulemaking powers to address racist practices. These may be desirable goals, but their application would require contentious value judgements that lawmakers may not want the FTC to make.

Khan herself has been less explicit about the goals she has in mind, but she has given some hints. In her essay “The Ideological Roots of America’s Market Power Problem,” Khan approvingly highlights former Associate Justice William O. Douglas’s account of:

“economic power as inextricably political. Power in industry is the power to steer outcomes. It grants outsized control to a few, subjecting the public to unaccountable private power—and thereby threatening democratic order. The account also offers a positive vision of how economic power should be organized (decentralized and dispersed), a recognition that forms of economic power are not inevitable and instead can be restructured.” [italics added]

Though I have focused on Cicilline’s flagship bill, others grant significant new powers to the FTC, as well. The data portability and interoperability bill doesn’t actually define what “data” is; it leaves it to the FTC to “define the term ‘data’ for the purpose of implementing and enforcing this Act.” And, as I’ve written elsewhere, data interoperability needs significant ongoing regulatory oversight to work at all, a responsibility that this bill also hands to the FTC. Even a move as apparently narrow as data portability will involve a significant expansion of the FTC’s powers and give it a greater role as an ongoing economic regulator.

It is concerning enough that this legislative package would prohibit conduct that is good for consumers, and that actually increases the competition faced by Big Tech firms. Congress should understand that it also gives extensive discretionary powers to an agency intent on using them to pursue broad, political goals. If Khan’s appointment as chair was a surprise, what her FTC does with the new powers given to her by Congress should not be.

In its June 21 opinion in NCAA v. Alston, a unanimous U.S. Supreme Court affirmed the 9th U.S. Circuit Court of Appeals and thereby upheld a district court injunction finding unlawful certain National Collegiate Athletic Association (NCAA) rules limiting the education-related benefits schools may make available to student athletes. The decision will come as no surprise to antitrust lawyers who heard the oral argument; the NCAA was portrayed as a monopsony cartel whose rules undermined competition by restricting compensation paid to athletes.

Alas, Alston demonstrates that seemingly “good facts” (including an apparently Scrooge-like defendant) can make very bad law. While superficially appearing to be a relatively straightforward application of Sherman Act rule-of-reason principles, the decision fails to come to grips with the relationship between the restraints before it and the successful provision of the NCAA’s joint-venture product—amateur intercollegiate sports. What’s worse, Associate Justice Brett Kavanaugh’s concurring opinion further muddies the court’s murky jurisprudential waters by signaling his view that the NCAA’s remaining compensation rules are anticompetitive and could be struck down in an appropriate case (“it is not clear how the NCAA can defend its remaining compensation rules”). Prospective plaintiffs may be expected to take the hint.

The Court’s Flawed Analysis

I previously commented on this then-pending case a few months ago:

In sum, the claim that antitrust may properly be applied to combat the alleged “exploitation” of college athletes by NCAA compensation regulations does not stand up to scrutiny. The NCAA’s rules that define the scope of amateurism may be imperfect, but there is no reason to think that empowering federal judges to second guess and reformulate NCAA athletic compensation rules would yield a more socially beneficial (let alone optimal) outcome. (Believing that the federal judiciary can optimally reengineer core NCAA amateurism rules is a prime example of the Nirvana fallacy at work.)  Furthermore, a Supreme Court decision affirming the 9th Circuit could do broad mischief by undermining case law that has accorded joint venturers substantial latitude to design the core features of their collective enterprise without judicial second-guessing.

Unfortunately, my concerns about a Supreme Court affirmance of the 9th Circuit were realized. Associate Justice Neil Gorsuch’s opinion for the court in Alston manifests a blinkered approach to the NCAA “monopsony” joint venture. To be sure, it cites and briefly discusses key Supreme Court joint venture holdings, including 2006’s Texaco v. Dagher. Nonetheless, it gives short shrift to the efficiency-based considerations that counsel presumptive deference to joint venture design rules that are key to the nature of a joint venture’s product.  

As a legal matter, the court felt obliged to defer to key district court findings not contested by the NCAA—including that the NCAA enjoys “monopsony power” in the student athlete labor market, and that the NCAA’s restrictions in fact decrease student athlete compensation “below the competitive level.”

However, even conceding these points, the court could have, but did not, take note of and assess the role of the restrictions under review in helping engender the enormous consumer benefits the NCAA confers upon consumers of its collegiate sports product. There is good reason to view those restrictions as an effort by the NCAA to address a negative externality that could diminish the attractiveness of the NCAA’s product for ultimate consumers, a result that would in turn reduce inter-brand competition.

As the amicus brief by antitrust economists (“Antitrust Economists Brief”) pointed out:

[T]he NCAA’s consistent and growing popularity reflects a product—”amateur sports” played by students and identified with the academic tradition—that continues to generate enormous consumer interest. Moreover, it appears without dispute that the NCAA, while in control of the design of its own athletic products, has preserved their integrity as amateur sports, notwithstanding the commercial success of some of them, particularly Division I basketball and Football Subdivision football. . . . Over many years, the NCAA has continually adjusted its eligibility and participation rules to prevent colleges from pursuing their own interests—which certainly can involve “pay to play”—in ways that would conflict with the procompetitive aims of the collaboration. In this sense, the NCAA’s amateurism rules are a classic example of addressing negative externalities and free riding that often are inherent or arise in the collaboration context.

The use of contractual restrictions (vertical restraints) to counteract free riding and other negative externalities generated in manufacturer-distributor interactions is well-recognized by antitrust courts. Although the restraints at issue in NCAA (and many other joint-venture situations) are horizontal in nature, not vertical, they may be just as important as other nonstandard contracts in aligning the incentives of member institutions to best satisfy ultimate consumers. Satisfying consumers, in turn, enhances inter-brand competition between the NCAA’s product and other rival forms of entertainment, including professional sports offerings.

Alan Meese made a similar point in a recent paper (discussing a possible analytical framework for the court’s then-imminent Alston analysis):

[Because] unchecked bidding for the services of student athletes could result in a market failure and suboptimal product quality, proof that the restraint reduces student athlete compensation below what an unbridled market would produce should not itself establish a prima facie case. Such evidence would instead be equally consistent with a conclusion that the restraint eliminates this market failure and restores compensation to optimal levels.

The court’s failure to address the externality justification was compounded by its handling of the rule of reason. First, in rejecting a truncated rule of reason with an initial presumption that the NCAA’s restraints involving student compensation are procompetitive, the court accepted that the NCAA’s monopsony power showed that its restraints “can (and in fact do) harm competition.” This assertion ignored the efficiency justification discussed above. As the Antitrust Economists’ Brief emphasized: 

[A]cting more like regulators, the lower courts treated the NCAA’s basic product design as inherently anticompetitive [so did the Supreme Court], pushing forward with a full rule of reason that sent the parties into a morass of inquiries that were not (and were never intended to be) structured to scrutinize basic product design decisions and their hypothetical alternatives. Because that inquiry was unrestrained and untethered to any input or output restraint, the application of the rule of reason in this case necessarily devolved into a quasi-regulatory inquiry, which antitrust law eschews.

Having decided that a “full” rule of reason analysis was appropriate, the Supreme Court, in effect, imposed a “least restrictive means” test on the restrictions under review, while purporting not to do so. (“We agree with the NCAA’s premise that antitrust law does not require businesses to use anything like the least restrictive means of achieving legitimate business purposes.”) The court concluded that “it was only after finding the NCAA’s restraints ‘patently and inexplicably stricter than is necessary’ to achieve the procompetitive benefits the league had demonstrated that the district court proceeded to declare a violation of the Sherman Act.” In effect, however, this statement deferred to the lower court’s second-guessing of the means the NCAA employed to preserve consumer demand, second-guessing the lower court engaged in without any empirical basis.

The Supreme Court also approved the district court’s rejection of the NCAA’s view of what amateurism requires. It stressed the district court’s findings that “the NCAA’s rules and restrictions on compensation have shifted markedly over time” (seemingly a reasonable reaction to changes in market conditions) and that the NCAA developed the restrictions at issue without any reference to “considerations of consumer demand” (a de facto regulatory mandate directed at the NCAA). The Supreme Court inexplicably dubbed these lower court actions “a straightforward application of the rule of reason.” These actions seem more like blind deference to rather arbitrary judicial second-guessing of the expert party with the greatest interest in satisfying consumer demand.

The Supreme Court ended its misbegotten commentary on “less restrictive alternatives” by first claiming that it agreed that “antitrust courts must give wide berth to business judgments before finding liability.” The court asserted that the district court honored this and other principles of judicial humility because it enjoined restraints on education-related benefits “only after finding that relaxing these restrictions would not blur the distinction between college and professional sports and thus impair demand – and only after finding that this course represented a significantly (not marginally) less restrictive means of achieving the same procompetitive benefits as the NCAA’s current rules.” This lower court finding once again was not based on an empirical analysis of procompetitive benefits under different sets of rules. It was little more than the personal opinion of a judge who lacked the NCAA’s knowledge of, and expertise in, the relevant markets. That the Supreme Court accepted it as an exercise in restrained judicial analysis is well-nigh inexplicable.

The Antitrust Economists’ Brief, unlike the Supreme Court, enunciated the correct approach to judicial rewriting of core NCAA joint venture rules:

The institutions that are members of the NCAA want to offer a particular type of athletic product—an amateur athletic product that they believe is consonant with their primary academic missions. By doing so, as th[e] [Supreme] Court has [previously] recognized [in its 1984 NCAA v. Board of Regents decision], they create a differentiated offering that widens consumer choice and enhances opportunities for student-athletes. NCAA, 468 U.S. at 102. These same institutions have drawn lines that they believe balance their desire to foster intercollegiate athletic competition with their overarching academic missions. Both the district court and the Ninth Circuit have now said that they may not do so, unless they draw those lines differently. Yet neither the district court nor the Ninth Circuit determined that the lines drawn reduce the output of intercollegiate athletics or ascertained whether their judicially-created lines would expand that output. That is not the function of antitrust courts, but of legislatures.                                                                                                   

Other Harms the Court Failed to Consider                    

Finally, the court failed to consider other harms that stem from a presumptive suspicion of NCAA restrictions on athletic compensation in general. The elimination of compensation rules would likely favor large, well-funded athletic programs over others, potentially undermining “competitive balance” among schools. (Think of an NCAA March Madness tournament in which “Cinderella stories” are eliminated because virtually all the talented players have been snapped up by big-name schools.) It could also, through the reallocation of income to “big name big sports” athletes who command a bidding premium, potentially reduce funding support for “minor” college sports that provide opportunities to a wide variety of student-athletes. This would disadvantage those athletes, undermine the future of “minor” sports, and quite possibly contribute to consumer disillusionment and unhappiness (think of the millions of parents of “minor sports” athletes).

What’s more, the existing rules allow many promising but non-superstar athletes to develop their skills over time, enhancing their ability to eventually compete at the professional level. (This may even be the case for some superstars, who may obtain greater long-term financial rewards by refining their talents and showcasing their skills for a year or two in college.) In addition, the current rules climate allows many student athletes who do not turn professional to develop personal connections that serve them well in their professional and personal lives, including connections derived from the “brand” of their university. (Think of wealthy and well-connected alumni who are ardent fans of their colleges’ athletic programs.) In a world without NCAA amateurism rules, the value of these experiences and connections could wither, to the detriment of athletes and consumers alike. (Consistent with my conclusion, economists Richard McKenzie and Dwight Lee have argued against the proposition that “college athletes are materially ‘underpaid’ and are ‘exploited’”.)   

This “parade of horribles” might appear unlikely in the short term. Nevertheless, in the course of time, the inability of the NCAA to control the attributes of its product, due to a changed legal climate, makes it all too real. This is especially the case in light of Justice Kavanaugh’s strong warning that other NCAA compensation restrictions are likely indefensible. (As he bluntly put it, venerable college sports “traditions alone cannot justify the NCAA’s decision to build a massive money-raising enterprise on the backs of student athletes who are not fairly compensated. . . . The NCAA is not above the law.”)

Conclusion

The Supreme Court’s misguided Alston decision fails to weigh the powerful efficiency justifications for the NCAA’s amateurism rules. The holding virtually invites lower courts to ignore efficiencies and to second-guess decisions that go to the heart of the NCAA’s joint venture product offering. The end result is likely to reduce consumer welfare and, quite possibly, the welfare of many student athletes as well. One would hope that Congress, if it chooses to address NCAA rules, will keep these dangers well in mind. A statutory change not directed solely at the NCAA, creating a rebuttable presumption of legality for restraints that go to the heart of a lawful joint venture, may merit serious consideration.

The Democratic leadership of the House Judiciary Committee has leaked the approach it plans to take to revise U.S. antitrust law and enforcement, with a particular focus on digital platforms.

Broadly speaking, the bills would: raise fees for larger mergers and increase appropriations to the FTC and DOJ; require data portability and interoperability; declare that large platforms can’t own businesses that compete with other businesses that use the platform; effectively ban large platforms from making any acquisitions; and generally declare that large platforms cannot preference their own products or services. 

All of these are ideas that have been discussed before. They are very much in line with the EU’s approach to competition, which places more regulation-like burdens on big businesses, and which is introducing a Digital Markets Act that mirrors the Democrats’ proposals. Some Republicans are reportedly supportive of the proposals, which is surprising, since the bills would give broad, discretionary powers to antitrust authorities controlled by Democrats who take an expansive view of antitrust enforcement as a way to achieve their other social and political goals. The proposals may also prove unpopular with consumers if, for example, they end up prohibiting popular features like the integration of Maps into relevant Google Search results.

The multi-bill approach here suggests that the committee is trying to throw as much at the wall as possible to see what sticks. It may reflect a lack of confidence among the proposers in their ability to get their proposals through wholesale, especially given that Amy Klobuchar’s CALERA bill in the Senate creates an alternative that, while still highly interventionist, does not create ex ante regulation of the Internet the same way these proposals do.

In general, the bills are misguided for three main reasons. 

One, they seek to make digital platforms into narrow conduits for other firms to operate on, ignoring the value created by platforms curating their own services by, for example, creating quality controls on entry (as Apple does on its App Store) or by integrating their services with related products (like, say, Google adding events from Gmail to users’ Google Calendars). 

Two, they ignore the procompetitive effects of digital platforms extending into each other’s markets and competing with each other there, in ways that often lead to far more intense competition—and better outcomes for consumers—than if the only firms that could compete with the incumbent platform were small startups.

Three, they ignore the importance of incentives for innovation. Platforms invest in new and better products when they can make money from doing so, and limiting their ability to do that means weakened incentives to innovate. Startups and their founders and investors are driven, in part, by the prospect of being acquired, often by the platforms themselves. Making those acquisitions more difficult, or even impossible, means removing one of the key ways startup founders can exit their firms, and hence one of the key rewards and incentives for starting an innovative new business. 

For more, our “Joint Submission of Antitrust Economists, Legal Scholars, and Practitioners” set out why many of the House Democrats’ assumptions about the state of the economy and antitrust enforcement were mistaken. And my post, “Buck’s “Third Way”: A Different Road to the Same Destination”, argued that House Republicans like Ken Buck were misguided in believing they could support some of the proposals and avoid the massive regulatory oversight that they said they rejected.

Platform Anti-Monopoly Act 

The flagship bill, introduced by Antitrust Subcommittee Chairman David Cicilline (D-R.I.), establishes a definition of “covered platform” used by several of the other bills. The measures would apply to platforms that have at least 50 million U.S.-based monthly active users, have a market capitalization of more than $600 billion, and are deemed a “critical trading partner” with the ability to restrict or impede the access that a “dependent business” has to its users or customers.

Cicilline’s bill would bar these covered platforms from being able to promote their own products and services over the products and services of competitors who use the platform. It also defines a number of other practices that would be regarded as discriminatory, including: 

  • Restricting or impeding “dependent businesses” from being able to access the platform or its software on the same terms as the platform’s own lines of business;
  • Conditioning access or status on purchasing other products or services from the platform; 
  • Using user data to support the platform’s own products in ways not extended to competitors; 
  • Restricting the platform’s commercial users from using or accessing data generated on the platform from their own customers;
  • Restricting platform users from uninstalling software pre-installed on the platform;
  • Restricting platform users from providing links to facilitate business off of the platform;
  • Preferencing the platform’s own products or services in search results or rankings;
  • Interfering with how a dependent business prices its products; 
  • Impeding a dependent business’ users from connecting to services or products that compete with those offered by the platform; and
  • Retaliating against users who raise concerns with law enforcement about potential violations of the act.

On a basic level, these provisions would prohibit lots of behavior that is benign and that can improve the quality of digital services for users. Apple pre-installing a Weather app on the iPhone would, for example, run afoul of these rules, and the rules as proposed could prohibit iPhones from coming with pre-installed apps at all. Instead, users would have to download each app manually, if indeed Apple were allowed to include the App Store itself pre-installed on the iPhone, given that the App Store competes with other would-be app stores.

Apart from the obvious reduction in the quality of services and convenience for users that this would involve, this kind of conduct (known as “self-preferencing”) is usually procompetitive. For example, self-preferencing allows platforms to compete with one another by using their strength in one market to enter a different one; Google’s Shopping results in the Search page increase the competition that Amazon faces, because it presents consumers with a convenient alternative when they’re shopping online for products. Similarly, Amazon’s purchase of the video-game streaming service Twitch, and the self-preferencing it does to encourage Amazon customers to use Twitch and support content creators on that platform, strengthens the competition that rivals like YouTube face. 

It also helps innovation, because it gives firms a reason to invest in services that would otherwise be unprofitable for them. Google invests in Android, and gives much of it away for free, because it can bundle Google Search into the OS and make money from that. If Google could not self-preference Google Search on Android, the open-source business model simply wouldn’t work: Google would not be able to make money from Android and would have to monetize it in other ways that might be less profitable, giving it less reason to invest in the operating system.

This behavior can also increase innovation by the competitors of these companies, both by prompting them to improve their products (as, for example, Google Android did with Microsoft’s mobile operating system offerings) and by growing the size of the customer base for products of this kind. For example, video games published by console manufacturers (like Nintendo’s Zelda and Mario games) are often blockbusters that grow the overall size of the user base for the consoles, increasing demand for third-party titles as well.

For more, check out “Against the Vertical Discrimination Presumption” by Geoffrey Manne, and Dirk Auer’s piece “On the Origin of Platforms: An Evolutionary Perspective”.

Ending Platform Monopolies Act 

Sponsored by Rep. Pramila Jayapal (D-Wash.), this bill would make it illegal for covered platforms to control lines of business that pose “irreconcilable conflicts of interest,” enforced through civil litigation powers granted to the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ).

Specifically, the bill targets lines of business that create “a substantial incentive” for the platform to advantage its own products or services over those of competitors that use the platform, or to exclude or disadvantage competing businesses from using the platform. The FTC and DOJ could potentially order that platforms divest lines of business that violate the act.

This bill targets conduct similar to that addressed by the previous bill, but through the forced separation of different lines of business. It also appears to go even further, seemingly implying that companies like Google could not even develop services like Google Maps or Chrome, because their existence would create such “substantial incentives” to self-preference them over the products of competitors.

Apart from the straightforward loss of innovation and product development this would involve, requiring every tech company to be narrowly focused on a single line of business would substantially entrench Big Tech incumbents, because it would make it impossible for them to extend into adjacent markets to compete with one another. For example, Apple could not develop a search engine to compete with Google under these rules, and Amazon would be forced to sell its video-streaming services that compete with Netflix and YouTube.

For more, check out Geoffrey Manne’s written testimony to the House Antitrust Subcommittee and “Platform Self-Preferencing Can Be Good for Consumers and Even Competitors” by Geoffrey and me. 

Platform Competition and Opportunity Act

Introduced by Rep. Hakeem Jeffries (D-N.Y.), this bill would bar covered platforms from making essentially any acquisitions at all. To be excluded from the ban on acquisitions, the platform would have to present “clear and convincing evidence” that the acquired business does not compete with the platform for any product or service, does not pose a potential competitive threat to the platform, and would not in any way enhance or help maintain the acquiring platform’s market position. 

The two main ways that founders and investors can make a return on a successful startup are to float the company in an IPO or to be acquired by another business. The latter, acquisition, is extremely important. Between 2008 and 2019, 90 percent of U.S. startup exits happened through acquisition. In a recent survey, half of current startup executives said they aimed to be acquired. One study found that countries that made it easier for firms to be taken over saw a 40-50 percent increase in VC activity, and that U.S. states that made acquisitions harder saw a 27 percent decrease in VC investment deals.

So this proposal would probably reduce investment in U.S. startups, since it would make it more difficult for them to be acquired, and would reduce innovation as a result. It would also reduce inter-platform competition by banning deals that allow firms to move into new markets, like the acquisition of Beats that helped Apple to build a Spotify competitor, or the deals that helped Google, Microsoft, and Amazon build cloud-computing services that all compete with each other. It could also reduce the competition faced by older industries, by preventing tech companies from buying firms that enable them to move into new markets—like Amazon’s acquisitions of health-care companies that it has used to build a health-care offering. Even Walmart’s acquisition of Jet.com, which it has used to build an Amazon competitor, could have been banned under this law if Walmart had had a higher market cap at the time.

For more, check out Dirk Auer’s piece “Facebook and the Pros and Cons of Ex Post Merger Reviews” and my piece “Cracking down on mergers would leave us all worse off”. 

ACCESS Act

The Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, sponsored by Rep. Mary Gay Scanlon (D-Pa.), would establish data portability and interoperability requirements for platforms. 

Under the terms of the legislation, covered platforms would be required to allow third parties to transfer data to their users or, with the user’s consent, to a competing business. It also would require platforms to facilitate compatible and interoperable communications with competing businesses. The bill directs the FTC to establish technical committees to promulgate the standards for portability and interoperability.

Data portability and interoperability involve trade-offs in terms of security and usability, and overseeing them can be extremely costly and difficult. In security terms, interoperability requirements prevent companies from using closed systems to protect users from hostile third parties. Mandatory openness means increasing—sometimes, substantially so—the risk of data breaches and leaks. In practice, that could mean users’ private messages or photos being leaked more frequently, or activity on a social media page that one user considers “their” private data, but that “belongs” to another user under the terms of use, being exported and publicized.

It can also make digital services buggier and less reliable, by requiring that they be built in a more “open” way that may be more prone to unanticipated software mismatches. A good example is Windows versus iOS: Windows is far more interoperable with third-party software than iOS is, but tends to be less stable as a result, and users often prefer the closed, stable system.

Interoperability requirements also entail ongoing regulatory oversight, to make sure data is being provided to third parties reliably. It’s difficult to build an app around another company’s data without assurance that the data will be available when users want it. For a requirement as broad as this bill’s, that could mean setting up quite a large new de facto regulator. 

In the UK, Open Banking (an interoperability requirement imposed on British retail banks) has suffered from significant service outages, and targets a level of uptime that many developers complain is too low for them to build products around. Nor has Open Banking yet led to any obvious competition benefits.

For more, check out Gus Hurwitz’s piece “Portable Social Media Aren’t Like Portable Phone Numbers” and my piece “Why Data Interoperability Is Harder Than It Looks: The Open Banking Experience”.

Merger Filing Fee Modernization Act

This bill, which mirrors language in the Endless Frontier Act recently passed by the U.S. Senate, would significantly raise filing fees for the largest mergers. Rather than the current cap of $280,000 for mergers valued at more than $500 million, the bill—sponsored by Rep. Joe Neguse (D-Colo.)—would assess fees of $2.25 million for mergers valued at more than $5 billion; $800,000 for those valued between $2 billion and $5 billion; and $400,000 for those between $1 billion and $2 billion.

Smaller mergers would actually see their filing fees cut: from $280,000 to $250,000 for those between $500 million and $1 billion; from $125,000 to $100,000 for those between $161.5 million and $500 million; and from $45,000 to $30,000 for those less than $161.5 million. 
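Taken together, the proposed changes amount to a simple tiered schedule keyed to transaction value. The minimal sketch below restates that schedule as a lookup, using only the figures reported above; the function name is illustrative, and how a deal valued exactly at a tier boundary would be treated is an assumption here.

    def proposed_merger_filing_fee(deal_value_usd: float) -> int:
        # Tiers are (value threshold, fee), highest first; figures are those
        # reported above. Boundary treatment is an assumption.
        tiers = [
            (5_000_000_000, 2_250_000),  # more than $5 billion
            (2_000_000_000, 800_000),    # $2 billion to $5 billion
            (1_000_000_000, 400_000),    # $1 billion to $2 billion
            (500_000_000, 250_000),      # $500 million to $1 billion
            (161_500_000, 100_000),      # $161.5 million to $500 million
        ]
        for threshold, fee in tiers:
            if deal_value_usd > threshold:
                return fee
        return 30_000  # deals below $161.5 million

    # A $6 billion deal would pay $2.25 million; a $300 million deal, $100,000.
    print(proposed_merger_filing_fee(6_000_000_000))  # 2250000
    print(proposed_merger_filing_fee(300_000_000))    # 100000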

In addition, the bill would appropriate $418 million to the FTC and $252 million to the DOJ’s Antitrust Division for Fiscal Year 2022. Most people in the antitrust world are generally supportive of more funding for the FTC and DOJ, although whether this is actually good depends on how the money is spent.

It’s hard to object if the money goes toward deepening the agencies’ capacities and knowledge: hiring and retaining higher-quality staff, with salaries that are more competitive with those offered by the private sector, and making greater efforts to study the effects of the antitrust laws and past cases on the economy. If it goes toward broadening the agencies’ activities, enabling them to pursue a more aggressive enforcement agenda and to enforce whatever of the above proposals make it into law, then it could be very harmful.

For more, check out my post “Buck’s “Third Way”: A Different Road to the Same Destination” and Thom Lambert’s post “Bad Blood at the FTC”.

Politico has released a cache of confidential Federal Trade Commission (FTC) documents in connection with a series of articles on the commission’s antitrust probe into Google Search a decade ago. The headline of the first piece in the series argues the FTC “fumbled the future” by failing to follow through on staff recommendations to pursue antitrust intervention against the company. 

But while the leaked documents shed interesting light on the inner workings of the FTC, they do very little to substantiate the case that the FTC dropped the ball when the commissioners voted unanimously not to bring an action against Google.

Drawn primarily from memos by the FTC’s lawyers, the Politico report purports to uncover key revelations that undermine the FTC’s decision not to sue Google. None of the revelations, however, provide evidence that Google’s behavior actually harmed consumers.

The report’s overriding claim—and the one most consistently forwarded by antitrust activists on Twitter—is that FTC commissioners wrongly sided with the agency’s economists (who cautioned against intervention) rather than its lawyers (who tenuously recommended very limited intervention). 

Indeed, the overarching narrative is that the lawyers knew what was coming and the economists took wildly inaccurate positions that turned out to be completely off the mark:

But the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed:

— They saw only “limited potential for growth” in ads that track users across the web — now the backbone of Google parent company Alphabet’s $182.5 billion in annual revenue.

— They expected consumers to continue relying mainly on computers to search for information. Today, about 62 percent of those queries take place on mobile phones and tablets, nearly all of which use Google’s search engine as the default.

— They thought rivals like Microsoft, Mozilla or Amazon would offer viable competition to Google in the market for the software that runs smartphones. Instead, nearly all U.S. smartphones run on Google’s Android and Apple’s iOS.

— They underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic.

The report thus asserts that:

The agency ultimately voted against taking action, saying changes Google made to its search algorithm gave consumers better results and therefore didn’t unfairly harm competitors.

That conclusion underplays what the FTC’s staff found during the probe. In 312 pages of documents, the vast majority never publicly released, staffers outlined evidence that Google had taken numerous steps to ensure it would continue to dominate the market — including emerging arenas such as mobile search and targeted advertising. [EMPHASIS ADDED]

What really emerges from the leaked memos, however, is analysis by both the FTC’s lawyers and economists infused with a healthy dose of humility. There were strong political incentives to bring a case. As one of us noted upon the FTC’s closing of the investigation: “It’s hard to imagine an agency under more pressure, from more quarters (including the Hill), to bring a case around search.” Yet FTC staff and commissioners resisted that pressure, because prediction is hard. 

Ironically, the very prediction errors that the agency’s staff cautioned against are now being held against them. Yet the claims that these errors (especially the economists’) systematically cut in one direction (i.e., against enforcement) and that all of their predictions were wrong are both wide of the mark. 

Decisions Under Uncertainty

In seeking to make an example out of the FTC economists’ inaccurate predictions, critics ignore that antitrust investigations in dynamic markets always involve a tremendous amount of uncertainty; false predictions are the norm. Accordingly, the key challenge for policymakers is not so much to predict correctly, but to minimize the impact of incorrect predictions.

Seen in this light, the FTC economists’ memo is far from the laissez-faire manifesto that critics make it out to be. Instead, it shows agency officials wrestling with uncertain market outcomes, and choosing a course of action under the assumption the predictions they make might indeed be wrong. 

Consider the following passage from FTC economist Ken Heyer’s memo:

The great American philosopher Yogi Berra once famously remarked “Predicting is difficult, especially about the future.” How right he was. And yet predicting, and making decisions based on those predictions, is what we are charged with doing. Ignoring the potential problem is not an option. So I will be reasonably clear about my own tentative conclusions and recommendation, recognizing that reasonable people, perhaps applying a somewhat different standard, may disagree. My recommendation derives from my read of the available evidence, combined with the standard I personally find appropriate to apply to Commission intervention. [EMPHASIS ADDED]

In other words, contrary to what many critics have claimed, it simply is not the case that the FTC’s economists based their recommendations on bullish predictions about the future that ultimately failed to transpire. Instead, they merely recognized that, in a dynamic and unpredictable environment, antitrust intervention requires both a clear-cut theory of anticompetitive harm and a reasonable probability that remedies can improve consumer welfare. According to the economists, those conditions were absent with respect to Google Search.

Perhaps more importantly, it is worth asking why the economists’ erroneous predictions matter at all. Do critics believe that developments the economists missed warrant a different normative stance today?

In that respect, it is worth noting that the economists’ skepticism appeared to have rested first and foremost on the speculative nature of the harms alleged and the difficulty associated with designing appropriate remedies. And yet, if anything, these two concerns appear even more salient today. 

Indeed, the remedies imposed against Google in the EU have not delivered the outcomes that enforcers expected (here and here). This could either be because the remedies were insufficient or because Google’s market position was not due to anticompetitive conduct. Similarly, there is still no convincing economic theory or empirical research to support the notion that exclusive pre-installation and self-preferencing by incumbents harm consumers, and a great deal of reason to think they benefit them (see, e.g., our discussions of the issue here and here). 

Against this backdrop, criticism of the FTC economists appears to be driven more by a prior assumption that intervention is necessary—and that it was and is disingenuous to think otherwise—than evidence that erroneous predictions materially affected the outcome of the proceedings.

To take one example, the fact that ad tracking grew faster than the FTC economists believed it would is no less consistent with vigorous competition—and Google providing a superior product—than with anticompetitive conduct on Google’s part. The same applies to the growth of mobile operating systems. Ditto the fact that no rival has managed to dislodge Google in its most important markets. 

In short, not only were the economist memos informed by the very prediction difficulties that critics are now pointing to, but critics have not shown that any of the staff’s (inevitably) faulty predictions warranted a different normative outcome.

Putting Erroneous Predictions in Context

So what were these faulty predictions, and how important were they? Politico asserts that “the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed,” tying this to the FTC’s failure to intervene against Google over “tactics that European regulators and the U.S. Justice Department would later label antitrust violations.” The clear message is that the current actions are presumptively valid, and that the FTC’s economists thwarted earlier intervention based on faulty analysis.

But it is far from clear that these faulty predictions would have justified taking a tougher stance against Google. One key question for antitrust authorities is whether they can be reasonably certain that more efficient competitors will be unable to dislodge an incumbent. This assessment is necessarily forward-looking. Framed this way, greater market uncertainty (for instance, because policymakers are dealing with dynamic markets) usually cuts against antitrust intervention.

This does not entirely absolve the FTC economists who made the faulty predictions. But it does suggest the right question is not whether the economists made mistakes, but whether virtually everyone did so. The latter would be evidence of uncertainty, and thus weigh against antitrust intervention.

In that respect, it is worth noting that the staff who recommended that the FTC intervene also misjudged the future of digital markets. For example, while Politico surmises that the FTC “underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic,” there is a case to be made that the FTC overestimated this power. If anything, Google’s continued growth has opened new niches in the online advertising space.

Pinterest provides a fitting example; despite relying heavily on Google for traffic, its ad-funded service has witnessed significant growth. The same is true of other vertical search engines like Airbnb, Booking.com, and Zillow. While we cannot know the counterfactual, the vertical search industry has certainly not been decimated by Google’s “monopoly”; quite the opposite. Unsurprisingly, this has coincided with a significant decrease in the cost of online advertising, and the growth of online advertising relative to other forms.

Politico asserts not only that the economists’ market share and market power calculations were wrong, but that the lawyers knew better:

The economists, relying on data from the market analytics firm Comscore, found that Google had only limited impact. They estimated that between 10 and 20 percent of traffic to those types of sites generally came from the search engine.

FTC attorneys, though, used numbers provided by Yelp and found that 92 percent of users visited local review sites from Google. For shopping sites like eBay and TheFind, the referral rate from Google was between 67 and 73 percent.

This compares apples and oranges, or maybe oranges and grapefruit. The economists’ data, from Comscore, applied to vertical search overall. They explicitly noted that shares for particular sites could be much higher or lower: for comparison shopping, for example, “ranging from 56% to less than 10%.” This, of course, highlights a problem with the data provided by Yelp, et al.: it concerns only the websites of companies complaining about Google, not the overall flow of traffic for vertical search.

But the more important point is that none of the data discussed in the memos represents the overall flow of traffic for vertical search. Take Yelp, for example. According to the lawyers’ memo, 92 percent of Yelp searches were referred from Google. Only, that’s not true. We know it’s not true because, as Yelp CEO Jeremy Stoppelman pointed out around this time in Yelp’s 2012 Q2 earnings call:

When you consider that 40% of our searches come from mobile apps, there is quite a bit of un-monetized mobile traffic that we expect to unlock in the near future.

The numbers being analyzed by the FTC staff were apparently limited to referrals to Yelp’s website from browsers. But is there any reason to think that is the relevant market, or the relevant measure of customer access? Certainly there is nothing in the staff memos to suggest they considered the full scope of the market very carefully here. Indeed, the footnote in the lawyers’ memo presenting the traffic data is offered in support of this claim:

Vertical websites, such as comparison shopping and local websites, are heavily dependent on Google’s web search results to reach users. Thus, Google is in the unique position of being able to “make or break any web-based business.”

It’s plausible that vertical search traffic is “heavily dependent” on Google Search, but the numbers offered in support of that simply ignore the (then) 40 percent of traffic that Yelp acquired through its own mobile app, with no Google involvement at all. In any case, it is also notable that, while there are still somewhat fewer app users than web users (although the number has consistently increased), Yelp’s app users view significantly more pages than its website users do — 10 times as many in 2015, for example.
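A quick back-of-the-envelope calculation shows how much the browser-only framing matters. The sketch below is illustrative only: it assumes the 40 percent app share and the 92 percent browser-referral figure describe the same period, and it treats “visits” and “searches” interchangeably.

    # Back-of-the-envelope reconciliation of the two figures (illustrative;
    # assumes the app share and the browser-referral share are contemporaneous).
    app_share = 0.40                  # Yelp searches arriving via its mobile app (no Google)
    web_share = 1 - app_share         # Yelp searches arriving via its website
    google_web_referral_rate = 0.92   # share of *website* traffic referred by Google

    # Google's share of Yelp's total traffic, app and web combined
    google_share_of_total = google_web_referral_rate * web_share
    print(f"{google_share_of_total:.0%}")  # prints "55%"

On those assumptions, even taking the lawyers’ 92 percent figure at face value, Google accounted for roughly 55 percent of Yelp’s total traffic once app usage is counted in, a materially different picture of dependence.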

Also noteworthy is that, for whatever speculative harm Google might be able to visit on the company, at the time of the FTC’s analysis Yelp’s local ad revenue was consistently increasing — by 89% in Q3 2012. And that was without any ad revenue coming from its app (display ads arrived on Yelp’s mobile app in Q1 2013, a few months after the staff memos were written and just after the FTC closed its Google Search investigation). 

In short, the search-engine industry is extremely dynamic and unpredictable. Contrary to what many have surmised from the FTC staff memo leaks, this cuts against antitrust intervention, not in favor of it.

The FTC Lawyers’ Weak Case for Prosecuting Google

At the same time, although not discussed by Politico, the lawyers’ memo also contains errors, suggesting that arguments for intervention were also (inevitably) subject to erroneous prediction.

Among other things, the FTC attorneys’ memo argued that large upfront investments were required to develop cutting-edge algorithms, and that these effectively shielded Google from competition. The memo cites the following as a barrier to entry:

A search engine requires algorithmic technology that enables it to search the Internet, retrieve and organize information, index billions of regularly changing web pages, and return relevant results instantaneously that satisfy the consumer’s inquiry. Developing such algorithms requires highly specialized personnel with high levels of training and knowledge in engineering, economics, mathematics, sciences, and statistical analysis.

If there are barriers to entry in the search-engine industry, algorithms do not seem to be the source. While their market shares may be smaller than Google’s, rival search engines like DuckDuckGo and Bing have been able to enter and gain traction; it is difficult to say that algorithmic technology has proven a barrier to entry. It may be hard to do well, but it certainly has not proved an impediment to new firms entering and developing workable and successful products. Indeed, some extremely successful companies have entered into similar advertising markets on the backs of complex algorithms, notably Instagram, Snapchat, and TikTok. All of these compete with Google for advertising dollars.

The FTC’s legal staff also failed to see that Google would face serious competition in the rapidly growing voice assistant market. In other words, even its search-engine “moat” is far less impregnable than it might at first appear.

Moreover, as Ben Thompson argues in his Stratechery newsletter: 

The Staff memo is completely wrong too, at least in terms of the potential for their proposed remedies to lead to any real change in today’s market. This gets back to why the fundamental premise of the Politico article, along with much of the antitrust chatter in Washington, misses the point: Google is dominant because consumers like it.

This difficulty was deftly highlighted by Heyer’s memo:

If the perceived problems here can be solved only through a draconian remedy of this sort, or perhaps through a remedy that eliminates Google’s legitimately obtained market power (and thus its ability to “do evil”), I believe the remedy would be disproportionate to the violation and that its costs would likely exceed its benefits. Conversely, if a remedy well short of this seems likely to prove ineffective, a remedy would be undesirable for that reason. In brief, I do not see a feasible remedy for the vertical conduct that would be both appropriate and effective, and which would not also be very costly to implement and to police. [EMPHASIS ADDED]

Of course, we now know that this turned out to be a huge issue with the EU’s competition cases against Google. The remedies in both the EU’s Google Shopping and Android decisions were severely criticized by rival firms and consumer-defense organizations (here and here), but were ultimately upheld, in part because even the European Commission likely saw more forceful alternatives as disproportionate.

And in the few places where the legal staff concluded that Google’s conduct may have caused harm, there is good reason to think that their analysis was flawed.

Google’s ‘revenue-sharing’ agreements

It should be noted that neither the lawyers nor the economists at the FTC were particularly bullish on bringing suit against Google. In most areas of the investigation, neither recommended that the commission pursue a case. But one of the most interesting revelations from the recent leaks is that FTC lawyers did advise the commission’s leadership to sue Google over revenue-sharing agreements that called for it to pay Apple and other carriers and manufacturers to pre-install its search bar on mobile devices:

FTC staff urged the agency’s five commissioners to sue Google for signing exclusive contracts with Apple and the major wireless carriers that made sure the company’s search engine came pre-installed on smartphones.

The lawyers’ stance is surprising, and, despite actions subsequently brought by the EU and DOJ on similar claims, a difficult one to countenance. 

To a first approximation, this behavior is precisely what antitrust law seeks to promote: we want companies to compete aggressively to attract consumers. This conclusion is in no way altered when competition is “for the market” (in this case, firms bidding for exclusive placement of their search engines) rather than “in the market” (i.e., equally placed search engines competing for eyeballs).

Competition for exclusive placement has several important benefits. For a start, revenue-sharing agreements effectively subsidize consumers’ mobile device purchases. As Brian Albrecht aptly puts it:

This payment from Google means that Apple can lower its price to better compete for consumers. This is standard; some of the payment from Google to Apple will be passed through to consumers in the form of lower prices.

This finding is not new. For instance, Ronald Coase famously argued that the Federal Communications Commission (FCC) was wrong to ban the broadcasting industry’s equivalent of revenue-sharing agreements, so-called payola:

[I]f the playing of a record by a radio station increases the sales of that record, it is both natural and desirable that there should be a charge for this. If this is not done by the station and payola is not allowed, it is inevitable that more resources will be employed in the production and distribution of records, without any gain to consumers, with the result that the real income of the community will tend to decline. In addition, the prohibition of payola may result in worse record programs, will tend to lessen competition, and will involve additional expenditures for regulation. The gain which the ban is thought to bring is to make the purchasing decisions of record buyers more efficient by eliminating “deception.” It seems improbable to me that this problematical gain will offset the undoubted losses which flow from the ban on Payola.

Applying this logic to Google Search, it is clear that a ban on revenue-sharing agreements would merely lead both Google and its competitors to attract consumers via alternative means. For Google, this might involve “complete” vertical integration into the mobile phone market, rather than the open-licensing model that underpins the Android ecosystem. Valuable specialization may be lost in the process.

Moreover, from Apple’s standpoint, Google’s revenue-sharing agreements are profitable only to the extent that consumers actually like Google’s products. If it turns out they don’t, Google’s payments to Apple may be outweighed by lower iPhone sales. It is thus unlikely that these agreements significantly undermined users’ experience. To the contrary, Apple’s testimony before the European Commission suggests that “exclusive” placement of Google’s search engine was mostly driven by consumer preferences (as the FTC economists’ memo points out):

Apple would not offer simultaneous installation of competing search or mapping applications. Apple’s focus is offering its customers the best products out of the box while allowing them to make choices after purchase. In many countries, Google offers the best product or service … Apple believes that offering additional search boxes on its web browsing software would confuse users and detract from Safari’s aesthetic. Too many choices lead to consumer confusion and greatly affect the ‘out of the box’ experience of Apple products.

Similarly, Kevin Murphy and Benjamin Klein have shown that exclusive contracts intensify competition for distribution. In other words, absent theories of platform envelopment that are arguably inapplicable here, competition for exclusive placement would lead competing search engines to up their bids, ultimately lowering the price of mobile devices for consumers.

Indeed, this revenue-sharing model was likely essential to spur the development of Android in the first place. Without this prominent placement of Google Search on Android devices (notably thanks to revenue-sharing agreements with original equipment manufacturers), Google would likely have been unable to monetize the investment it made in the open source—and thus freely distributed—Android operating system. 

In short, Politico and the FTC legal staff do little to show that Google’s revenue-sharing payments excluded rivals that were, in fact, as efficient. In other words, Bing and Yahoo’s failure to gain traction may simply be the result of inferior products and cost structures. Critics thus fail to show that Google’s behavior harmed consumers, which is the touchstone of antitrust enforcement.

Self-preferencing

Another finding critics claim as important is that FTC leadership declined to bring suit against Google for preferencing its own vertical search services (this information had already been partially leaked by the Wall Street Journal in 2015). Politico’s framing implies this was a mistake:

When Google adopted one algorithm change in 2011, rival sites saw significant drops in traffic. Amazon told the FTC that it saw a 35 percent drop in traffic from the comparison-shopping sites that used to send it customers.

The focus on this claim is somewhat surprising. Even the leaked FTC legal staff memo found this theory of harm had little chance of standing up in court:

Staff has investigated whether Google has unlawfully preferenced its own content over that of rivals, while simultaneously demoting rival websites…. 

…Although it is a close call, we do not recommend that the Commission proceed on this cause of action because the case law is not favorable to our theory, which is premised on anticompetitive product design, and in any event, Google’s efficiency justifications are strong. Most importantly, Google can legitimately claim that at least part of the conduct at issue improves its product and benefits users. [EMPHASIS ADDED]

More importantly, as one of us has argued elsewhere, the underlying problem lies not with Google, but with a standard asset-specificity trap:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control…. 

…It was entirely predictable, and should have been expected, that Google’s algorithm would evolve. It was also entirely predictable that it would evolve in ways that could diminish or even tank Foundem’s traffic. As one online marketing/SEO expert puts it: On average, Google makes about 500 algorithm changes per year. 500!….

…In the absence of an explicit agreement, should Google be required to make decisions that protect a dependent company’s “asset-specific” investments, thus encouraging others to take the same, excessive risk? 

Even if consumers happily visited rival websites when they were higher-ranked and traffic subsequently plummeted when Google updated its algorithm, that drop in traffic does not amount to evidence of misconduct. To hold otherwise would be to grant these rivals a virtual entitlement to the state of affairs that exists at any given point in time. 

Indeed, there is good reason to believe Google’s decision to favor its own content over that of other sites is procompetitive. Beyond determining and ensuring relevance, Google surely has the prerogative to compete vigorously and decide how to design its products to keep up with a changing market. In this case, that means designing, developing, and offering its own content in ways that partially displace the original “ten blue links” design of its search results page and instead offer its own answers to users’ queries.

Competitor Harm Is Not an Indicator of the Need for Intervention

Some of the other information revealed by the leak is even more tangential, such as that the FTC ignored complaints from Google’s rivals:

Amazon and Facebook privately complained to the FTC about Google’s conduct, saying their business suffered because of the company’s search bias, scraping of content from rival sites and restrictions on advertisers’ use of competing search engines. 

Amazon said it was so concerned about the prospect of Google monopolizing the search advertising business that it willingly sacrificed revenue by making ad deals aimed at keeping Microsoft’s Bing and Yahoo’s search engine afloat.

But complaints from rivals are at least as likely to stem from vigorous competition as from anticompetitive exclusion. This goes to a core principle of antitrust enforcement: antitrust law seeks to protect competition and consumer welfare, not rivals. Competition will always lead to winners and losers. Antitrust law protects this process and (at least theoretically) ensures that rivals cannot manipulate enforcers to safeguard their economic rents. 

This explains why Frank Easterbrook—in his seminal work on “The Limits of Antitrust”—argued that enforcers should be highly suspicious of complaints lodged by rivals:

Antitrust litigation is attractive as a method of raising rivals’ costs because of the asymmetrical structure of incentives…. 

…One line worth drawing is between suits by rivals and suits by consumers. Business rivals have an interest in higher prices, while consumers seek lower prices. Business rivals seek to raise the costs of production, while consumers have the opposite interest…. 

…They [antitrust enforcers] therefore should treat suits by horizontal competitors with the utmost suspicion. They should dismiss outright some categories of litigation between rivals and subject all such suits to additional scrutiny.

Google’s competitors spent millions pressuring the FTC to bring a case against the company. But why should it be a failing for the FTC to resist such pressure? Indeed, as then-commissioner Tom Rosch admonished in an interview following the closing of the case:

They [Google’s competitors] can darn well bring [a case] as a private antitrust action if they think their ox is being gored instead of free-riding on the government to achieve the same result.

Not that they would likely win such a case. Google’s introduction of specialized shopping results (via the Google Shopping box) likely enabled several retailers to bypass the Amazon platform, thus increasing competition in the retail industry. Although this may have temporarily reduced Amazon’s traffic and revenue (Amazon’s sales have grown dramatically since then), it is exactly the outcome that antitrust laws are designed to protect.

Conclusion

When all is said and done, Politico’s revelations provide a rarely glimpsed look into the complex dynamics within the FTC, which many wrongly imagine to be a monolithic agency. Put simply, the FTC’s commissioners, lawyers, and economists often disagree vehemently about the appropriate course of conduct. This is a good thing. As in many other walks of life, having a market for ideas is a sure way to foster sound decision making.

But in the final analysis, what the revelations do not show is that the FTC’s market for ideas failed consumers a decade ago when it declined to bring an antitrust suit against Google. They thus do little to cement the case for antitrust intervention—whether a decade ago, or today.

Critics of big tech companies like Google and Amazon are increasingly focused on the supposed evils of “self-preferencing.” This refers to when digital platforms like Amazon Marketplace or Google Search, which connect competing services with potential customers or users, also offer (and sometimes prioritize) their own in-house products and services. 

The objection, raised by several members and witnesses during a Feb. 25 hearing of the House Judiciary Committee’s antitrust subcommittee, is that it is unfair to the third parties that use those sites for the platform’s owner to enjoy special competitive advantages. Is it fair, for example, for Amazon to use the data it gathers from its service to design new products if third-party merchants can’t access the same data? This seemingly intuitive complaint was the basis for the European Commission’s landmark case against Google.

But we cannot assume that something is bad for competition just because it is bad for certain competitors. A lot of unambiguously procompetitive behavior, like cutting prices, also tends to make life difficult for competitors. The same is true when a digital platform provides a service that is better than alternatives provided by the site’s third-party sellers. 

It’s probably true that Amazon’s access to customer search and purchase data can help it spot products it can undercut with its own versions, driving down prices. But that’s not unusual; most retailers do this, many to a much greater extent than Amazon. For example, you can buy AmazonBasics batteries for less than half the price of branded alternatives, and they’re pretty good.

There’s no doubt this is unpleasant for merchants that have to compete with these offerings. But it is also no different from having to compete with more efficient rivals who have lower costs or better insight into consumer demand. Copying products and seeking ways to offer them with better features or at a lower price, which critics of self-preferencing highlight as a particular concern, has always been a fundamental part of market competition—indeed, it is the primary way competition occurs in most markets. 

Store-branded versions of iPhone cables and Nespresso pods are certainly inconvenient for those companies, but they offer consumers cheaper alternatives. Where such copying may be problematic (say, by deterring investment in product innovation), the law awards and enforces patents and copyrights to reward novel discoveries and creative works, and trademarks to protect brand identity. But where a company holds no such intellectual property, this is simply how competition works.

The fundamental question is “what benefits consumers?” Services like Yelp object that they cannot compete with Google when Google embeds its Google Maps box in Google Search results, while Yelp cannot do the same. But for users, the Maps box adds valuable information to the results page, making it easier to get what they want. Google is not making Yelp worse by making its own product better. Should it have to refrain from offering services that benefit its users because doing so might make competing products comparatively less attractive?

Self-preferencing also enables platforms to promote their offerings in other markets, which is often how large tech companies compete with each other. Amazon has a photo-hosting app that competes with Google Photos and Apple’s iCloud. It recently emailed its customers to promote it. That is undoubtedly self-preferencing, since other services cannot market themselves to Amazon’s customers like this, but if it makes customers aware of an alternative they might not have otherwise considered, that is good for competition. 

This kind of behavior also allows companies to invest in offering services inexpensively, or for free, that they intend to monetize by preferencing their other, more profitable products. For example, Google invests in Android’s operating system and gives much of it away for free precisely because it can encourage Android customers to use the profitable Google Search service. Despite claims to the contrary, it is difficult to see this sort of cross-subsidy as harmful to consumers.

Self-preferencing can even be good for competing services, including third-party merchants. In many cases, it expands the size of their potential customer base. For example, blockbuster video games released by Sony and Microsoft increase demand for games by other publishers because they increase the total number of people who buy PlayStations and Xboxes. This effect is clear on Amazon’s Marketplace, which has grown enormously for third-party merchants even as Amazon has increased the number of its own store-brand products on the site. By making the Marketplace more attractive to shoppers, Amazon’s own offerings benefit third-party sellers as well.

All platforms are open or closed to varying degrees. Retail “platforms,” for example, exist on a spectrum on which Craigslist is more open and neutral than eBay, which is more so than Amazon, which is itself relatively more so than, say, Walmart.com. Each position on this spectrum offers its own benefits and trade-offs for consumers. Indeed, some customers’ biggest complaint against Amazon is that it is too open, filled with third parties who leave fake reviews, offer counterfeit products, or have shoddy returns policies. Part of the role of the site is to try to correct those problems by making better rules, excluding certain sellers, or just by offering similar options directly. 

Regulators and legislators often act as if the more open and neutral, the better, but customers have repeatedly shown that they often prefer less open, less neutral options. And critics of self-preferencing frequently find themselves arguing against behavior that improves consumer outcomes, because it hurts competitors. But that is the nature of competition: what’s good for consumers is frequently bad for competitors. If we have to choose, it’s consumers who should always come first.

In current discussions of technology markets, few words are heard more often than “platform.” Initial public offering (IPO) prospectuses use “platform” to describe a service that is bound to dominate a digital market. Antitrust regulators use “platform” to describe a service that dominates a digital market or threatens to do so. In either case, “platform” denotes power over price. For investors, that implies exceptional profits; for regulators, that implies competitive harm.

Conventional wisdom holds that platforms enjoy high market shares, protected by high barriers to entry, which yield high returns. This simple logic drives the market to attach dramatically high valuations to dramatically unprofitable businesses. It likewise drives regulators’ eagerness to intervene in digital platform markets characterized by declining prices, increased convenience, and expanded variety, often at zero out-of-pocket cost. In both cases, “burning cash” today is understood as the path to market dominance and the ability to extract a premium from consumers in the future.

This logic is usually wrong. 

The Overlooked Basics of Platform Economics

To appreciate this perhaps surprising point, it is necessary to go back to the increasingly overlooked basics of platform economics. A platform can refer to any service that matches two complementary populations. A search engine matches advertisers with consumers, an online music service matches performers and labels with listeners, and a food-delivery service matches restaurants with home diners. A platform benefits everyone by facilitating transactions that otherwise might never have occurred.

A platform’s economic value derives from its ability to lower transaction costs by funneling a multitude of individual transactions into a single convenient hub. In pursuit of minimum costs and maximum gains, users on one side of the platform will tend to favor the most popular platforms that offer the largest number of users on the other side of the platform. (There are partial exceptions to this rule when users value being matched with certain types of other users, rather than just with more users.) These “network effects” mean that any successful platform market will always converge toward a handful of winners. This positive feedback effect drives investors’ exuberance and regulators’ concerns.
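
The feedback loop can be put in stylized form. The notation below is purely illustrative (it is not drawn from any particular model in this post): each user’s value from joining rises with the number of users on the other side.

```latex
% Stylized two-sided participation values (illustrative, assumed notation).
% u_A: net value to a side-A user; n_B: number of side-B users;
% gamma_A > 0: strength of the cross-side network effect; p_A: price to side A.
\[
u_A = v_A + \gamma_A n_B - p_A ,
\qquad
u_B = v_B + \gamma_B n_A - p_B .
\]
```

With both cross-side terms positive, growth on one side raises participation on the other, which in turn feeds back again; that is the positive feedback that concentrates platform markets.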

There is a critical point, however, that often seems to be overlooked.

Market share only translates into market power to the extent the incumbent is protected against entry within some reasonable time horizon. If Warren Buffett’s moat requirement is not met, market share is immaterial. If XYZ.com owns 100% of the online pet-food delivery market but the costs of entry are trivial, then its market power is negligible. There is another important limiting principle. In platform markets, the depth of the moat depends not only on competitors’ costs to enter the market, but also on users’ costs of switching from one platform to another or alternating between multiple platforms. If users can easily hop across platforms, then market share cannot confer market power given the continuous threat of user defection. Put differently: churn limits power over price.

This is why, contrary to natural intuition, a platform market consisting of only a few leaders can still be intensely competitive, keeping prices low (down to and including $0). It is often asserted, however, that users are typically locked into the dominant platform and therefore face high switching costs, implicitly satisfying the moat requirement. If that were true, the “high churn” scenario would be a theoretical curiosity and a leading platform’s high market share would be a reliable signal of market power. In fact, this common assumption likely describes the atypical case.

AWS and the Cloud Data-Storage Market

This point can be illustrated by considering the cloud data-storage market. This would appear to be an easy case where high switching costs (due to the difficulty in shifting data among storage providers) insulate the market leader against entry threats. Yet the real world does not conform to these expectations. 

While Amazon Web Services pioneered the $100 billion-plus market and is still the clear market leader, it now faces vigorous competition from Microsoft Azure, Google Cloud, and other providers of data-storage and cloud-related services. This may reflect the fact that the data-storage market is far from saturated, so new users are up for grabs and existing customers can mitigate lock-in by diversifying across multiple storage providers. Or it may reflect the fact that the market’s structure is fluid as a function of technological changes, enabling entry at formerly bundled portions of the cloud data-services package. While such diversification is not always technologically feasible, the cloud-storage market suggests that users’ resistance to platform capture can represent a competitive opportunity for entrants to challenge dominant vendors on price, quality, and innovation parameters.

The Surprising Instability of Platform Dominance

The instability of leadership positions in the cloud storage market is not exceptional. 

Consider a handful of once-powerful platforms that were rapidly dethroned once challenged by a more efficient or innovative rival: Yahoo and AltaVista in the search-engine market (displaced by Google); Netscape in the browser market (displaced by Microsoft’s Internet Explorer, then displaced by Google Chrome); Nokia and then BlackBerry in the mobile wireless-device market (displaced by Apple and Samsung); and Friendster in the social-networking market (displaced by Myspace, then displaced by Facebook). AOL was once thought to be indomitable; now it is mostly referenced as a vintage email address. The list could go on.

Overestimating platform dominance—or more precisely, assuming platform dominance without close factual inquiry—matters because it promotes overestimates of market power. That, in turn, cultivates both market and regulatory bubbles: investors inflate stock valuations while regulators inflate the risk of competitive harm. 

DoorDash and the Food-Delivery Services Market

Consider the DoorDash IPO that launched in early December 2020. The market’s current valuation of approximately $50 billion for a business that has been almost consistently unprofitable implicitly assumes that DoorDash will maintain and expand its position as the largest U.S. food-delivery platform, which will then yield power over price and exceptional returns for investors.

There are reasons to be skeptical. Even where DoorDash captures and holds a dominant market share in certain metropolitan areas, it still faces actual and potential competition from other food-delivery services, in-house delivery services (especially by well-resourced national chains), and grocery and other delivery services already offered by regional and national providers. There is already evidence of these expected responses to DoorDash’s perceived high delivery fees, a classic illustration of the disciplinary effect of competitive forces on the pricing choices of an apparently dominant market leader. These “supply-side” constraints imposed by competitors are compounded by “demand-side” constraints imposed by customers. Home diners incur no more than minimal costs when swiping across food-delivery icons on a smartphone interface, casting doubt on whether high market share is likely to translate into market power in this context.

Deliveroo and the Costs of Regulatory Autopilot

Just as the stock market can suffer from delusions of platform grandeur, so too some competition regulators appear to have fallen prey to the same malady. 

A vivid illustration is provided by the 2019 decision by the Competition and Markets Authority (CMA), the British competition regulator, to challenge Amazon’s purchase of a 16% stake in Deliveroo, one of three major competitors in the British food-delivery services market. This intervention provides perhaps the clearest illustration of policy action based on a reflexive assumption of market power, even in the face of little to no indication that the predicate conditions for that assumption could plausibly be satisfied.

Far from being a dominant platform, Deliveroo was (and is) a money-losing venture lagging behind money-losing Just Eat (now Just Eat Takeaway) and Uber Eats in the U.K. food-delivery services market. Even Amazon had previously closed its own food-delivery service in the U.K. due to lack of profitability. Despite Deliveroo’s distressed economic circumstances and the implausibility of any market power arising from Amazon’s investment, the CMA nonetheless elected to pursue the fullest level of investigation. While the transaction was ultimately approved in August 2020, this intervention imposed a 15-month delay and associated costs in connection with an investment that almost certainly bolstered competition in a concentrated market by funding a firm reportedly at risk of insolvency.  This is the equivalent of a competition regulator driving in reverse.

Concluding Thoughts

There seems to be an increasingly common assumption in commentary by the press, policymakers, and even some scholars that apparently dominant platforms usually face little competition and can set, at will, the terms of exchange. For investors, this is a reason to buy; for regulators, this is a reason to intervene. This assumption is sometimes borne out, and in that case antitrust intervention is appropriate whenever there is reasonable evidence that market power is being secured through something other than “competition on the merits.” But several conditions must be met to support the market-power assumption; without them, any such inquiry would be imprudent. Contrary to conventional wisdom, the economics and history of platform markets suggest that those conditions are infrequently satisfied.

Reflexively equating market share with market power, without closer scrutiny, is apt to lead both investors and regulators astray.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

The U.S. Department of Justice’s (DOJ) antitrust case against Google, which was filed in October 2020, will be a tough slog.[1] It is an alleged monopolization (Sherman Act, Sec. 2) case; and monopolization cases are always a tough slog.

In this brief essay I will lay out some of the issues in the case and raise an intriguing possibility.

What is the case about?

The case is about exclusivity and exclusion in the distribution of search-engine services; that Google paid substantial sums to Apple and to the manufacturers of Android-based mobile phones and tablets and also to wireless carriers and web-browser proprietors—in essence, to distributors—to install the Google search engine as the exclusive, pre-installed default search program. The suit alleges that Google thereby made it more difficult for other search-engine providers (e.g., Bing; DuckDuckGo) to obtain distribution for their search-engine services and thus to attract search-engine users and to sell the online advertising that is associated with search-engine use and that provides the revenue to support the search “platform” in this “two-sided market” context.[2]

Exclusion can be seen as a form of “raising rivals’ costs.”[3]  Equivalently, exclusion can be seen as a form of non-price predation. Under either interpretation, the exclusionary action impedes competition.

It’s important to note that these allegations are different from those that motivated an investigation by the Federal Trade Commission (which the FTC dropped in 2013) and the cases by the European Union against Google.[4]  Those cases focused on alleged self-preferencing; that Google was unduly favoring its own products and services (e.g., travel services) in its delivery of search results to users of its search engine. In those cases, the impairment of competition (arguably) happens with respect to those competing products and services, not with respect to search itself.

What is the relevant market?

For a monopolization allegation to have any meaning, there needs to be the exercise of market power (which would have adverse consequences for the buyers of the product). And in turn, that exercise of market power needs to occur in a relevant market: one in which market power can be exercised.

Here is one of the important places where the DOJ’s case is likely to turn into a slog: the delineation of a relevant market for alleged monopolization cases remains a largely unsolved problem for antitrust economics.[5]  This is in sharp contrast to the issue of delineating relevant markets for the antitrust analysis of proposed mergers.  For this latter category, the paradigm of the “hypothetical monopolist” and the possibility that this hypothetical monopolist could prospectively impose a “small but significant non-transitory increase in price” (SSNIP) has carried the day for the purposes of market delineation.

But no such paradigm exists for monopolization cases, in which the usual allegation is that the defendant already possesses market power and has used the exclusionary actions to buttress that market power. To see the difficulties, it is useful to recall the basic monopoly diagram from Microeconomics 101. A monopolist faces a negatively sloped demand curve for its product (at higher prices, less is bought; at lower prices, more is bought) and sets a profit-maximizing price at the level of output where its marginal revenue (MR) equals its marginal costs (MC). Its price is thereby higher than an otherwise similar competitive industry’s price for that product (to the detriment of buyers) and the monopolist earns higher profits than would the competitive industry.
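
A minimal worked version of that diagram, using an assumed linear demand curve (the symbols are illustrative, not drawn from the case):

```latex
% Illustrative linear-demand monopoly example (assumed notation).
% Demand: P = a - bQ; constant marginal cost c, with a > c > 0 and b > 0.
% Marginal revenue: MR = a - 2bQ. Profit maximization sets MR = MC:
\[
a - 2bQ_m = c
\;\Longrightarrow\;
Q_m = \frac{a - c}{2b},
\qquad
P_m = a - bQ_m = \frac{a + c}{2} .
\]
% A comparable competitive industry prices at marginal cost, P_c = c,
% so the monopoly markup is P_m - P_c = (a - c)/2 > 0.
```

Note that at $(Q_m, P_m)$ the monopolist has already exhausted its profitable price increases, a point that matters for what follows.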

But unless there are reliable benchmarks as to what the competitive price and profits would otherwise be, any information as to the defendant’s price and profits has little value with respect to whether the defendant already has market power. Also, a claim that a firm does not have market power because it faces rivals and thus isn’t able profitably to raise its price from its current level (because it would lose too many sales to those rivals) similarly has no value. Recall the monopolist from Micro 101. It doesn’t set a higher price than the one where MR=MC, because it would thereby lose too many sales to other sellers of other things.

Thus, any firm—regardless of whether it truly has market power (like the Micro 101 monopolist) or is just another competitor in a sea of competitors—should have already set its price at its profit-maximizing level and should find it unprofitable to raise its price from that level.[6]  And thus the claim, “Look at all of the firms that I compete with!  I don’t have market power!” similarly has no informational value.

Let us now bring this problem back to the Google monopolization allegation:  What is the relevant market?  In the first instance, it has to be “the provision of answers to user search queries.” After all, this is the “space” in which the exclusion occurred. But there are categories of search: e.g., search for products/services, versus more general information searches (“What is the current time in Delaware?” “Who was the 21st President of the United States?”). Do those separate categories themselves constitute relevant markets?

Further, what would the exercise of market power in a (delineated relevant) market look like?  Higher-than-competitive prices for advertising that targets search-results recipients is one obvious answer (but see below). In addition, because this is a two-sided market, the competitive “price” (or prices) might involve payments by the search engine to the search users (in return for their exposure to the lucrative attached advertising).[7]  And product quality might exhibit less variety than a competitive market would provide; and/or the monopolistic average level of quality would be lower than in a competitive market: e.g., more abuse of user data, and/or deterioration of the delivered information itself, via more self-preferencing by the search engine and more advertising-driven preferencing of results.[8]

In addition, a natural focus for a relevant market is the advertising that accompanies the search results. But now we are at the heart of the difficulty of delineating a relevant market in a monopolization context. If the relevant market is “advertising on search engine results pages,” it seems highly likely that Google has market power. If the relevant market instead is all online U.S. advertising (of which Google accounted for a 32% revenue share in 2019[9]), then the case is weaker; and if the relevant market is all advertising in the United States (which is about twice the size of online advertising[10]), the case is weaker still. Unless there is some competitive benchmark, there is no easy way to delineate the relevant market.[11]
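
Taking the two cited figures at face value, the arithmetic shows how quickly the share dilutes as the market is drawn more broadly (here $S_{\text{online}}$ is an assumed symbol for total U.S. online ad spend):

```latex
% Google's share under successively broader market definitions, using the
% figures cited above: a 32% share of online advertising, where total U.S.
% advertising is roughly twice the size of online advertising.
\[
\underbrace{0.32}_{\text{share of online advertising}}
\;\longrightarrow\;
\frac{0.32 \, S_{\text{online}}}{2 \, S_{\text{online}}} = 0.16
\quad \text{(share of all U.S. advertising)} .
\]
```

A roughly 16% share of the broadest market is a far weaker foundation for a monopolization claim than a commanding share of the narrowest one.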

What exactly has Google been paying for, and why?

As many critics of the DOJ’s case have pointed out, it is extremely easy for users to switch their default search engine. If internet search were a normal good or service, this ease of switching would leave little room for the exercise of market power. But in that case, why is Google willing to pay $8-$12 billion annually for the exclusive default setting on Apple devices and large sums to the manufacturers of Android-based devices (and to wireless carriers and browser proprietors)? Why doesn’t Google instead run ads in prominent places that remind users how superior Google’s search results are and how easy it is for users (if they haven’t already done so) to switch to the Google search engine and make Google the user’s default choice?

Suppose that user inertia is important. Further suppose that users generally have difficulty in making comparisons with respect to the quality of delivered search results. If this is true, then being the default search engine on Apple and Android-based devices and on other distribution vehicles would be valuable. In this context, the inertia of their customers is a valuable “asset” of the distributors that the distributors may not be able to take advantage of, but that Google can (by providing search services and selling advertising). The question of whether Google’s taking advantage of this user inertia means that Google exercises market power takes us back to the issue of delineating the relevant market.

There is a further wrinkle to all of this. It is a well-understood concept in antitrust economics that an incumbent monopolist will be willing to pay more for the exclusive use of an essential input than a challenger would pay for access to the input.[12] The basic idea is straightforward. By maintaining exclusive use of the input, the incumbent monopolist preserves its (large) monopoly profits. If the challenger enters, the incumbent will then earn only its share of the (much lower, more competitive) duopoly profits. Similarly, the challenger can expect only the lower duopoly profits. Accordingly, the incumbent should be willing to outbid (and thereby exclude) the challenger and preserve the incumbent’s exclusive use of the input, so as to protect those monopoly profits.
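
The bidding logic can be written out in two lines (stylized, assumed notation: $\pi_M$ for monopoly profit, $\pi_D$ for each firm’s duopoly profit, with $\pi_M > 2\pi_D$, the standard case):

```latex
% Stylized Gilbert-Newbery bidding comparison (assumed notation).
% Incumbent's maximum willingness to pay for exclusivity: the profits it
% preserves by keeping the challenger out. Challenger's maximum willingness
% to pay: the profits it would earn by entering.
\[
W_{\text{incumbent}} = \pi_M - \pi_D ,
\qquad
W_{\text{challenger}} = \pi_D .
\]
% Because pi_M > 2 pi_D, the incumbent always outbids the challenger:
\[
W_{\text{incumbent}} = \pi_M - \pi_D \;>\; \pi_D = W_{\text{challenger}} .
\]
```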

To bring this to the Google monopolization context, if Google does possess market power in some aspect of search—say, because online search-linked advertising is a relevant market—then Google will be willing to outbid Microsoft (which owns Bing) for the “asset” of default access to Apple’s (inertial) device owners. That Microsoft is a large and profitable company and could afford to match (or exceed) Google’s payments to Apple is irrelevant. If the duopoly profits for online search-linked advertising would be substantially lower than Google’s current profits, then Microsoft would not find it worthwhile to try to outbid Google for that default access asset.

Alternatively, this scenario could be wholly consistent with an absence of market power. If search users (who can easily switch) consider Bing to be a lower-quality search service, then large payments by Microsoft to outbid Google for those exclusive default rights would be largely wasted, since the “acquired” default search users would quickly switch to Google (unless Microsoft provided additional incentives for the users not to switch).

But this alternative scenario returns us to the original puzzle:  Why is Google making such large payments to the distributors for those exclusive default rights?

An intriguing possibility

Consider the following possibility. Suppose that Google was paying that $8-$12 billion annually to Apple in return for the understanding that Apple would not develop its own search engine for Apple’s device users.[13] This possibility was not raised in the DOJ’s complaint, nor is it raised in the subsequent suits by the state attorneys general.

But let’s explore the implications by going to an extreme. Suppose that Google and Apple had a formal agreement that—in return for the $8-$12 billion per year—Apple would not develop its own search engine. In this event, this agreement not to compete would likely be seen as a violation of Section 1 of the Sherman Act (which does not require a market delineation exercise) and Apple would join Google as a co-conspirator. The case would take on the flavor of the FTC’s prosecution of “pay-for-delay” agreements between the manufacturers of patented pharmaceuticals and the generic drug manufacturers that challenge those patents and then receive payments from the former in return for dropping the patent challenge and delaying the entry of the generic substitute.[14]

As of this writing, there is no evidence of such an agreement and it seems quite unlikely that there would have been a formal agreement. But the DOJ will be able to engage in discovery and take depositions. It will be interesting to find out what the relevant executives at Google—and at Apple—thought was being achieved by those payments.

What would be a suitable remedy/relief?

The DOJ’s complaint is vague with respect to the remedy that it seeks. This is unsurprising. The DOJ may well want to wait to see how the case develops and then amend its complaint.

However, even if Google’s actions have constituted monopolization, it is difficult to conceive of a suitable and effective remedy. One apparently straightforward remedy would be to require simply that Google not be able to purchase exclusivity with respect to the pre-set default settings. In essence, the device manufacturers and others would always be able to sell parallel default rights to other search engines: on the basis, say, that the default rights for some categories of customers—or even a percentage of general customers (randomly selected)—could be sold to other search-engine providers.

But now the Gilbert-Newbery insight comes back into play. Suppose that a device manufacturer knows (or believes) that Google will pay much more if—even in the absence of any exclusivity agreement—Google ends up being the pre-set search engine for all (or nearly all) of the manufacturer’s device sales, as compared with what the manufacturer would receive if those default rights were sold to multiple search-engine providers (including, but not solely, Google). Can that manufacturer (recall that the distributors are not defendants in the case) be prevented from making this sale to Google and thus (de facto) continuing Google’s exclusivity?[15]

Even a requirement that Google not be allowed to make any payment to the distributors for a default position may not improve the competitive environment. Google may be able to find other ways of making indirect payments to distributors in return for attaining default rights, e.g., by offering them lower rates on their online advertising.

Further, if the ultimate goal is an efficient outcome in search, it is unclear how far restrictions on Google’s bidding behavior should go. If Google were forbidden from purchasing any default installation rights for its search engine, would (inert) consumers be better off? Similarly, if a distributor were to decide independently that its customers were better served by installing the Google search engine as the default, would that not be allowed? But if it is allowed, how could one be sure that Google wasn’t indirectly paying for this “independent” decision (e.g., through favorable advertising rates)?

It’s important to remember that this (alleged) monopolization is different from the Standard Oil case of 1911 or even the (landline) AT&T case of 1984. In those cases, there were physical assets that could be separated and spun off to separate companies. For Google, physical assets aren’t important. Although it is conceivable that some of Google’s intellectual property—such as Gmail, YouTube, or Android—could be spun off to separate companies, doing so would do little to cure the (arguably) fundamental problem of the inert device users.

In addition, if there were an agreement between Google and Apple for the latter not to develop a search engine, then large fines for both parties would surely be warranted. But what next? Apple can’t be forced to develop a search engine.[16] This differentiates such an arrangement from the “pay-for-delay” arrangements for pharmaceuticals, where the generic manufacturers can readily produce a near-identical substitute for the patented drug and are otherwise eager to do so.

At the end of the day, forbidding Google from paying for exclusivity may well be worth trying as a remedy. But as the discussion above indicates, it is unlikely to be a panacea and is likely to require considerable monitoring for effective enforcement.

Conclusion

The DOJ’s case against Google will be a slog. There are unresolved issues—such as how to delineate a relevant market in a monopolization case—that will be central to the case. Even if the DOJ is successful in showing that Google violated Section 2 of the Sherman Act in monopolizing search and/or search-linked advertising, an effective remedy seems problematic. But there also remains the intriguing question of why Google was willing to pay such large sums for those exclusive default installation rights.

The developments in the case will surely be interesting.


[1] The DOJ’s suit was joined by 11 states.  More states subsequently filed two separate antitrust lawsuits against Google in December.

[2] There is also a related argument:  That Google thereby gained greater volume, which allowed it to learn more about its search users and their behavior, and which thereby allowed it to provide better answers to users (and thus a higher-quality offering to its users) and better-targeted (higher-value) advertising to its advertisers.  Conversely, Google’s search-engine rivals were deprived of that volume, with the mirror-image negative consequences for the rivals.  This is just another version of the standard “learning-by-doing” and the related “learning curve” (or “experience curve”) concepts that have been well understood in economics for decades.

[3] See, for example, Steven C. Salop and David T. Scheffman, “Raising Rivals’ Costs: Recent Advances in the Theory of Industrial Structure,” American Economic Review, Vol. 73, No. 2 (May 1983), pp.  267-271; and Thomas G. Krattenmaker and Steven C. Salop, “Anticompetitive Exclusion: Raising Rivals’ Costs To Achieve Power Over Price,” Yale Law Journal, Vol. 96, No. 2 (December 1986), pp. 209-293.

[4] For a discussion, see Richard J. Gilbert, “The U.S. Federal Trade Commission Investigation of Google Search,” in John E. Kwoka, Jr., and Lawrence J. White, eds. The Antitrust Revolution: Economics, Competition, and Policy, 7th edn.  Oxford University Press, 2019, pp. 489-513.

[5] For a more complete version of the argument that follows, see Lawrence J. White, “Market Power and Market Definition in Monopolization Cases: A Paradigm Is Missing,” in Wayne D. Collins, ed., Issues in Competition Law and Policy. American Bar Association, 2008, pp. 913-924.

[6] The forgetting of this important point is often termed “the cellophane fallacy”, since this is what the U.S. Supreme Court did in a 1956 antitrust case in which the DOJ alleged that du Pont had monopolized the cellophane market (and du Pont, in its defense, claimed that the relevant market was much wider: all flexible wrapping materials); see U.S. v. du Pont, 351 U.S. 377 (1956).  For an argument that profit data and other indicia argued for cellophane as the relevant market, see George W. Stocking and Willard F. Mueller, “The Cellophane Case and the New Competition,” American Economic Review, Vol. 45, No. 1 (March 1955), pp. 29-63.

[7] In the context of differentiated services, one would expect prices (positive or negative) to vary according to the quality of the service that is offered.  It is worth noting that Bing offers “rewards” to frequent searchers; see https://www.microsoft.com/en-us/bing/defaults-rewards.  It is unclear whether this pricing structure of payment to Bing’s customers represents what a more competitive framework in search might yield, or whether the payment just indicates that search users consider Bing to be a lower-quality service.

[8] As an additional consequence of the impairment of competition in this type of search market, there might be less technological improvement in the search process itself – to the detriment of users.

[9] As estimated by eMarketer: https://www.emarketer.com/newsroom/index.php/google-ad-revenues-to-drop-for-the-first-time/.

[10] See https://www.visualcapitalist.com/us-advertisers-spend-20-years/.

[11] And, again, if we return to the du Pont cellophane case:  Was the relevant market cellophane?  Or all flexible wrapping materials?

[12] This insight is formalized in Richard J. Gilbert and David M.G. Newbery, “Preemptive Patenting and the Persistence of Monopoly,” American Economic Review, Vol. 72, No. 3 (June 1982), pp. 514-526.

[13] To my knowledge, Randal C. Picker was the first to suggest this possibility; see https://www.competitionpolicyinternational.com/a-first-look-at-u-s-v-google/.  Whether Apple would be interested in trying to develop its own search engine – given the fiasco a decade ago when Apple tried to develop its own maps app to replace the Google maps app – is an open question.  In addition, the Gilbert-Newbery insight applies here as well:  Apple would be less inclined to invest the substantial resources that would be needed to develop a search engine when it is thereby in a duopoly market.  But Google might be willing to pay “insurance” to reinforce any doubts that Apple might have.

[14] The U.S. Supreme Court, in FTC v. Actavis, 570 U.S. 136 (2013), decided that such agreements could be anti-competitive and should be judged under the “rule of reason”.  For a discussion of the case and its implications, see, for example, Joseph Farrell and Mark Chicu, “Pharmaceutical Patents and Pay-for-Delay: Actavis (2013),” in John E. Kwoka, Jr., and Lawrence J. White, eds. The Antitrust Revolution: Economics, Competition, and Policy, 7th edn.  Oxford University Press, 2019, pp. 331-353.

[15] This is an example of the insight that vertical arrangements – in this case combined with the Gilbert-Newbery effect – can be a way for dominant firms to raise rivals’ costs.  See, for example, John Asker and Heski Bar-Isaac, “Raising Retailers’ Profits: On Vertical Practices and the Exclusion of Rivals,” American Economic Review, Vol. 104, No. 2 (February 2014), pp. 672-686.

[16] And, again, for the reasons discussed above, Apple might not be eager to make the effort.