
A bipartisan group of senators unveiled legislation today that would dramatically curtail the ability of online platforms to “self-preference” their own services—for example, when Apple pre-installs its own Weather or Podcasts apps on the iPhone, giving them an advantage that independent apps don’t have. The measure is the Senate companion to a House bill that includes similar provisions, with some changes.

1. The Senate bill closely resembles the House version, and the small improvements will probably not amount to much in practice.

The major substantive changes we have seen between the House bill and the Senate version are:

  1. Violations in Section 2(a) have been modified to refer only to conduct that “unfairly” preferences, limits, or discriminates between the platform’s products and others, and that “materially harm[s] competition on the covered platform,” rather than banning all preferencing, limits, or discrimination.
  2. The evidentiary burden required throughout the bill has been changed from “clear and convincing” to a “preponderance of the evidence” (in other words, greater than 50%).
  3. An affirmative defense has been added to permit a platform to escape liability if it can establish that the challenged conduct “was narrowly tailored, was nonpretextual, and was necessary to… maintain or enhance the core functionality of the covered platform.”
  4. The minimum market capitalization for “covered platforms” has been lowered from $600 billion to $550 billion.
  5. The Senate bill would assess fines of 15% of revenues from the period during which the conduct occurred, in contrast with the House bill, which set fines equal to the greater of either 15% of prior-year revenues or 30% of revenues from the period during which the conduct occurred (a stylized numerical comparison follows this list).
  6. Unlike the House bill, the Senate bill does not create a private right of action. Only the U.S. Justice Department (DOJ), Federal Trade Commission (FTC), and state attorneys general could bring enforcement actions on the basis of the bill.
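To make the difference between the two fine provisions concrete, here is a stylized comparison. The revenue figures are purely hypothetical assumptions chosen for illustration, not estimates for any actual covered platform:

```python
# Hypothetical figures for illustration only: suppose two years of violating
# conduct generated $200 billion in revenue, and prior-year revenue was $110 billion.
conduct_period_revenue = 200e9
prior_year_revenue = 110e9

# Senate bill: 15% of revenues from the period during which the conduct occurred.
senate_fine = 0.15 * conduct_period_revenue

# House bill: the greater of 15% of prior-year revenues or 30% of conduct-period revenues.
house_fine = max(0.15 * prior_year_revenue, 0.30 * conduct_period_revenue)

print(f"Senate bill fine: ${senate_fine / 1e9:.0f}B")  # Senate bill fine: $30B
print(f"House bill fine:  ${house_fine / 1e9:.0f}B")   # House bill fine:  $60B
```

Because the House formula includes 30% of the same conduct-period base, it can never yield a smaller fine than the Senate formula; on any set of figures, the Senate bill is the softer of the two on this dimension.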

Item one here certainly mitigates the most extreme risks of the House bill, which was drafted, bizarrely, to ban all “preferencing” or “discrimination” by platforms. If that were made law, it could literally break much of the Internet. The softened language reduces that risk somewhat.

However, Section 2(b), which lists types of conduct that would presumptively establish a violation under Section 2(a), is largely unchanged. As outlined here, this would amount to a broad ban on a wide swath of beneficial conduct. And “unfair” and “material” are notoriously slippery concepts. As a practical matter, their inclusion here may not significantly alter the course of enforcement under the Senate legislation from what would ensue under the House version.

Item three, which allows challenged conduct to be defended if it is “necessary to… maintain or enhance the core functionality of the covered platform,” may also protect some conduct. But because the bill requires companies to prove that challenged conduct is not only beneficial, but necessary to realize those benefits, it effectively implements a “guilty until proven innocent” standard that is likely to prove impossible to meet. The threat of permanent injunctions and enormous fines will mean that, in many cases, companies simply won’t be able to justify the expense of endeavoring to improve even the “core functionality” of their platforms in any way that could trigger the bill’s liability provisions. Thus, again, as a practical matter, the difference between the Senate and House bills may be only superficial.

The effect of this will likely be to diminish product innovation in these areas, because companies could not know in advance whether the benefits of such innovation would be worth the legal risk. We have previously highlighted existing conduct that may be lost if a bill like this passes, such as pre-installation of apps or embedding maps and other “rich” results in boxes on search engine results pages. But the biggest loss may be things we don’t even know about yet: innovations that simply never happen because the reward from experimentation is not worth the risk of being found to be “discriminating” against a competitor.

We dove into the House bill in Breaking Down the American Choice and Innovation Online Act and Breaking Down House Democrats’ Forthcoming Competition Bills.

2. The prohibition on “unfair self-preferencing” is vague and expansive and will make Google, Amazon, Facebook, and Apple’s products worse. Consumers don’t want digital platforms to be dumb pipes, or to act like a telephone network or sewer system. The Internet is filled with a superabundance of information and options, as well as a host of malicious actors. Good digital platforms act as middlemen, sorting information in useful ways and taking on some of the risk that exists when, inevitably, we end up doing business with untrustworthy actors.

When users have the choice, they tend to prefer platforms that do quite a bit of “discrimination”—that is, favoring some sellers over others, or offering their own related products or services through the platform. Most people prefer Amazon to eBay because eBay is chaotic and riskier to use.

Competitors that decry self-preferencing by the largest platforms—integrating two different products with each other, like putting a maps box showing only the search engine’s own maps on a search engine results page—argue that the conduct is enabled only by a platform’s market dominance and does not benefit consumers.

Yet these companies often do exactly the same thing in their own products, regardless of whether they have market power. Yelp includes a map on its search results page, not just restaurant listings. DuckDuckGo does the same. If these companies offer these features, it is presumably because they think their users want such results. It seems perfectly plausible that Google does the same because it thinks its users—literally the same users, in most cases—also want them.

Fundamentally, and as we discuss in Against the Vertical Discrimination Presumption, there is simply no sound basis to enact such a bill (even in a slightly improved version):

The notion that self-preferencing by platforms is harmful to innovation is entirely speculative. Moreover, it is flatly contrary to a range of studies showing that the opposite is likely true. In reality, platform competition is more complicated than simple theories of vertical discrimination would have it, and there is certainly no basis for a presumption of harm.

We discussed self-preferencing further in Platform Self-Preferencing Can Be Good for Consumers and Even Competitors, and showed that platform “discrimination” is often what consumers want from digital platforms in On the Origin of Platforms: An Evolutionary Perspective.

3. The bill massively empowers an FTC that seems intent on using antitrust to achieve political goals. The House bill would enable competitors to pepper covered platforms with frivolous lawsuits. The bill’s sponsors presumably hope that removing the private right of action will help to avoid that. But the bill still leaves intact a much more serious risk to the rule of law: the bill’s provisions are so broad that federal antitrust regulators will have enormous discretion over which cases they take.

This means that whoever is running the FTC and DOJ will be able to threaten covered platforms with a broad array of lawsuits, potentially to influence or control their conduct in other, unrelated areas. While some supporters of the bill regard this as a positive, most antitrust watchers would greet this power with much greater skepticism. Fundamentally, both bills grant antitrust enforcers wildly broad powers to pursue goals unrelated to competition. FTC Chair Lina Khan has, for example, argued that “the dispersion of political and economic control” ought to be antitrust’s goal. Commissioner Rebecca Kelly Slaughter has argued that antitrust should be “antiracist.”

Whatever the desirability of these goals, the broad discretionary authority the bills confer on the antitrust agencies means that individual commissioners may have significantly greater scope to pursue the goals that they, rather than Congress, believe to be right.

See discussions of this point at What Lina Khan’s Appointment Means for the House Antitrust Bills, Republicans Should Tread Carefully as They Consider ‘Solutions’ to Big Tech, The Illiberal Vision of Neo-Brandeisian Antitrust, and Alden Abbott’s discussion of FTC Antitrust Enforcement and the Rule of Law.

4. The bill adopts European principles of competition regulation. These are, to put it mildly, not obviously conducive to the sort of innovation and business growth that Americans may expect. Europe has no tech giants of its own, a condition that shows little sign of changing. Apple alone is worth as much as the top 30 companies in Germany’s DAX index, and the top 40 in France’s CAC index. Landmark European competition cases have seen Google fined for embedding Shopping results in the Search page—not because it hurt consumers, but because it hurt competing price-comparison websites.

A fundamental difference between American and European competition regimes is that the U.S. system is far more friendly to businesses that obtain dominant market positions because they have offered better products more cheaply. Under the American system, successful businesses are normally given broad scope to charge high prices and refuse to deal with competitors. This helps to increase the rewards and incentive to innovate and invest in order to obtain that strong market position. The European model is far more burdensome.

The Senate bill adopts a European approach to refusals to deal—the same approach that led the European Commission to fine Microsoft for including Windows Media Player with Windows—and applies it across Big Tech broadly. Adopting this kind of approach may end up undermining elements of U.S. law that support innovation and growth.

For more, see How US and EU Competition Law Differ.

5. The proposals are based on a misunderstanding of the state of competition in the American economy, and of antitrust enforcement. It is widely believed that the U.S. economy has seen diminished competition. This is mistaken, particularly with respect to digital markets. Apparent rises in market concentration and profit margins disappear when we look more closely: local-level concentration is falling even as national-level concentration is rising, driven by more efficient chains setting up more stores in areas that were previously served by only one or two firms.

And markup rises largely disappear after accounting for fixed costs like R&D and marketing.

Where profits are rising, in areas like manufacturing, it appears to be mainly driven by increased productivity, not higher prices. Real prices have not risen in line with markups. Where profitability has increased, it has been mainly driven by falling costs.

Nor has the number of antitrust cases brought by federal antitrust agencies fallen. The likelihood of a merger being challenged more than doubled between 1979 and 2017. And there is little reason to believe that the deterrent effect of antitrust has weakened. Many critics of Big Tech have decided that there must be a problem and have worked backwards from that conclusion, selecting whatever evidence supports it and ignoring the evidence that does not. The consequence of such motivated reasoning is bills like this.

See Geoff’s April 2020 written testimony to the House Judiciary Investigation Into Competition in Digital Markets here.

A lawsuit filed by the State of Texas and nine other states in December 2020 alleges, among other things, that Google has engaged in anticompetitive conduct related to its online display-advertising business.

Broadly, the Texas complaint (previously discussed in this TOTM symposium) alleges that Google possesses market power in ad-buying tools and in search, illustrated in the figure below.

The complaint also alleges anticompetitive conduct by Google with respect to YouTube in a separate “inline video-advertising market.” According to the complaint, this market power is leveraged to force transactions through Google’s exchange, AdX, and its network, Google Display Network. The leverage is further exercised by forcing publishers to license Google’s ad server, Google Ad Manager.

Although the Texas complaint raises many specific allegations, the key ones constitute four broad claims: 

  1. Google forces publishers to license Google’s ad server and trade in Google’s ad exchange;
  2. Google uses its control over publishers’ inventory to block exchange competition;
  3. Google has disadvantaged technology known as “header bidding” in order to prevent publishers from accessing its competitors; and
  4. Google prevents rival ad-placement services from competing by not allowing them to buy YouTube ad space.

Alleged harms

The Texas complaint alleges Google’s conduct has caused harm to competing networks, exchanges, and ad servers. The complaint also claims that the plaintiff states’ economies have been harmed “by depriving the Plaintiff States and the persons within each Plaintiff State of the benefits of competition.”

In a nod to the widely accepted Consumer Welfare Standard, the Texas complaint alleges harm to three categories of consumers:

  1. Advertisers who pay for their ads to be displayed, but should be paying less;
  2. Publishers who are paid to provide space on their sites to display ads, but should be paid more; and
  3. Users who visit the sites, view the ads, and purchase or use the advertisers’ and publishers’ products and services.

The complaint claims users are harmed by above-competitive prices paid by advertisers, in that these higher costs are passed on in the form of higher prices and lower quality for the products and services they purchase from those advertisers. The complaint simultaneously claims that users are harmed by the below-market prices received by publishers in the form of “less content (lower output of content), lower-quality content, less innovation in content delivery, more paywalls, and higher subscription fees.”

Without saying so explicitly, the complaint insinuates that if intermediaries (e.g., Google and competing services) charged lower fees for their services, advertisers would pay less, publishers would be paid more, and consumers would be better off in the form of lower prices and better products from advertisers, as well as improved content and lower fees on publishers’ sites.

Effective competition is not an antitrust offense

A flawed premise underlies much of the Texas complaint. It asserts that conduct by a dominant incumbent firm that makes competition more difficult for competitors is inherently anticompetitive, even if that conduct confers benefits on users.

This amounts to a claim that Google is acting anti-competitively by innovating and developing products and services to benefit one or more display-advertising constituents (e.g., advertisers, publishers, or consumers) or by doing things that benefit the advertising ecosystem more generally. These include creating new and innovative products, lowering prices, reducing costs through vertical integration, or enhancing interoperability.

The argument, which is made explicitly elsewhere, is that Google must show that it has engineered and implemented its products to minimize obstacles its rivals face, and that any efficiencies created by its products must be shown to outweigh the costs imposed by those improvements on the company’s competitors.

Similarly, claims that Google has acted in an anticompetitive fashion rest on the unsupportable notion that the company acts unfairly when it designs products to benefit itself without considering how those designs would affect competitors. Google could, it is argued, choose alternate arrangements and practices that would possibly confer greater revenue on publishers or lower prices on advertisers without imposing burdens on competitors.

For example, a report published by the Omidyar Network sketching a “roadmap” for a case against Google claims that, if Google’s practices could possibly be reimagined to achieve the same benefits in ways that foster competition from rivals, then the practices should be condemned as anticompetitive:

It is clear even to us as lay people that there are less anticompetitive ways of delivering effective digital advertising—and thereby preserving the substantial benefits from this technology—than those employed by Google.

– Fiona M. Scott Morton & David C. Dinielli, “Roadmap for a Digital Advertising Monopolization Case Against Google”

But that’s not how the law—or the economics—works. This approach converts beneficial aspects of Google’s ad-tech business into anticompetitive defects, essentially arguing that successful competition and innovation create barriers to entry that merit correction through antitrust enforcement.

This approach turns U.S. antitrust law (and basic economics) on its head. As some of the most well-known words of U.S. antitrust jurisprudence have it:

A single producer may be the survivor out of a group of active competitors, merely by virtue of his superior skill, foresight and industry. In such cases a strong argument can be made that, although the result may expose the public to the evils of monopoly, the Act does not mean to condemn the resultant of those very forces which it is its prime object to foster: finis opus coronat. The successful competitor, having been urged to compete, must not be turned upon when he wins.

– United States v. Aluminum Co. of America, 148 F.2d 416 (2d Cir. 1945)

U.S. antitrust law is intended to foster innovation that creates benefits for consumers, including innovation by incumbents. The law does not proscribe efficiency-enhancing unilateral conduct on the grounds that it might also inconvenience competitors, or that there is some other arrangement that could be “even more” competitive. Under U.S. antitrust law, firms are “under no duty to help [competitors] survive or expand.”  

To be sure, the allegations against Google are couched in terms of anticompetitive effect, rather than being described merely as commercial disagreements over the distribution of profits. But these effects are simply inferred, based on assumptions that Google’s vertically integrated business model entails an inherent ability and incentive to harm rivals.

The Texas complaint claims Google can surreptitiously derive benefits from display advertisers by leveraging its search-advertising capabilities, or by “withholding YouTube inventory,” rather than altruistically opening Google Search and YouTube up to rival ad networks. The complaint alleges Google uses its access to advertiser, publisher, and user data to improve its products without sharing this data with competitors.

All these charges may be true, but they do not describe inherently anticompetitive conduct. Under U.S. law, companies are not obliged to deal with rivals and certainly are not obliged to do so on those rivals’ preferred terms.

As long ago as 1919, the U.S. Supreme Court held that:

In the absence of any purpose to create or maintain a monopoly, the [Sherman Act] does not restrict the long recognized right of [a] trader or manufacturer engaged in an entirely private business, freely to exercise his own independent discretion as to parties with whom he will deal.

– United States v. Colgate & Co.

U.S. antitrust law does not condemn conduct on the basis that an enforcer (or a court) is able to identify or hypothesize alternative conduct that might plausibly provide similar benefits at lower cost. In alleging that there are ostensibly “better” ways that Google could have pursued its product design, pricing, and terms of dealing, both the Texas complaint and Omidyar “roadmap” assert that, had the firm only selected a different path, an alternative could have produced even more benefits or an even more competitive structure.

The purported cure of tinkering with benefit-producing unilateral conduct by applying an “even more competition” benchmark is worse than the supposed disease. The adjudicator is likely to misapply such a benchmark, deterring the very conduct the law seeks to promote.

For example, the Texas complaint alleges: “Google’s ad server passed inside information to Google’s exchange and permitted Google’s exchange to purchase valuable impressions at artificially depressed prices.” The Omidyar Network’s “roadmap” claims that “after purchasing DoubleClick, which became its publisher ad server, Google apparently lowered its prices to publishers by a factor of ten, at least according to one publisher’s account related to the CMA. Low prices for this service can force rivals to depart, thereby directly reducing competition.”

In contrast, as current U.S. Supreme Court Associate Justice Stephen Breyer once explained, in the context of above-cost low pricing, “the consequence of a mistake here is not simply to force a firm to forego legitimate business activity it wishes to pursue; rather, it is to penalize a procompetitive price cut, perhaps the most desirable activity (from an antitrust perspective) that can take place in a concentrated industry where prices typically exceed costs.”  That commentators or enforcers may be able to imagine alternative or theoretically more desirable conduct is beside the point.

It has been reported that the U.S. Justice Department (DOJ) may join the Texas suit or bring its own similar action against Google in the coming months. If it does, it should learn from the many misconceptions and errors in the Texas complaint that leave it on dubious legal and economic grounds.

Policymakers’ recent focus on how Big Tech should be treated under antitrust law has been accompanied by claims that companies like Facebook and Google hold dominant positions in various “markets.” Notwithstanding the tendency to conflate whether a firm is large with whether it holds a dominant position, we must first answer the question most of these claims tend to ignore: “dominant over what?”

For example, as set out in this earlier Truth on the Market post, a recent lawsuit filed by the State of Texas and various other states outlined five areas related to online display advertising over which Google is alleged by the plaintiffs to hold a dominant position. But crucially, none appear to have been arrived at via the application of economic reasoning.

As that post explained, other forms of advertising (such as online search and offline advertising) might form part of a “relevant market” (i.e., the market in which a product actually competes) over which Google’s alleged dominance should be assessed. The post makes a strong case for the actual relevant market being much broader than that claimed in the lawsuit. Of course, some might disagree with that assessment, so it is useful to step back and examine the principles that underlie and motivate how a relevant market is defined.

In any antitrust case, defining the relevant market should be regarded as a means to an end, not an end in itself. While such definitions provide the basis to calculate market shares, the process of thinking about relevant markets also should provide a framework to consider and highlight important aspects of the case. The process enables one to think about how a particular firm and market operates, the constraints that it and rival firms face, and whether entry by other firms is feasible or likely.

Many naïve attempts to define the relevant market will limit their analysis to a particular industry. But an industry could include too few competitors, or it might even include too many—for example, if some firms in the industry generate products that do not constitute strong competitive constraints. If one were to define all cars as the “relevant” market, that would imply that a Dacia Sandero (a supermini model produced by Renault’s Romanian subsidiary Dacia) constrains the price of Maserati’s Quattroporte luxury sports sedan as much as the Ferrari Portofino grand touring sports car does. This is very unlikely to hold in reality.[1]

The relevant market should be the smallest possible group of products and services that contains all such products and services that could provide a reasonable competitive constraint. But that, of course, merely raises the question of what is meant by a “reasonable competitive constraint.” Thankfully, by applying economic reasoning, we can answer that question.

More specifically, we have the “hypothetical monopolist” (HM) test. This test operates by considering whether a hypothetical monopolist (i.e., a single firm that controlled all the products considered part of the relevant market) could profitably undertake a “small but significant, non-transitory increase in price” (typically shortened to the SSNIP test).[2]

If the hypothetical monopolist could profitably implement this increase in price, then the group of products under consideration is said to constitute a relevant market. On the other hand, if the hypothetical monopolist could not profitably increase the price of that group of products (due to demand-side or supply-side constraints on their ability to increase prices), then that group of products is not a relevant market, and more products need to be included in the candidate relevant market. The process of widening the group of products continues until the hypothetical monopolist could profitably increase prices over that group.
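As a rough illustration of the mechanics just described, the sketch below encodes the widening exercise in Python. The 40% margin, the 5% price rise, and the break-even “critical loss” formula used to judge profitability are illustrative assumptions, not parameters from any actual case; a real exercise would rest on empirical estimates of margins and of where customers actually divert their purchases.

```python
def critical_loss(price_rise: float, margin: float) -> float:
    """Break-even share of sales a hypothetical monopolist can lose before a
    price rise of `price_rise` (e.g., 0.05 for 5%) stops being profitable,
    using the standard linear critical-loss approximation t / (t + m)."""
    return price_rise / (price_rise + margin)


def find_relevant_market(candidate, predicted_loss, price_rise=0.05, margin=0.40):
    """Widen a candidate product market until a SSNIP would be profitable.

    `candidate` is a list of products; `predicted_loss(candidate)` must return
    (lost_share, closest_substitute): the estimated share of sales diverted
    outside the candidate market after the price rise, and the outside product
    capturing most of that diversion.
    """
    while True:
        lost_share, substitute = predicted_loss(candidate)
        if lost_share <= critical_loss(price_rise, margin):
            return candidate                     # SSNIP profitable: relevant market found
        candidate = candidate + [substitute]     # too narrow: add the closest substitute
```

Fed a starting candidate of a single firm’s display-advertising services and estimates of where its customers would switch, the loop reproduces, in stylized form, the widening exercise walked through below.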

So how does this test work in practice? Let’s use an example to make things concrete. In particular, let’s focus on Google’s display advertising, as that has been a significant focus of attention. Starting from the narrowest possible market, Google’s own display advertising, the HM test would ask whether a hypothetical monopolist controlling these services (and just these services) could profitably increase prices of these services permanently by 5% to 10%.

At this initial stage, it is important to avoid the “cellophane fallacy,” in which a monopolist firm could not profitably increase its prices by 5% to 10% because it is already charging the monopoly price. This fallacy usually arises in situations where the product under consideration has very few (if any) substitutes. But as has been shown here, there are already plenty of alternatives to Google’s display-advertising services, so we can be reasonably confident that the fallacy does not apply here.

We would then consider what is likely to happen if Google were to increase the prices of its online display advertising services by 5% to 10%. Given the plethora of other options (such as Microsoft, Facebook, and Simpli.fi) customers have for obtaining online display ads, a sufficiently high number of Google’s customers are likely to switch away, such that the price increase would not be profitable. It is therefore necessary to expand the candidate relevant market to include those closest alternatives to which Google’s customers would switch.

We repeat the exercise, but now with the hypothetical monopolist also increasing the prices of those newly included products. It might be the case that alternatives such as online search ads (as opposed to display ads), print advertising, TV advertising and/or other forms of advertising would sufficiently constrain the hypothetical monopolist in this case that those other alternatives form part of the relevant market.

In determining whether an alternative sufficiently constrains our hypothetical monopolist, it is important to consider actual consumer/firm behavior, rather than relying on products having “similar” characteristics. Although constraints can come from either the demand side (i.e., customers switching to another provider) or the supply side (entry/switching by other providers to start producing the products offered by the HM), for market-definition purposes, it is almost always demand-side switching that matters most. Switching by consumers tends to happen much more quickly than does switching by providers, such that it can be a more effective constraint. (Note that supply-side switching is still important when assessing overall competitive constraints, but because such switching can take one or more years, it is usually considered in the overall competitive assessment, rather than at the market-definition stage.)

Identifying which alternatives consumers do and would switch to therefore highlights the rival products and services that constrain the candidate hypothetical monopolist. It is only once the hypothetical monopolist test has been completed and the relevant market has been found that market shares can be calculated.[3]

It is at that point that an assessment of a firm’s alleged market power (or of a proposed merger) can proceed. This is why claims that “Facebook is a monopolist” or that “Google has market power” often fail at the first hurdle (indeed, in the case of Facebook, they recently have).

Indeed, I would go so far as to argue that any antitrust claim that does not first undertake a market-definition exercise with sound economic reasoning akin to that described above should be discounted and ignored.


[1] Some might argue that there is a “chain of substitution” from the Maserati to, for example, an Audi A4, to a Ford Focus, to a Mini, to a Dacia Sandero, such that the latter does, indeed, provide some constraint on the former. However, the size of that constraint is likely to be de minimis, given how many “links” there are in that chain.

[2] The “small but significant” price increase is usually taken to be between 5% and 10%.

[3] Even if a product or group of products ends up excluded from the definition of the relevant market, these products can still form a competitive constraint in the overall assessment and are still considered at that point.

Digital advertising is the economic backbone of the Internet. It allows websites and apps to monetize their userbase without having to charge them fees, while the emergence of targeted ads allows this to be accomplished affordably and with less time wasted.

This advertising is facilitated by intermediaries using the “adtech stack,” through which advertisers and publishers are matched via auctions and ads ultimately are served to relevant users. This intermediation process has advanced enormously over the past three decades. Some now allege, however, that this market is being monopolized by its largest participant: Google.
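Before turning to those allegations, readers unfamiliar with the mechanics may find a deliberately simplified sketch of the matching step useful. The bidder names, prices, and single first-price rule below are illustrative assumptions only; real exchanges layer on price floors, fees, and far more elaborate logic spread across multiple intermediaries.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Bid:
    buyer: str        # ad-buying tool bidding on an advertiser's behalf
    price_cpm: float  # offered price, in dollars per thousand impressions

def run_exchange_auction(bids: List[Bid]) -> Optional[Bid]:
    """Toy first-price auction: the impression goes to the highest bidder."""
    return max(bids, key=lambda b: b.price_cpm, default=None)

# A publisher's ad server forwards an impression to the exchange; buying tools respond.
bids = [Bid("buyer-A", 2.10), Bid("buyer-B", 3.40), Bid("buyer-C", 1.75)]
winner = run_exchange_auction(bids)
if winner:
    print(f"{winner.buyer} wins the impression at ${winner.price_cpm:.2f} CPM")
```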

A lawsuit filed by the State of Texas and nine other states in December 2020 alleges, among other things, that Google has engaged in anticompetitive conduct related to its online display advertising business. Those 10 original state plaintiffs were joined by another four states and the Commonwealth of Puerto Rico in March 2021, while South Carolina and Louisiana have also moved to be added as additional plaintiffs. Google also faces a pending antitrust lawsuit brought by the U.S. Justice Department (DOJ) and 14 states (originally 11) related to the company’s distribution agreements, as well as a separate action by the State of Utah, 35 other states, and the District of Columbia related to its search design.

In recent weeks, it has been reported that the DOJ may join the Texas suit or bring its own similar action against Google in the coming months. If it does, it should learn from the many misconceptions and errors in the Texas complaint that leave it on dubious legal and economic grounds.

Relevant market

The Texas complaint identifies at least five relevant markets within the adtech stack that it alleges Google either is currently monopolizing or is attempting to monopolize:

  1. Publisher ad servers;
  2. Display ad exchanges;
  3. Display ad networks;
  4. Ad-buying tools for large advertisers; and
  5. Ad-buying tools for small advertisers.

None of these constitute an economically relevant product market for antitrust purposes, since each “market” is defined according to how superficially similar the products are in function, not how substitutable they are. Nevertheless, the Texas complaint vaguely echoes how markets were conceived in the “Roadmap” for a case against Google’s advertising business, published last year by the Omidyar Network, which may ultimately influence any future DOJ complaint, as well.

The Omidyar Roadmap narrows the market from media advertising to digital advertising, then to the open supply of display ads, which comprises only 9% of the total advertising spending and less than 20% of digital advertising, as shown in the figure below. It then further narrows the defined market to the intermediation of the open supply of display ads. Once the market has been sufficiently narrowed, the Roadmap authors conclude that Google’s market share is “perhaps sufficient to confer market power.”

While whittling down the defined market may achieve the purposes of sketching a roadmap to prosecute Google, it also generates a mishmash of more than a dozen relevant markets for digital display and video advertising. In many of these, Google doesn’t have anything approaching market power, while, in some, Facebook is the most dominant player.

The Texas complaint adopts a non-economic approach to market definition. It ignores potential substitutability between different kinds of advertising, both online and offline, which can serve as a competitive constraint on the display advertising market. The complaint considers neither alternative forms of display advertising, such as social media ads, nor alternative forms of advertising, such as search ads or non-digital ads—all of which can and do act as substitutes. It is possible, at the very least, that advertisers who choose to place ads on third-party websites may switch to other forms of advertising if the price of third-party website advertising were above competitive levels. To ignore this possibility, as the Texas complaint does, is to ignore the entire purpose of defining the relevant antitrust market altogether.

Offline advertising vs. online advertising

The fact that offline and online advertising employ distinct processes does not consign them to economically distinct markets. Indeed, online advertising has manifestly drawn advertisers from offline markets, just as previous technological innovations drew advertisers from other pre-existing channels.

Moreover, there is evidence that, in some cases, offline and online advertising are substitute products. For example, economists Avi Goldfarb and Catherine Tucker demonstrate that display advertising pricing is sensitive to the availability of offline alternatives. They conclude:

We believe our studies refute the hypothesis that online and offline advertising markets operate independently and suggest a default position of substitution. Online and offline advertising markets appear to be closely related. That said, it is important not to draw any firm conclusions based on historical behavior.

Display ads vs. search ads

There is perhaps even more reason to doubt that online display advertising constitutes a distinct, economically relevant market from online search advertising.

Although casual and ill-informed claims are often made to the contrary, various forms of targeted online advertising are significant competitors of each other. Bo Xing and Zhanxi Lin report that firms spread their marketing budgets across these different sources of online marketing, and “search engine optimizers”—firms that help websites to maximize the likelihood of a valuable “top-of-list” organic search placement—attract significant revenue. That is, all of these different channels vie against each other for consumer attention and offer advertisers the ability to target their advertising based on data gleaned from consumers’ interactions with their platforms.

Facebook built a business on par with Google’s thanks in large part to advertising, by taking advantage of users’ more extended engagement with the platform to assess relevance and by enabling richer, more engaged advertising than previously appeared on Google Search. It’s an entirely different model from search, but one that has turned Facebook into a competitive ad platform.

And the market continues to shift. Somewhere between 37% and 56% of product searches start on Amazon, according to one survey, and advertisers have noticed. This is not surprising, given Amazon’s strong ability to match consumers with advertisements, and to do so when and where consumers are more likely to make a purchase.

‘Open’ display advertising vs. ‘owned-and-operated’ display advertising

The United Kingdom’s Competition and Markets Authority (like the Omidyar Roadmap report) has identified two distinct channels of display advertising, which they term “owned and operated” and “open.” The CMA concludes:

Over half of display expenditure is generated by Facebook, which owns both the Facebook platform and Instagram. YouTube has the second highest share of display advertising and is owned by Google. The open display market, in which advertisers buy inventory from many publishers of smaller scale (for example, newspapers and app providers) comprises around 32% of display expenditure.

The Texas complaint does not directly address the distinction between open and owned and operated, but it does allege anticompetitive conduct by Google with respect to YouTube in a separate “inline video advertising market.” 

The CMA finds that the owned-and-operated channel mostly comprises large social media platforms, which sell their own advertising inventory directly to advertisers or media agencies through self-service interfaces, such as Facebook Ads Manager or Snapchat Ads Manager. In contrast, in the open display channel, publishers such as online newspapers and blogs sell their inventory to advertisers through a “complex chain of intermediaries.” These intermediaries run auctions that match advertisers’ ads to publishers’ inventory of ad space. In both channels, nearly all transactions are run through programmatic technology.

The CMA concludes that advertisers “largely see” the open and the owned-and-operated channels as substitutes. According to the CMA, an advertiser’s choice of one channel over the other is driven by each channel’s ability to meet the key performance metrics the advertising campaign is intended to achieve.

The Omidyar Roadmap argues, instead, that the CMA too narrowly focuses on the perspective of advertisers. The Roadmap authors claim that “most publishers” do not control supply that is “owned and operated.” As a result, they conclude that publishers “such as gardenandgun.com or hotels.com” do not have any owned-and-operated supply and can generate revenues from their supply “only through the Google-dominated adtech stack.” 

But this is simply not true. For example, in addition to inventory in its print media, Garden & Gun’s “Digital Media Kit” indicates that the publisher has several sources of owned-and-operated banner and video supply, including the desktop, mobile, and tablet ads on its website; a “homepage takeover” of its website; branded/sponsored content; its email newsletters; and its social media accounts. Hotels.com, an operating company of Expedia Group, has its own owned-and-operated search inventory, which it sells through its “Travel Ads Sponsored Listing,” as well as owned-and-operated supply of standard and custom display ads.

Given that both perform the same function and employ similar mechanisms for matching inventory with advertisers, it is unsurprising that both advertisers and publishers appear to consider the owned-and-operated channel and the open channel to be substitutes.

The dystopian novel is a powerful literary genre. It has given us such masterpieces as Nineteen Eighty-Four, Brave New World, and Fahrenheit 451. Though these novels often shed light on the risks of contemporary society and the zeitgeist of the era in which they were written, they also almost always systematically overshoot the mark (intentionally or not) and severely underestimate the radical improvements that stem from the technologies (or other causes) that they fear.

But dystopias are not just a literary phenomenon; they are also a powerful force in policy circles. This is epitomized by influential publications such as the Club of Rome’s 1972 report The Limits to Growth, whose dire predictions of Malthusian catastrophe have largely failed to materialize.

In an article recently published in the George Mason Law Review, we argue that contemporary antitrust scholarship and commentary is similarly afflicted by dystopian thinking. In that respect, today’s antitrust pessimists have set their sights predominantly on the digital economy—”Big Tech” and “Big Data”—in the process of alleging a vast array of potential harms.

Scholars have notably argued that the data created and employed by the digital economy produces network effects that inevitably lead to tipping and to more concentrated markets (e.g., here and here). In other words, firms will allegedly accumulate insurmountable data advantages and thus thwart competitors for extended periods of time.

Some have gone so far as to argue that this threatens the very fabric of western democracy. For instance, parallels between the novel Nineteen Eighty-Four and the power of large digital platforms were plain to see when Epic Games launched an antitrust suit against Apple and its App Store in August 2020. The gaming company released a short video clip parodying Apple’s famous “1984” ad (which, upon its release, was itself widely seen as a critique of the tech incumbents of the time). Similarly, a piece in the New Statesman—titled “Slouching Towards Dystopia: The Rise of Surveillance Capitalism and the Death of Privacy”—concluded that:

Our lives and behaviour have been turned into profit for the Big Tech giants—and we meekly click ‘Accept.’ How did we sleepwalk into a world without privacy?

In our article, we argue that these fears are symptomatic of two different but complementary phenomena, which we refer to as “Antitrust Dystopia” and “Antitrust Nostalgia.”

Antitrust Dystopia is the pessimistic tendency among competition scholars and enforcers to assert that novel business conduct will cause technological advances to have unprecedented, anticompetitive consequences. This is almost always grounded in the belief that “this time is different”—that, despite the benign or positive consequences of previous, similar technological advances, this time those advances will have dire, adverse consequences absent enforcement to stave off abuse.

Antitrust Nostalgia is the biased assumption—often built into antitrust doctrine itself—that change is bad. Antitrust Nostalgia holds that, because a business practice has seemingly benefited competition before, changing it will harm competition going forward. Thus, antitrust enforcement is often skeptical of, and triggered by, various deviations from status quo conduct and relationships (i.e., “nonstandard” business arrangements) when change is, to a first approximation, the hallmark of competition itself.

Our article argues that these two worldviews are premised on particularly questionable assumptions about the way competition unfolds, in this case, in data-intensive markets.

The Case of Big Data Competition

The notion that digital markets are inherently more problematic than their brick-and-mortar counterparts—if there even is a meaningful distinction—is advanced routinely by policymakers, journalists, and other observers. The fear is that, left to their own devices, today’s dominant digital platforms will become all-powerful, protected by an impregnable “data barrier to entry.” Against this alarmist backdrop, nostalgic antitrust scholars have argued for aggressive antitrust intervention against the nonstandard business models and contractual arrangements that characterize these markets.

But as our paper demonstrates, a proper assessment of the attributes of data-intensive digital markets does not support either the dire claims or the proposed interventions.

1. Data is information

One of the most salient features of the data created and consumed by online firms is that, jargon aside, it is just information. As with other types of information, it thus tends to have at least some traits usually associated with public goods (i.e., goods that are non-rivalrous in consumption and not readily excludable). As the National Bureau of Economic Research’s Catherine Tucker argues, data “has near-zero marginal cost of production and distribution even over long distances,” making it very difficult to exclude others from accessing it. Meanwhile, multiple economic agents can simultaneously use the same data, making it non-rivalrous in consumption.

As we explain in our paper, these features make the nature of modern data almost irreconcilable with the alleged hoarding and dominance that critics routinely associate with the tech industry.

2. Data is not scarce; expertise is

Another important feature of data is that it is ubiquitous. The predominant challenge for firms is not so much in obtaining data but, rather, in drawing useful insights from it. This has two important implications for antitrust policy.

First, although data does not have the self-reinforcing characteristics of network effects, there is a sense that acquiring a certain amount of data and expertise is necessary to compete in data-heavy industries. It is (or should be) equally apparent, however, that this “learning by doing” advantage rapidly reaches a point of diminishing returns.

This is supported by significant empirical evidence. As our survey of the empirical literature shows, data generally entails diminishing marginal returns.
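One simple statistical intuition for why this is so (our own illustration, not a result from the surveyed studies): if predictive precision improves like the standard error of a sample mean, it falls only with the square root of the sample size, so each additional observation is worth less than the one before.

```python
import math

# Illustration only: treat predictive "error" as proportional to 1 / sqrt(n),
# the rate at which the standard error of a sample mean shrinks.
for n in (1_000, 2_000, 1_000_000, 2_000_000):
    print(f"n = {n:>9,}  error ~ {1 / math.sqrt(n):.5f}")

# Doubling a small dataset (1,000 -> 2,000) and doubling a huge one
# (1,000,000 -> 2,000,000) each cut the error by the same ~29%, but the
# absolute gain in the second case is roughly thirty times smaller.
```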

Second, it is firms’ capabilities, rather than the data they own, that lead to success in the marketplace. Critics who argue that firms such as Amazon, Google, and Facebook are successful because of their superior access to data might, in fact, have the causality in reverse. Arguably, it is because these firms have come up with successful industry-defining paradigms that they have amassed so much data, and not the other way around.

This dynamic can be seen at play in the early days of the search-engine market. In 2013, The Atlantic ran a piece titled “What the Web Looked Like Before Google.” By comparing the websites of Google and its rivals in 1998 (when Google Search was launched), the article shows how the current champion of search marked a radical departure from the status quo.

Even if it stumbled upon it by chance, Google immediately identified a winning formula for the search-engine market. It ditched the complicated classification schemes favored by its rivals and opted, instead, for a clean page with a single search box. This ensured that users could access the information they desired in the shortest possible amount of time—thanks, in part, to Google’s PageRank algorithm.

It is hardly surprising that Google’s rivals struggled to keep up with this shift in the search-engine industry. The theory of dynamic capabilities tells us that firms that have achieved success by indexing the web will struggle when the market rapidly moves toward a new paradigm (in this case, Google’s single search box and ten blue links). During the time it took these rivals to identify their weaknesses and repurpose their assets, Google kept on making successful decisions: notably, the introduction of Gmail, its acquisitions of YouTube and Android, and the introduction of Google Maps, among others.

Seen from this evolutionary perspective, Google thrived because its capabilities were perfect for the market at that time, while rivals were ill-adapted.

3. Data as a byproduct of, and path to, platform monetization

Policymakers should also bear in mind that platforms often must go to great lengths in order to create data about their users—data that these same users often do not know about themselves. Under this framing, data is a byproduct of firms’ activity, rather than an input necessary for rivals to launch a business.

This is especially clear when one looks at the formative years of numerous online platforms. Most of the time, these businesses were started by entrepreneurs who did not own much data but, instead, had a brilliant idea for a service that consumers would value. Even if data ultimately played a role in the monetization of these platforms, it does not appear that it was necessary for their creation.

Data often becomes significant only at a relatively late stage in these businesses’ development. A quick glance at the digital economy is particularly revealing in this regard. Google and Facebook, in particular, both launched their platforms under the assumption that building a successful product would eventually lead to significant revenues.

It took five years from its launch for Facebook to start making a profit. Even at that point, when the platform had 300 million users, it still was not entirely clear whether it would generate most of its income from app sales or online advertisements. It was another three years before Facebook started to cement its position as one of the world’s leading providers of online ads. During this eight-year timespan, Facebook prioritized user growth over the monetization of its platform. The company appears to have concluded (correctly, it turns out) that once its platform attracted enough users, it would surely find a way to make itself highly profitable.

This might explain how Facebook managed to build a highly successful platform despite a large data disadvantage when compared to rivals like MySpace. And Facebook is no outlier. The list of companies that prevailed despite starting with little to no data (and initially lacking a data-dependent monetization strategy) is lengthy. Other examples include TikTok, Airbnb, Amazon, Twitter, PayPal, Snapchat, and Uber.

Those who complain about the unassailable competitive advantages enjoyed by companies with troves of data have it exactly backward. Companies need to innovate to attract consumer data or else consumers will switch to competitors, including both new entrants and established incumbents. As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results. The continued explosion of new products, services, and apps is evidence that data is not a bottleneck to competition, but a spur to drive it.

We’ve Been Here Before: The Microsoft Antitrust Saga

Dystopian and nostalgic discussions concerning the power of successful technology firms are nothing new. Throughout recent history, there have been repeated calls for antitrust authorities to rein in these large companies. These calls for regulation have often led to increased antitrust scrutiny of some form. The Microsoft antitrust cases—which ran from the 1990s to the early 2010s on both sides of the Atlantic—offer a good illustration of the misguided “Antitrust Dystopia.”

In the mid-1990s, Microsoft was one of the most successful and vilified companies in America. After it obtained a commanding position in the desktop operating system market, the company sought to establish a foothold in the burgeoning markets that were developing around the Windows platform (many of which were driven by the emergence of the Internet). These included the Internet browser and media-player markets.

The business tactics employed by Microsoft to execute this transition quickly drew the ire of the press and rival firms, ultimately landing Microsoft in hot water with antitrust authorities on both sides of the Atlantic.

However, as we show in our article, though there were numerous calls for authorities to adopt a precautionary principle-type approach to dealing with Microsoft—and antitrust enforcers were more than receptive to these calls—critics’ worst fears never came to be.

This positive outcome is unlikely to be the result of the antitrust cases that were brought against Microsoft. In other words, the markets in which Microsoft operated seem to have self-corrected (or were subject to competitive constraints that critics misapprehended) and, today, are generally seen as being unproblematic.

This is not to say that antitrust interventions against Microsoft were necessarily misguided. Instead, our critical point is that commentators and antitrust decisionmakers routinely overlooked or misinterpreted the existing and nonstandard market dynamics that ultimately prevented the worst anticompetitive outcomes from materializing. This is supported by several key factors.

First, the remedies that were imposed against Microsoft by antitrust authorities on both sides of the Atlantic were ultimately quite weak. It is thus unlikely that these remedies, by themselves, prevented Microsoft from dominating its competitors in adjacent markets.

Note that, if this assertion is wrong, and antitrust enforcement did indeed prevent Microsoft from dominating online markets, then there is arguably no need to reform the antitrust laws on either side of the Atlantic, nor even to adopt a particularly aggressive enforcement position. The remedies that were imposed on Microsoft were relatively localized. Accordingly, if antitrust enforcement did indeed prevent Microsoft from dominating other online markets, then it is antitrust enforcement’s deterrent effect that is to thank, and not the remedies actually imposed.

Second, Microsoft lost its bottleneck position. One of the biggest changes that took place in the digital space was the emergence of alternative platforms through which consumers could access the Internet. Indeed, as recently as January 2009, roughly 94% of all Internet traffic came from Windows-based computers. Just over a decade later, this number has fallen to about 31%. Android, iOS, and OS X have shares of roughly 41%, 16%, and 7%, respectively. Consumers can thus access the web via numerous platforms. The emergence of these alternatives reduced the extent to which Microsoft could use its bottleneck position to force its services on consumers in online markets.

Third, it is possible that Microsoft’s own behavior ultimately sowed the seeds of its relative demise. In particular, the alleged barriers to entry (rooted in nostalgic market definitions and skeptical analysis of “ununderstandable” conduct) that were essential to establishing the antitrust case against the company may have been pathways to entry as much as barriers.

Consider this error in the Microsoft court’s analysis of entry barriers: the court pointed out that new entrants faced a barrier that Microsoft didn’t face, in that Microsoft didn’t have to contend with a powerful incumbent impeding its entry by tying up application developers.

But while this may be true, Microsoft did face the absence of any developers at all, and had to essentially create (or encourage the creation of) businesses that didn’t previously exist. Microsoft thus created a huge positive externality for new entrants: existing knowledge and organizations devoted to software development, industry knowledge, reputation, awareness, and incentives for schools to offer courses. It could well be that new entrants, in fact, faced lower barriers with respect to app developers than did Microsoft when it entered.

In short, new entrants may face even more welcoming environments because of incumbents. This enabled Microsoft’s rivals to thrive.

Conclusion

Dystopian antitrust prophecies are generally doomed to fail, just like those belonging to the literary world. The reason is simple. While it is easy to identify what makes dominant firms successful in the present (i.e., what enables them to hold off competitors in the short term), it is almost impossible to conceive of the myriad ways in which the market could adapt. Indeed, it is today’s supra-competitive profits that spur the efforts of competitors.

Surmising that the economy will come to be dominated by a small number of successful firms is thus the same as believing that all market participants can be outsmarted by a few successful ones. This might occur in some cases or for some period of time, but as our article argues, it is bound to happen far less often than pessimists fear.

In short, dystopian scholars have not successfully made the case for precautionary antitrust. Indeed, the economic features of data make it highly unlikely that today’s tech giants could anticompetitively maintain their advantage for an indefinite amount of time, much less leverage this advantage in adjacent markets.

With this in mind, there is one dystopian novel that offers a fitting metaphor to end this Article. The Man in the High Castle tells the story of an alternate present, where Axis forces triumphed over the Allies during the Second World War. This turns the dystopia genre on its head: rather than argue that the world is inevitably sliding towards a dark future, The Man in the High Castle posits that the present could be far worse than it is.

In other words, we should not take any of the luxuries we currently enjoy for granted. In the world of antitrust, critics routinely overlook that the emergence of today’s tech industry might have occurred thanks to, and not in spite of, existing antitrust doctrine. Changes to existing antitrust law should thus be dictated by a rigorous assessment of the various costs and benefits they would entail, rather than a litany of hypothetical concerns. The most recent wave of calls for antitrust reform has so far failed to clear this low bar.

Over the past decade and a half, virtually every branch of the federal government has taken steps to weaken the patent system. As reflected in President Joe Biden’s July 2021 executive order, these restraints on patent enforcement are now being coupled with antitrust policies that, in large part, adopt a “big is bad” approach in place of decades of economically grounded case law and agency guidelines.

This policy bundle is nothing new. It largely replicates the innovation policies pursued during the late New Deal and the postwar decades. That historical experience suggests that a “weak-patent/strong-antitrust” approach is likely to encourage neither innovation nor competition.

The Overlooked Shortfalls of New Deal Innovation Policy

Starting in the early 1930s, the U.S. Supreme Court issued a sequence of decisions that raised obstacles to patent enforcement. The Franklin Roosevelt administration sought to take this policy a step further, advocating compulsory licensing for all patents. While Congress did not adopt this proposal, it was partially implemented as a de facto matter through antitrust enforcement. Starting in the early 1940s and continuing throughout the postwar decades, the antitrust agencies secured judicial precedents that treated a broad range of licensing practices as per se illegal. Perhaps most dramatically, the U.S. Justice Department (DOJ) secured more than 100 compulsory licensing orders against some of the nation’s largest companies. 

The rationale behind these policies was straightforward. By compelling access to incumbents’ patented technologies, courts and regulators would lower barriers to entry and competition would intensify. The postwar economy declined to comply with policymakers’ expectations. Implementation of a weak-IP/strong-antitrust innovation policy over the course of four decades yielded the opposite of its intended outcome. 

Market concentration did not diminish, turnover in market leadership was slow, and private research and development (R&D) was confined mostly to the research labs of the largest corporations (who often relied on generous infusions of federal defense funding). These tendencies are illustrated by the dramatically unequal allocation of innovation capital in the postwar economy.  As of the late 1950s, small firms represented approximately 7% of all private U.S. R&D expenditures.  Two decades later, that figure had fallen even further. By the late 1970s, patenting rates had plunged, and entrepreneurship and innovation were in a state of widely lamented decline.

Why Weak IP Raises Entry Costs and Promotes Concentration

The decline in entrepreneurial innovation under a weak-IP regime was not accidental. Rather, this outcome can be derived logically from the economics of information markets.

Without secure IP rights to establish exclusivity, engage securely with business partners, and deter imitators, potential innovator-entrepreneurs had little hope of obtaining funding from investors. In contrast, incumbents could fund R&D internally (or with federal funds that flowed mostly to the largest computing, communications, and aerospace firms) and, even under a weak-IP regime, were protected by difficult-to-match production and distribution efficiencies. As a result, R&D mostly took place inside the closed ecosystems maintained by incumbents such as AT&T, IBM, and GE.

Paradoxically, the antitrust campaign against patent “monopolies” most likely raised entry barriers and promoted industry concentration by removing a critical tool that smaller firms might have used to challenge incumbents that could outperform on every competitive parameter except innovation. While the large corporate labs of the postwar era are rightly credited with technological breakthroughs, incumbents such as AT&T were often slow in transforming breakthroughs in basic research into commercially viable products and services for consumers. Without an immediate competitive threat, there was no rush to do so. 

Back to the Future: Innovation Policy in the New New Deal

Policymakers are now at work reassembling almost the exact same policy bundle that ended in the innovation malaise of the 1970s, accompanied by a similar reliance on public R&D funding disbursed through administrative processes. However well-intentioned, these processes are inherently exposed to political distortions that are absent in an innovation environment that relies mostly on private R&D funding governed by price signals. 

This policy bundle has emerged incrementally since approximately the mid-2000s, through a sequence of complementary actions by every branch of the federal government.

  • In 2011, Congress enacted the America Invents Act, which enables any party to challenge the validity of an issued patent through the U.S. Patent and Trademark Office’s (USPTO) Patent Trial and Appeal Board (PTAB). Since PTAB’s establishment, large information-technology companies that advocated for the act have been among the leading challengers.
  • In May 2021, the Office of the U.S. Trade Representative (USTR) declared its support for a worldwide suspension of IP protections over Covid-19-related innovations (rather than adopting the more nuanced approach of preserving patent protections and expanding funding to accelerate vaccine distribution).  
  • President Biden’s July 2021 executive order states that “the Attorney General and the Secretary of Commerce are encouraged to consider whether to revise their position on the intersection of the intellectual property and antitrust laws, including by considering whether to revise the Policy Statement on Remedies for Standard-Essential Patents Subject to Voluntary F/RAND Commitments.” This suggests that the administration has already determined to retract or significantly modify the 2019 joint policy statement in which the DOJ, USPTO, and the National Institute of Standards and Technology (NIST) had rejected the view that standard-essential patent owners pose a risk of patent holdup high enough to justify special limitations on their enforcement and licensing activities.

The history of U.S. technology markets and policies casts great doubt on the wisdom of this weak-IP policy trajectory. The repeated devaluation of IP rights is likely to be a “lose-lose” approach that does little to promote competition, while endangering the incentive and transactional structures that sustain robust innovation ecosystems. A weak-IP regime is particularly likely to disadvantage smaller firms in biotech, medical devices, and certain information-technology segments that rely on patents to secure funding from venture capital and to partner with larger firms that can accelerate progress toward market release. The BioNTech/Pfizer alliance in the production and distribution of a Covid-19 vaccine illustrates how patents can enable such partnerships to accelerate market release.  

The innovative contribution of BioNTech is hardly a one-off occurrence. The restoration of robust patent protection in the early 1980s was followed by a sharp increase in the percentage of private R&D expenditures attributable to small firms, which jumped from about 5% as of 1980 to 21% by 1992. This contrasts sharply with the unequal allocation of R&D activities during the postwar period.

Remarkably, the resurgence of small-firm innovation following the strong-IP policy shift, starting in the late 20th century, mimics tendencies observed during the late 19th and early-20th centuries, when U.S. courts provided a hospitable venue for patent enforcement; there were few antitrust constraints on licensing activities; and innovation was often led by small firms in partnership with outside investors. This historical pattern, encompassing more than a century of U.S. technology markets, strongly suggests that strengthening IP rights tends to yield a policy “win-win” that bolsters both innovative and competitive intensity. 

An Alternate Path: ‘Bottom-Up’ Innovation Policy

To be clear, the alternative to the policy bundle of weak-IP/strong antitrust does not consist of a simple reversion to blind enforcement of patents and lax administration of the antitrust laws. A nuanced innovation policy would couple modern antitrust’s commitment to evidence-based enforcement—which, in particular cases, supports vigorous intervention—with a renewed commitment to protecting IP rights for innovator-entrepreneurs. That would promote competition from the “bottom up” by bolstering maverick innovators who are well-positioned to challenge (or sometimes partner with) incumbents and maintaining the self-starting engine of creative disruption that has repeatedly driven entrepreneurial innovation environments. Tellingly, technology incumbents have often been among the leading advocates for limiting patent and copyright protections.  

Advocates of a weak-patent/strong-antitrust policy believe it will enhance competitive and innovative intensity in technology markets. History suggests that this combination is likely to produce the opposite outcome.  

Jonathan M. Barnett is the Torrey H. Webb Professor of Law at the University of Southern California, Gould School of Law. This post is based on the author’s recent publications, Innovators, Firms, and Markets: The Organizational Logic of Intellectual Property (Oxford University Press 2021) and “The Great Patent Grab,” in Battles Over Patents: History and the Politics of Innovation (eds. Stephen H. Haber and Naomi R. Lamoreaux, Oxford University Press 2021).

In a recent op-ed, Robert Bork Jr. laments the Biden administration’s drive to jettison the Consumer Welfare Standard that has informed nearly half a century of antitrust jurisprudence. The move can be seen in the near-revolution at the Federal Trade Commission, in the president’s executive order on competition enforcement, and in several of the major antitrust bills currently before Congress.

Bork notes the Competition and Antitrust Law Enforcement Reform Act, introduced by Sen. Amy Klobuchar (D-Minn.), would “outlaw any mergers or acquisitions for the more than 80 large U.S. companies valued over $100 billion.”

Bork is correct that it will be more than 80 companies; indeed, it is likely to be far more. While the Klobuchar bill does not explicitly outlaw such mergers, under certain circumstances it shifts the burden of proof to the merging parties, who must demonstrate that the benefits of the transaction outweigh the potential risks. Under current law, the burden is on the government to demonstrate that the potential costs outweigh the potential benefits.

One of the measure’s specific triggers for this burden-shifting is if the acquiring party has a market capitalization, assets, or annual net revenue of more than $100 billion and seeks a merger or acquisition valued at $50 million or more. About 120 or more U.S. companies satisfy at least one of these conditions. The end of this post provides a list of publicly traded companies, according to Zacks’ stock screener, that would likely be subject to the shift in burden of proof.
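To make the mechanics concrete, the sketch below is a hypothetical illustration in Python of the two triggers described above; it is not language from the bill, and the company figures used in the examples are placeholders rather than real data.

```python
# Hypothetical illustration of the Klobuchar bill's burden-shifting triggers as
# described above; thresholds are in U.S. dollars, and the example figures are
# placeholders, not real company data.

SIZE_THRESHOLD = 100e9   # $100 billion in market cap, assets, or annual net revenue
DEAL_THRESHOLD = 50e6    # $50 million transaction value

def burden_shifts(market_cap, assets, net_revenue, deal_value):
    """Return True if the burden of proof would shift to the merging parties."""
    party_is_large = max(market_cap, assets, net_revenue) > SIZE_THRESHOLD
    deal_is_large = deal_value >= DEAL_THRESHOLD
    return party_is_large and deal_is_large

# A $120B-market-cap acquirer making a $60M acquisition would trigger the shift;
# a firm under $100B on all three measures making the same deal would not.
print(burden_shifts(market_cap=120e9, assets=20e9, net_revenue=15e9, deal_value=60e6))  # True
print(burden_shifts(market_cap=90e9, assets=50e9, net_revenue=80e9, deal_value=60e6))   # False
```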

If the goal is to go after Big Tech, the Klobuchar bill hits the mark. All of the FAANG companies—Facebook, Amazon, Apple, Netflix, and Alphabet (formerly known as Google)—satisfy one or more of the criteria. So do Microsoft and PayPal.

But even some smaller tech firms will be subject to the shift in burden of proof. Zoom and Square have market caps that would trigger the shift under the Klobuchar bill, and Snap is hovering around $100 billion in market cap. Twitter and eBay, however, are well under all of the thresholds. Likewise, privately owned Advance Publications, owner of Reddit, would likely fall short of all of the triggers.

Snapchat has a little more than 300 million monthly active users. Twitter and Reddit each have about 330 million monthly active users. Nevertheless, under the Klobuchar bill, Snapchat is presumed to have more market power than either Twitter or Reddit, simply because the market assigns a higher valuation to Snap.

But this bill is about more than Big Tech. Tesla, which sold its first car only 13 years ago, is now considered big enough that it will face the same antitrust scrutiny as the Big 3 automakers. Walmart, Costco, and Kroger would be subject to the shifted burden of proof, while Safeway and Publix would escape such scrutiny. An acquisition by U.S.-based Nike would be put under the microscope, but a similar acquisition by Germany’s Adidas would not fall under the Klobuchar bill’s thresholds.

Tesla accounts for less than 2% of the vehicles sold in the United States. I have no idea what Walmart, Costco, Kroger, or Nike’s market share is, or even what constitutes “the” market these companies compete in. What we do know is that the U.S. Department of Justice and Federal Trade Commission excel at narrowly crafting market definitions so that just about any company can be defined as dominant.

So much of the recent interest in antitrust has focused on Big Tech. But even the biggest of Big Tech firms operate in dynamic and competitive markets. None of my four children use Facebook or Twitter. My wife and I don’t use Snapchat. We all use Netflix, but we also use Hulu, Disney+, HBO Max, YouTube, and Amazon Prime Video. None of these services have a monopoly on our eyeballs, our attention, or our pocketbooks.

The antitrust bills currently working their way through Congress abandon the long-standing balancing of pro- versus anti-competitive effects of mergers in favor of a “big is bad” approach. While the Klobuchar bill appears to provide clear guidance on the thresholds triggering a shift in the burden of proof, the arbitrary nature of the thresholds will result in arbitrary application of the burden of proof. If the bill passes, we will soon be faced with a case in which two firms that differ only in market cap, assets, or sales are subject to very different antitrust scrutiny, resulting in regulatory chaos.

Publicly traded companies with more than $100 billion in market capitalization

3M, Abbott Laboratories, AbbVie, Adobe Inc., Advanced Micro Devices, Alphabet Inc., Amazon, American Express, American Tower, Amgen, Apple Inc., Applied Materials, AT&T, Bank of America, Berkshire Hathaway, BlackRock, Boeing, Bristol Myers Squibb, Broadcom Inc., Caterpillar Inc., Charles Schwab Corp., Charter Communications, Chevron Corp., Cisco Systems, Citigroup, Comcast, Costco, CVS Health, Danaher Corp., Deere & Co., Eli Lilly and Co., ExxonMobil, Facebook Inc., General Electric Co., Goldman Sachs, Honeywell, IBM, Intel, Intuit, Intuitive Surgical, Johnson & Johnson, JPMorgan Chase, Lockheed Martin, Lowe’s, Mastercard, McDonald’s, Medtronic, Merck & Co., Microsoft, Morgan Stanley, Netflix, NextEra Energy, Nike Inc., Nvidia, Oracle Corp., PayPal, PepsiCo, Pfizer, Philip Morris International, Procter & Gamble, Qualcomm, Raytheon Technologies, Salesforce, ServiceNow, Square Inc., Starbucks, Target Corp., Tesla Inc., Texas Instruments, The Coca-Cola Co., The Estée Lauder Cos., The Home Depot, The Walt Disney Co., Thermo Fisher Scientific, T-Mobile US, Union Pacific Corp., United Parcel Service, UnitedHealth Group, Verizon Communications, Visa Inc., Walmart, Wells Fargo, Zoom Video Communications

Publicly traded companies with more than $100 billion in current assets

Ally Financial, American International Group, BNY Mellon, Capital One, Citizens Financial Group, Fannie Mae, Fifth Third Bank, First Republic Bank, Ford Motor Co., Freddie Mac, KeyBank, M&T Bank, Northern Trust, PNC Financial Services, Regions Financial Corp., State Street Corp., Truist Financial, U.S. Bancorp

Publicly traded companies with more than $100 billion in sales

AmerisourceBergen, Anthem, Cardinal Health, Centene Corp., Cigna, Dell Technologies, General Motors, Kroger, McKesson Corp., Walgreens Boots Alliance

The European Commission recently issued a formal Statement of Objections (SO) in which it charges Apple with breaching EU antitrust rules. In a nutshell, the commission argues that Apple prevents app developers—in this case, Spotify—from using in-app purchase systems (IAPs) other than Apple’s own, and from steering users towards cheaper payment options on other sites. This, the commission says, results in higher prices for consumers in the audio streaming and ebook/audiobook markets.

More broadly, the commission claims that Apple’s App Store rules may distort competition in markets where Apple competes with rival developers (such as how Apple Music competes with Spotify). This explains why the anticompetitive concerns raised by Spotify regarding the Apple App Store rules have now expanded to Apple’s e-books, audiobooks and mobile payments platforms.

However, underlying market realities cast doubt on the commission’s assessment. Indeed, competition from Google Play and other distribution mediums makes it difficult to state unequivocally that the relevant market should be limited to Apple products. Likewise, the conduct under investigation arguably solves several problems relating to platform dynamics, and consumers’ privacy and security.

Should the relevant market be narrowed to iOS?

An important first question is whether there is a distinct, antitrust-relevant market for “music streaming apps distributed through the Apple App Store,” as the EC posits.

This market definition is surprising, given that it is considerably narrower than the one suggested by even the most enforcement-minded scholars. For instance, Damien Geradin and Dimitrios Katsifis—lawyers for app developers opposed to Apple—define the market as “that of app distribution on iOS devices, a two-sided transaction market on which Apple has a de facto monopoly.” Similarly, a report by the Dutch competition authority declared that the relevant market was limited to the iOS App Store, due to the lack of interoperability with other systems.

The commission’s decisional practice has been anything but constant in this space. In the Apple/Shazam and Apple/Beats cases, it did not place competing mobile operating systems and app stores in separate relevant markets. Conversely, in the Google Android decision, the commission found that the Android OS and Apple’s iOS, including Google Play and Apple’s App Store, did not compete in the same relevant market. The Spotify SO seems to advocate for this definition, narrowing it even further to music streaming services.

However, this narrow definition raises several questions. Market definition is ultimately about identifying the competitive constraints that the firm under investigation faces. As Gregory Werden puts it: “the relevant market in an antitrust case […] identifies the competitive process alleged to be harmed.”

In that regard, there is clearly some competition between Apple’s App Store, Google Play and other app stores (whether this is sufficient to place them in the same relevant market is an empirical question).

This view is supported by the vast number of online posts comparing Android and Apple and advising consumers on their purchasing options. Moreover, the growth of high-end Android devices that compete more directly with the iPhone has reinforced competition between the two firms. Likewise, Apple has moved down the value chain; the iPhone SE, priced at $399, competes with other medium-range Android devices.

App developers have also suggested they view Apple and Android as alternatives. They take into account technical differences to decide between the two, meaning that these two platforms compete with each other for developers.

All of this suggests that the App Store may be part of a wider market for the distribution of apps and services, one that includes Google Play and other app stores—though this is ultimately an empirical question (i.e., it depends on the degree of competition between the two platforms).

If the market were defined this way, Apple would not even be close to holding a dominant position—a prerequisite for European competition intervention. Indeed, Apple only sold 27.43% of smartphones in March 2021. Similarly, only 30.41% of smartphones in use run iOS, as of March 2021. This is well below the lowest market share on which a European abuse-of-dominance finding has been based—39.7% in the British Airways decision.

The sense that Apple and Android compete for users and developers is reinforced by recent price movements. Apple dropped its App Store commission fees from 30% to 15% in November 2020 and Google followed suit in March 2021. This conduct is consistent with at least some degree of competition between the platforms. It is worth noting that other firms, notably Microsoft, have so far declined to follow suit (except for gaming apps).

Barring further evidence, neither Apple’s market share nor its behavior appear consistent with the commission’s narrow market definition.

Are Apple’s IAP system rules and anti-steering provisions abusive?

The commission’s case rests on the idea that Apple leverages its IAP system to raise the costs of rival app developers:

 “Apple’s rules distort competition in the market for music streaming services by raising the costs of competing music streaming app developers. This in turn leads to higher prices for consumers for their in-app music subscriptions on iOS devices. In addition, Apple becomes the intermediary for all IAP transactions and takes over the billing relationship, as well as related communications for competitors.”

However, expropriating rents from these developers is not nearly as attractive as it might seem. The report of the Dutch competition authority notes that “attracting and maintaining third-party developers that increase the value of the ecosystem” is essential for Apple. Indeed, users join a specific platform because it provides them with a wide range of applications they can use on their devices. And the opposite applies to developers. Hence, the loss of users on either or both sides reduces the value provided by the Apple App Store. Following this logic, it would make no sense for Apple to systematically expropriate developers. This might partly explain why Apple’s fees are only 15% to 30%, since in principle they could be much higher.

It is also worth noting that Apple’s curated App Store and IAP have several redeeming virtues. Apple offers “a highly curated App Store where every app is reviewed by experts and an editorial team helps users discover new apps every day.”  While this has arguably turned the App Store into a relatively closed platform, it provides users with the assurance that the apps they find there will meet a standard of security and trustworthiness.

As noted by the Dutch competition authority, “one of the reasons why the App Store is highly valued is because of the strict review process. Complaints about malware spread via an app downloaded in the App Store are rare.” Apple provides users with a special degree of privacy and security. Indeed, Apple stopped more than $1.5 billion in potentially fraudulent transactions in 2020, proving that the security protocols are not only necessary, but also effective. In this sense, the App Store Review Guidelines are considered the first line of defense against fraud and privacy breaches.

It is also worth noting that Apple only charges a nominal fee for iOS developer kits and no fees for in-app advertising. The IAP is thus essential for Apple to monetize the platform and to cover the costs associated with running the platform (note that Apple does make money on device sales, but that revenue is likely constrained by competition between itself and Android). When someone downloads Spotify from the App Store, Apple does not get paid, but Spotify does get a new client. Thus, while independent developers bear the costs of the app fees, Apple bears the costs and risks of running the platform itself.

For instance, Apple’s App Store Team is divided into smaller teams: the Editorial Design team, the Business Operations team, and the Engineering R&D team. These teams each have employees, budgets, and resources for which Apple needs to pay. If the revenues stopped, one can assume that Apple would have less incentive to sustain all these teams that preserve the App Store’s quality, security, and privacy parameters.

Indeed, the IAP system itself provides value to the Apple App Store. Instead of charging all of the apps it distributes, Apple takes a share of the income from some of them. As a result, large developers with in-app sales contribute to the maintenance of the platform, while smaller ones are still offered to consumers without having to contribute economically. This boosts the App Store’s diversity and supply of digital goods and services.

If Apple were forced to adopt another system, it could start charging higher prices for access to its interface and tools, potentially discriminating against smaller developers. Or Apple could raise the prices of its devices, imposing higher costs on consumers who do not purchase digital goods. There is thus no apparent alternative to the current IAP that serves the App Store’s goals in the same way.

As the Apple Review Guidelines emphasize, “for everything else there is always the open Internet.” Netflix and Spotify have dropped in-app subscription options from their apps, and they are still among the most-downloaded apps on iOS. Using the IAP system is therefore not a prerequisite for success in Apple’s ecosystem, and developers remain free to sell their services outside the App Store.

Conclusion

The commission’s case against Apple is based on shaky foundations. Not only is the market definition extremely narrow—ignoring competition from Android, among others—but the behavior challenged by the commission has a clear efficiency-enhancing rationale. Of course, both of these critiques ultimately boil down to empirical questions that the commission will have to overcome before it reaches a final decision. In the meantime, the jury is out.

Politico has released a cache of confidential Federal Trade Commission (FTC) documents in connection with a series of articles on the commission’s antitrust probe into Google Search a decade ago. The headline of the first piece in the series argues the FTC “fumbled the future” by failing to follow through on staff recommendations to pursue antitrust intervention against the company. 

But while the leaked documents shed interesting light on the inner workings of the FTC, they do very little to substantiate the case that the FTC dropped the ball when the commissioners voted unanimously not to bring an action against Google.

Drawn primarily from memos by the FTC’s lawyers, the Politico report purports to uncover key revelations that undermine the FTC’s decision not to sue Google. None of the revelations, however, provide evidence that Google’s behavior actually harmed consumers.

The report’s overriding claim—and the one most consistently forwarded by antitrust activists on Twitter—is that FTC commissioners wrongly sided with the agency’s economists (who cautioned against intervention) rather than its lawyers (who tenuously recommended very limited intervention). 

Indeed, the overarching narrative is that the lawyers knew what was coming and the economists took wildly inaccurate positions that turned out to be completely off the mark:

But the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed:

— They saw only “limited potential for growth” in ads that track users across the web — now the backbone of Google parent company Alphabet’s $182.5 billion in annual revenue.

— They expected consumers to continue relying mainly on computers to search for information. Today, about 62 percent of those queries take place on mobile phones and tablets, nearly all of which use Google’s search engine as the default.

— They thought rivals like Microsoft, Mozilla or Amazon would offer viable competition to Google in the market for the software that runs smartphones. Instead, nearly all U.S. smartphones run on Google’s Android and Apple’s iOS.

— They underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic.

The report thus asserts that:

The agency ultimately voted against taking action, saying changes Google made to its search algorithm gave consumers better results and therefore didn’t unfairly harm competitors.

That conclusion underplays what the FTC’s staff found during the probe. In 312 pages of documents, the vast majority never publicly released, staffers outlined evidence that Google had taken numerous steps to ensure it would continue to dominate the market — including emerging arenas such as mobile search and targeted advertising. [EMPHASIS ADDED]

What really emerges from the leaked memos, however, is analysis by both the FTC’s lawyers and economists infused with a healthy dose of humility. There were strong political incentives to bring a case. As one of us noted upon the FTC’s closing of the investigation: “It’s hard to imagine an agency under more pressure, from more quarters (including the Hill), to bring a case around search.” Yet FTC staff and commissioners resisted that pressure, because prediction is hard. 

Ironically, the very prediction errors that the agency’s staff cautioned against are now being held against them. Yet the claims that these errors (especially the economists’) systematically cut in one direction (i.e., against enforcement) and that all of their predictions were wrong are both wide of the mark. 

Decisions Under Uncertainty

In seeking to make an example out of the FTC economists’ inaccurate predictions, critics ignore that antitrust investigations in dynamic markets always involve a tremendous amount of uncertainty; false predictions are the norm. Accordingly, the key challenge for policymakers is not so much to predict correctly, but to minimize the impact of incorrect predictions.

Seen in this light, the FTC economists’ memo is far from the laissez-faire manifesto that critics make it out to be. Instead, it shows agency officials wrestling with uncertain market outcomes, and choosing a course of action under the assumption the predictions they make might indeed be wrong. 

Consider the following passage from FTC economist Ken Heyer’s memo:

The great American philosopher Yogi Berra once famously remarked “Predicting is difficult, especially about the future.” How right he was. And yet predicting, and making decisions based on those predictions, is what we are charged with doing. Ignoring the potential problem is not an option. So I will be reasonably clear about my own tentative conclusions and recommendation, recognizing that reasonable people, perhaps applying a somewhat different standard, may disagree. My recommendation derives from my read of the available evidence, combined with the standard I personally find appropriate to apply to Commission intervention. [EMPHASIS ADDED]

In other words, contrary to what many critics have claimed, it simply is not the case that the FTC’s economists based their recommendations on bullish predictions about the future that ultimately failed to transpire. Instead, they merely recognized that, in a dynamic and unpredictable environment, antitrust intervention requires both a clear-cut theory of anticompetitive harm and a reasonable probability that remedies can improve consumer welfare. According to the economists, those conditions were absent with respect to Google Search.

Perhaps more importantly, it is worth asking why the economists’ erroneous predictions matter at all. Do critics believe that developments the economists missed warrant a different normative stance today?

In that respect, it is worth noting that the economists’ skepticism appeared to have rested first and foremost on the speculative nature of the harms alleged and the difficulty associated with designing appropriate remedies. And yet, if anything, these two concerns appear even more salient today. 

Indeed, the remedies imposed against Google in the EU have not delivered the outcomes that enforcers expected (here and here). This could either be because the remedies were insufficient or because Google’s market position was not due to anticompetitive conduct. Similarly, there is still no convincing economic theory or empirical research to support the notion that exclusive pre-installation and self-preferencing by incumbents harm consumers, and a great deal of reason to think they benefit them (see, e.g., our discussions of the issue here and here). 

Against this backdrop, criticism of the FTC economists appears to be driven more by a prior assumption that intervention is necessary—and that it was and is disingenuous to think otherwise—than evidence that erroneous predictions materially affected the outcome of the proceedings.

To take one example, the fact that ad tracking grew faster than the FTC economists believed it would is no less consistent with vigorous competition—and Google providing a superior product—than with anticompetitive conduct on Google’s part. The same applies to the growth of mobile operating systems. Ditto the fact that no rival has managed to dislodge Google in its most important markets. 

In short, not only were the economist memos informed by the very prediction difficulties that critics are now pointing to, but critics have not shown that any of the staff’s (inevitably) faulty predictions warranted a different normative outcome.

Putting Erroneous Predictions in Context

So what were these faulty predictions, and how important were they? Politico asserts that “the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed,” tying this to the FTC’s failure to intervene against Google over “tactics that European regulators and the U.S. Justice Department would later label antitrust violations.” The clear message is that the current actions are presumptively valid, and that the FTC’s economists thwarted earlier intervention based on faulty analysis.

But it is far from clear that these faulty predictions would have justified taking a tougher stance against Google. One key question for antitrust authorities is whether they can be reasonably certain that more efficient competitors will be unable to dislodge an incumbent. This assessment is necessarily forward-looking. Framed this way, greater market uncertainty (for instance, because policymakers are dealing with dynamic markets) usually cuts against antitrust intervention.

This does not entirely absolve the FTC economists who made the faulty predictions. But it does suggest the right question is not whether the economists made mistakes, but whether virtually everyone did so. The latter would be evidence of uncertainty, and thus weigh against antitrust intervention.

In that respect, it is worth noting that the staff who recommended that the FTC intervene also misjudged the future of digital markets. For example, while Politico surmises that the FTC “underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic,” there is a case to be made that the FTC overestimated this power. If anything, Google’s continued growth has opened new niches in the online advertising space.

Pinterest provides a fitting example; despite relying heavily on Google for traffic, its ad-funded service has witnessed significant growth. The same is true of other vertical search engines like Airbnb, Booking.com, and Zillow. While we cannot know the counterfactual, the vertical search industry has certainly not been decimated by Google’s “monopoly”; quite the opposite. Unsurprisingly, this has coincided with a significant decrease in the cost of online advertising, and the growth of online advertising relative to other forms.

Politico asserts not only that the economists’ market share and market power calculations were wrong, but that the lawyers knew better:

The economists, relying on data from the market analytics firm Comscore, found that Google had only limited impact. They estimated that between 10 and 20 percent of traffic to those types of sites generally came from the search engine.

FTC attorneys, though, used numbers provided by Yelp and found that 92 percent of users visited local review sites from Google. For shopping sites like eBay and TheFind, the referral rate from Google was between 67 and 73 percent.

This compares apples and oranges, or maybe oranges and grapefruit. The economists’ data, from Comscore, applied to vertical search overall. They explicitly noted that shares for particular sites could be much higher or lower: for comparison shopping, for example, “ranging from 56% to less than 10%.” This, of course, highlights a problem with the data provided by Yelp, et al.: it concerns only the websites of companies complaining about Google, not the overall flow of traffic for vertical search.

But the more important point is that none of the data discussed in the memos represents the overall flow of traffic for vertical search. Take Yelp, for example. According to the lawyers’ memo, 92 percent of Yelp searches were referred from Google. Only, that’s not true. We know it’s not true because, as Yelp CEO Jeremy Stoppelman pointed out around this time in Yelp’s 2012 Q2 earnings call:

When you consider that 40% of our searches come from mobile apps, there is quite a bit of un-monetized mobile traffic that we expect to unlock in the near future.

The numbers being analyzed by the FTC staff were apparently limited to referrals to Yelp’s website from browsers. But is there any reason to think that is the relevant market, or the relevant measure of customer access? Certainly there is nothing in the staff memos to suggest they considered the full scope of the market very carefully here. Indeed, the footnote in the lawyers’ memo presenting the traffic data is offered in support of this claim:

Vertical websites, such as comparison shopping and local websites, are heavily dependent on Google’s web search results to reach users. Thus, Google is in the unique position of being able to “make or break any web-based business.”

It’s plausible that vertical search traffic is “heavily dependent” on Google Search, but the numbers offered in support of that simply ignore the (then) 40 percent of traffic that Yelp acquired through its own mobile app, with no Google involvement at all. In any case, it is also notable that, while there are still somewhat fewer app users than web users (although the number has consistently increased), Yelp’s app users view significantly more pages than its website users do — 10 times as many in 2015, for example.
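A rough back-of-the-envelope calculation (our own, based on the figures above, not on anything in the memos) illustrates the point: if roughly 40 percent of Yelp’s searches came through its app, then even a 92 percent Google referral share for the remaining browser traffic would translate into only about 0.92 × 0.60 ≈ 55 percent of Yelp’s overall search traffic coming from Google.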

Also noteworthy is that, for whatever speculative harm Google might be able to visit on the company, at the time of the FTC’s analysis Yelp’s local ad revenue was consistently increasing — by 89% in Q3 2012. And that was without any ad revenue coming from its app (display ads arrived on Yelp’s mobile app in Q1 2013, a few months after the staff memos were written and just after the FTC closed its Google Search investigation). 

In short, the search-engine industry is extremely dynamic and unpredictable. Contrary to what many have surmised from the FTC staff memo leaks, this cuts against antitrust intervention, not in favor of it.

The FTC Lawyers’ Weak Case for Prosecuting Google

At the same time, although not discussed by Politico, the lawyers’ memo also contains errors, suggesting that arguments for intervention were also (inevitably) subject to erroneous prediction.

Among other things, the FTC attorneys’ memo argued that large upfront investments were required to develop cutting-edge algorithms, and that these effectively shielded Google from competition. The memo cites the following as a barrier to entry:

A search engine requires algorithmic technology that enables it to search the Internet, retrieve and organize information, index billions of regularly changing web pages, and return relevant results instantaneously that satisfy the consumer’s inquiry. Developing such algorithms requires highly specialized personnel with high levels of training and knowledge in engineering, economics, mathematics, sciences, and statistical analysis.

If there are barriers to entry in the search-engine industry, algorithms do not seem to be the source. While their market shares may be smaller than Google’s, rival search engines like DuckDuckGo and Bing have been able to enter and gain traction; it is difficult to say that algorithmic technology has proven a barrier to entry. It may be hard to do well, but it certainly has not proved an impediment to new firms entering and developing workable and successful products. Indeed, some extremely successful companies have entered into similar advertising markets on the backs of complex algorithms, notably Instagram, Snapchat, and TikTok. All of these compete with Google for advertising dollars.

The FTC’s legal staff also failed to see that Google would face serious competition in the rapidly growing voice assistant market. In other words, even its search-engine “moat” is far less impregnable than it might at first appear.

Moreover, as Ben Thompson argues in his Stratechery newsletter: 

The Staff memo is completely wrong too, at least in terms of the potential for their proposed remedies to lead to any real change in today’s market. This gets back to why the fundamental premise of the Politico article, along with much of the antitrust chatter in Washington, misses the point: Google is dominant because consumers like it.

This difficulty was deftly highlighted by Heyer’s memo:

If the perceived problems here can be solved only through a draconian remedy of this sort, or perhaps through a remedy that eliminates Google’s legitimately obtained market power (and thus its ability to “do evil”), I believe the remedy would be disproportionate to the violation and that its costs would likely exceed its benefits. Conversely, if a remedy well short of this seems likely to prove ineffective, a remedy would be undesirable for that reason. In brief, I do not see a feasible remedy for the vertical conduct that would be both appropriate and effective, and which would not also be very costly to implement and to police. [EMPHASIS ADDED]

Of course, we now know that this turned out to be a huge issue with the EU’s competition cases against Google. The remedies in both the EU’s Google Shopping and Android decisions were severely criticized by rival firms and consumer-defense organizations (here and here), but were ultimately upheld, in part because even the European Commission likely saw more forceful alternatives as disproportionate.

And in the few places where the legal staff concluded that Google’s conduct may have caused harm, there is good reason to think that their analysis was flawed.

Google’s ‘revenue-sharing’ agreements

It should be noted that neither the lawyers nor the economists at the FTC were particularly bullish on bringing suit against Google. In most areas of the investigation, neither recommended that the commission pursue a case. But one of the most interesting revelations from the recent leaks is that FTC lawyers did advise the commission’s leadership to sue Google over revenue-sharing agreements that called for it to pay Apple and other carriers and manufacturers to pre-install its search bar on mobile devices:

FTC staff urged the agency’s five commissioners to sue Google for signing exclusive contracts with Apple and the major wireless carriers that made sure the company’s search engine came pre-installed on smartphones.

The lawyers’ stance is surprising, and, despite actions subsequently brought by the EU and DOJ on similar claims, a difficult one to countenance. 

To a first approximation, this behavior is precisely what antitrust law seeks to promote: we want companies to compete aggressively to attract consumers. This conclusion is in no way altered when competition is “for the market” (in this case, firms bidding for exclusive placement of their search engines) rather than “in the market” (i.e., equally placed search engines competing for eyeballs).

Competition for exclusive placement has several important benefits. For a start, revenue-sharing agreements effectively subsidize consumers’ mobile device purchases. As Brian Albrecht aptly puts it:

This payment from Google means that Apple can lower its price to better compete for consumers. This is standard; some of the payment from Google to Apple will be passed through to consumers in the form of lower prices.

This finding is not new. For instance, Ronald Coase famously argued that the Federal Communications Commission (FCC) was wrong to ban the broadcasting industry’s equivalent of revenue-sharing agreements, so-called payola:

[I]f the playing of a record by a radio station increases the sales of that record, it is both natural and desirable that there should be a charge for this. If this is not done by the station and payola is not allowed, it is inevitable that more resources will be employed in the production and distribution of records, without any gain to consumers, with the result that the real income of the community will tend to decline. In addition, the prohibition of payola may result in worse record programs, will tend to lessen competition, and will involve additional expenditures for regulation. The gain which the ban is thought to bring is to make the purchasing decisions of record buyers more efficient by eliminating “deception.” It seems improbable to me that this problematical gain will offset the undoubted losses which flow from the ban on Payola.

Applying this logic to Google Search, it is clear that a ban on revenue-sharing agreements would merely lead both Google and its competitors to attract consumers via alternative means. For Google, this might involve “complete” vertical integration into the mobile phone market, rather than the open-licensing model that underpins the Android ecosystem. Valuable specialization may be lost in the process.

Moreover, from Apple’s standpoint, Google’s revenue-sharing agreements are profitable only to the extent that consumers actually like Google’s products. If it turns out they don’t, Google’s payments to Apple may be outweighed by lower iPhone sales. It is thus unlikely that these agreements significantly undermined users’ experience. To the contrary, Apple’s testimony before the European Commission suggests that “exclusive” placement of Google’s search engine was mostly driven by consumer preferences (as the FTC economists’ memo points out):

Apple would not offer simultaneous installation of competing search or mapping applications. Apple’s focus is offering its customers the best products out of the box while allowing them to make choices after purchase. In many countries, Google offers the best product or service … Apple believes that offering additional search boxes on its web browsing software would confuse users and detract from Safari’s aesthetic. Too many choices lead to consumer confusion and greatly affect the ‘out of the box’ experience of Apple products.

Similarly, Kevin Murphy and Benjamin Klein have shown that exclusive contracts intensify competition for distribution. In other words, absent theories of platform envelopment that are arguably inapplicable here, competition for exclusive placement would lead competing search engines to up their bids, ultimately lowering the price of mobile devices for consumers.

Indeed, this revenue-sharing model was likely essential to spur the development of Android in the first place. Without this prominent placement of Google Search on Android devices (notably thanks to revenue-sharing agreements with original equipment manufacturers), Google would likely have been unable to monetize the investment it made in the open source—and thus freely distributed—Android operating system. 

In short, Politico and the FTC legal staff do little to show that Google’s revenue-sharing payments excluded rivals that were, in fact, as efficient. In other words, Bing and Yahoo’s failure to gain traction may simply be the result of inferior products and cost structures. Critics thus fail to show that Google’s behavior harmed consumers, which is the touchstone of antitrust enforcement.

Self-preferencing

Another finding critics claim as important is that FTC leadership declined to bring suit against Google for preferencing its own vertical search services (this information had already been partially leaked by the Wall Street Journal in 2015). Politico’s framing implies this was a mistake:

When Google adopted one algorithm change in 2011, rival sites saw significant drops in traffic. Amazon told the FTC that it saw a 35 percent drop in traffic from the comparison-shopping sites that used to send it customers

The focus on this claim is somewhat surprising. Even the leaked FTC legal staff memo found this theory of harm had little chance of standing up in court:

Staff has investigated whether Google has unlawfully preferenced its own content over that of rivals, while simultaneously demoting rival websites…. 

…Although it is a close call, we do not recommend that the Commission proceed on this cause of action because the case law is not favorable to our theory, which is premised on anticompetitive product design, and in any event, Google’s efficiency justifications are strong. Most importantly, Google can legitimately claim that at least part of the conduct at issue improves its product and benefits users. [EMPHASIS ADDED]

More importantly, as one of us has argued elsewhere, the underlying problem lies not with Google, but with a standard asset-specificity trap:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control…. 

…It was entirely predictable, and should have been expected, that Google’s algorithm would evolve. It was also entirely predictable that it would evolve in ways that could diminish or even tank Foundem’s traffic. As one online marketing/SEO expert puts it: On average, Google makes about 500 algorithm changes per year. 500!….

…In the absence of an explicit agreement, should Google be required to make decisions that protect a dependent company’s “asset-specific” investments, thus encouraging others to take the same, excessive risk? 

Even if consumers happily visited rival websites when they were higher-ranked and traffic subsequently plummeted when Google updated its algorithm, that drop in traffic does not amount to evidence of misconduct. To hold otherwise would be to grant these rivals a virtual entitlement to the state of affairs that exists at any given point in time. 

Indeed, there is good reason to believe Google’s decision to favor its own content over that of other sites is procompetitive. Beyond determining and ensuring relevance, Google surely has the prerogative to compete vigorously and decide how to design its products to keep up with a changing market. In this case, that means designing, developing, and offering its own content in ways that partially displace the original “ten blue links” design of its search results page and instead offer its own answers to users’ queries.

Competitor Harm Is Not an Indicator of the Need for Intervention

Some of the other information revealed by the leak is even more tangential, such as that the FTC ignored complaints from Google’s rivals:

Amazon and Facebook privately complained to the FTC about Google’s conduct, saying their business suffered because of the company’s search bias, scraping of content from rival sites and restrictions on advertisers’ use of competing search engines. 

Amazon said it was so concerned about the prospect of Google monopolizing the search advertising business that it willingly sacrificed revenue by making ad deals aimed at keeping Microsoft’s Bing and Yahoo’s search engine afloat.

But complaints from rivals are at least as likely to stem from vigorous competition as from anticompetitive exclusion. This goes to a core principle of antitrust enforcement: antitrust law seeks to protect competition and consumer welfare, not rivals. Competition will always lead to winners and losers. Antitrust law protects this process and (at least theoretically) ensures that rivals cannot manipulate enforcers to safeguard their economic rents. 

This explains why Frank Easterbrook—in his seminal work on “The Limits of Antitrust”—argued that enforcers should be highly suspicious of complaints lodged by rivals:

Antitrust litigation is attractive as a method of raising rivals’ costs because of the asymmetrical structure of incentives…. 

…One line worth drawing is between suits by rivals and suits by consumers. Business rivals have an interest in higher prices, while consumers seek lower prices. Business rivals seek to raise the costs of production, while consumers have the opposite interest…. 

…They [antitrust enforcers] therefore should treat suits by horizontal competitors with the utmost suspicion. They should dismiss outright some categories of litigation between rivals and subject all such suits to additional scrutiny.

Google’s competitors spent millions pressuring the FTC to bring a case against the company. But why should it be a failing for the FTC to resist such pressure? Indeed, as then-commissioner Tom Rosch admonished in an interview following the closing of the case:

They [Google’s competitors] can darn well bring [a case] as a private antitrust action if they think their ox is being gored instead of free-riding on the government to achieve the same result.

Not that they would likely win such a case. Google’s introduction of specialized shopping results (via the Google Shopping box) likely enabled several retailers to bypass the Amazon platform, thus increasing competition in the retail industry. Although this may have temporarily reduced Amazon’s traffic and revenue (Amazon’s sales have grown dramatically since then), it is exactly the outcome that antitrust laws are designed to protect.

Conclusion

When all is said and done, Politico’s revelations provide a rarely glimpsed look into the complex dynamics within the FTC, which many wrongly imagine to be a monolithic agency. Put simply, the FTC’s commissioners, lawyers, and economists often disagree vehemently about the appropriate course of conduct. This is a good thing. As in many other walks of life, having a market for ideas is a sure way to foster sound decision making.

But in the final analysis, what the revelations do not show is that the FTC’s market for ideas failed consumers a decade ago when it declined to bring an antitrust suit against Google. They thus do little to cement the case for antitrust intervention—whether a decade ago, or today.

In what has become regularly scheduled programming on Capitol Hill, Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey, and Google CEO Sundar Pichai will be subject to yet another round of congressional grilling—this time, about the platforms’ content-moderation policies—during a March 25 joint hearing of two subcommittees of the House Energy and Commerce Committee.

The stated purpose of this latest bit of political theatre is to explore, as made explicit in the hearing’s title, “social media’s role in promoting extremism and misinformation.” Specific topics are expected to include proposed changes to Section 230 of the Communications Decency Act, heightened scrutiny by the Federal Trade Commission, and misinformation about COVID-19—the subject of new legislation introduced by Rep. Jennifer Wexton (D-Va.) and Sen. Mazie Hirono (D-Hawaii).

But while many in the Democratic majority argue that social media companies have not done enough to moderate misinformation or hate speech, it is a problem with no realistic legal fix. Any attempt to mandate removal of speech on grounds that it is misinformation or hate speech, either directly or indirectly, would run afoul of the First Amendment.

Much of the recent focus has been on misinformation spread on social media about the 2020 election and the COVID-19 pandemic. The memorandum for the March 25 hearing sums it up:

Facebook, Google, and Twitter have long come under fire for their role in the dissemination and amplification of misinformation and extremist content. For instance, since the beginning of the coronavirus disease of 2019 (COVID-19) pandemic, all three platforms have spread substantial amounts of misinformation about COVID-19. At the outset of the COVID-19 pandemic, disinformation regarding the severity of the virus and the effectiveness of alleged cures for COVID-19 was widespread. More recently, COVID-19 disinformation has misrepresented the safety and efficacy of COVID-19 vaccines.

Facebook, Google, and Twitter have also been distributors for years of election disinformation that appeared to be intended either to improperly influence or undermine the outcomes of free and fair elections. During the November 2016 election, social media platforms were used by foreign governments to disseminate information to manipulate public opinion. This trend continued during and after the November 2020 election, often fomented by domestic actors, with rampant disinformation about voter fraud, defective voting machines, and premature declarations of victory.

It is true that, despite social media companies’ efforts to label and remove false content and bar some of the biggest purveyors, there remains a considerable volume of false information on social media. But U.S. Supreme Court precedent consistently has limited government regulation of false speech to distinct categories like defamation, perjury, and fraud.

The Case of Stolen Valor

The court’s 2012 decision in United States v. Alvarez struck down as unconstitutional the Stolen Valor Act of 2005, which made it a federal crime to falsely claim to have earned a military medal. A four-justice plurality opinion written by Justice Anthony Kennedy and a two-justice concurrence both agreed that a statement being false did not, by itself, exclude it from First Amendment protection.

Kennedy’s opinion noted that while the government may impose penalties for false speech connected with the legal process (perjury or impersonating a government official), with receiving a benefit (fraud), or with harming someone’s reputation (defamation), the First Amendment does not permit penalties for false speech in and of itself. The plurality exhibited particular skepticism toward the notion that government actors could be entrusted as a “Ministry of Truth,” empowered to determine what categories of false speech should be made illegal:

Permitting the government to decree this speech to be a criminal offense, whether shouted from the rooftops or made in a barely audible whisper, would endorse government authority to compile a list of subjects about which false statements are punishable. That governmental power has no clear limiting principle. Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth… Were this law to be sustained, there could be an endless list of subjects the National Government or the States could single out… Were the Court to hold that the interest in truthful discourse alone is sufficient to sustain a ban on speech, absent any evidence that the speech was used to gain a material advantage, it would give government a broad censorial power unprecedented in this Court’s cases or in our constitutional tradition. The mere potential for the exercise of that power casts a chill, a chill the First Amendment cannot permit if free speech, thought, and discourse are to remain a foundation of our freedom. [EMPHASIS ADDED]

As noted in the opinion, declaring false speech illegal constitutes a content-based restriction subject to “exacting scrutiny.” Applying that standard, the court found “the link between the Government’s interest in protecting the integrity of the military honors system and the Act’s restriction on the false claims of liars like respondent has not been shown.” 

While finding that the government “has not shown, and cannot show, why counterspeech would not suffice to achieve its interest,” the plurality suggested that a more narrowly tailored solution could be simply to publish the names of Medal of Honor recipients in an online database. In other words, the government could overcome the problem of false speech by promoting true speech.

In 2012, President Barack Obama signed an updated version of the Stolen Valor Act that limited its penalties to situations where a misrepresentation is shown to result in receipt of some kind of benefit. That places the false speech in the category of fraud, consistent with the Alvarez opinion.

A Social Media Ministry of Truth

Applying the Alvarez standard to social media, the government could (and already does) promote its interest in public health or election integrity by publishing true speech through official channels. But there is little reason to believe the government at any level could regulate access to misinformation. Anything approaching an outright ban on accessing speech deemed false by the government would not only fail to be the most narrowly tailored way to deal with such speech; it would also be bound to have chilling effects even on true speech.

The analysis doesn’t change if the government instead places Big Tech itself in the position of Ministry of Truth. Some propose making changes to Section 230, which currently immunizes social media companies from liability for user speech (with limited exceptions), regardless of what moderation policies the platform adopts. A hypothetical change might condition Section 230’s liability shield on platforms agreeing to moderate certain categories of misinformation. But that would still place the government in the position of coercing platforms to take down speech.

Even the “fix” of making social media companies liable for user speech they amplify through promotions on the platform, as proposed by Sen. Mark Warner’s (D-Va.) SAFE TECH Act, runs into First Amendment concerns. The aim of the bill is to regard sponsored content as constituting speech made by the platform, thus opening the platform to liability for the underlying misinformation. But any such liability also would be limited to categories of speech that fall outside First Amendment protection, like fraud or defamation. This would not appear to include most of the types of misinformation on COVID-19 or election security that animate the current legislative push.

There is no way for the government to regulate misinformation, in and of itself, consistent with the First Amendment. Big Tech companies are free to develop their own policies against misinformation, but the government may not force them to do so. 

Extremely Limited Room to Regulate Extremism

The Big Tech CEOs are also almost certain to be grilled about the use of social media to spread “hate speech” or “extremist content.” The memorandum for the March 25 hearing sums it up like this:

Facebook executives were repeatedly warned that extremist content was thriving on their platform, and that Facebook’s own algorithms and recommendation tools were responsible for the appeal of extremist groups and divisive content. Similarly, since 2015, videos from extremists have proliferated on YouTube; and YouTube’s algorithm often guides users from more innocuous or alternative content to more fringe channels and videos. Twitter has been criticized for being slow to stop white nationalists from organizing, fundraising, recruiting and spreading propaganda on Twitter.

Social media has often played host to racist, sexist, and other types of vile speech. While social media companies have community standards and other policies that restrict “hate speech” in some circumstances, there is demand from some public officials that they do more. But under a First Amendment analysis, regulating hate speech on social media would fare no better than the regulation of misinformation.

The First Amendment doesn’t allow for the regulation of “hate speech” as its own distinct category. Hate speech is, in fact, as protected as any other type of speech. There are some limited exceptions, as the First Amendment does not protect incitement, true threats of violence, or “fighting words.” Some of these flatly do not apply in the online context. “Fighting words,” for instance, applies only in face-to-face situations to “those personally abusive epithets which, when addressed to the ordinary citizen, are, as a matter of common knowledge, inherently likely to provoke violent reaction.”

One relevant precedent is the court’s 1992 decision in R.A.V. v. St. Paul, which considered a local ordinance in St. Paul, Minnesota, prohibiting public expressions that served to cause “outrage, alarm, or anger with respect to racial, gender or religious intolerance.” A juvenile was charged with violating the ordinance when he created a makeshift cross and lit it on fire in front of a black family’s home. The court unanimously struck down the ordinance as a violation of the First Amendment, finding it an impermissible content-based restraint that was not limited to incitement or true threats.

By contrast, in 2003’s Virginia v. Black, the Supreme Court upheld a Virginia law outlawing cross burnings done with the intent to intimidate. The court’s opinion distinguished R.A.V. on grounds that the Virginia statute didn’t single out speech regarding disfavored topics. Instead, it was aimed at speech that had the intent to intimidate regardless of the victim’s race, gender, religion, or other characteristic. But the court was careful to limit government regulation of hate speech to instances that involve true threats or incitement.

When it comes to incitement, the legal standard was set by the court’s landmark Brandenburg v. Ohio decision in 1969, which laid out that:

the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action. [EMPHASIS ADDED]

In other words, while “hate speech” is protected by the First Amendment, specific types of speech that convey true threats or fit under the related doctrine of incitement are not. The government may regulate those types of speech. And it does: in fact, social media users can be, and often are, charged with crimes for threats made online. But the government can’t issue a per se ban on hate speech or “extremist content.”

Just as with misinformation, the government also can’t condition Section 230 immunity on platforms removing hate speech. Insofar as speech is protected under the First Amendment, the government can’t specifically condition a government benefit on its removal. Even the SAFE TECH Act’s model for holding platforms accountable for amplifying hate speech or extremist content would have to be limited to speech that amounts to true threats or incitement. This is a far narrower category of hateful speech than the examples that concern legislators. 

Social media companies do remain free under the law to moderate hateful content as they see fit under their terms of service. Section 230 immunity is not dependent on whether companies do or don’t moderate such content, or on how they define hate speech. But government efforts to step in and define hate speech would likely run into First Amendment problems unless they stay focused on unprotected threats and incitement.

What Can the Government Do?

One may fairly ask what it is that governments can do to combat misinformation and hate speech online. The answer may be a law that requires platforms, by court order, to take down speech after it has been declared illegal, as proposed by the PACT Act, sponsored in the last session by Sens. Brian Schatz (D-Hawaii) and John Thune (R-S.D.). Such speech may, in some circumstances, include misinformation or hate speech.

But as outlined above, the misinformation that the government can regulate is limited to situations like fraud or defamation, while the hate speech it can regulate is limited to true threats and incitement. A narrowly tailored law that looked to address those specific categories may or may not be a good idea, but it would likely survive First Amendment scrutiny, and may even prove a productive line of discussion with the tech CEOs.

Critics of big tech companies like Google and Amazon are increasingly focused on the supposed evils of “self-preferencing.” This refers to when digital platforms like Amazon Marketplace or Google Search, which connect competing services with potential customers or users, also offer (and sometimes prioritize) their own in-house products and services. 

The objection, raised by several members and witnesses during a Feb. 25 hearing of the House Judiciary Committee’s antitrust subcommittee, is that it is unfair to the third parties that use those sites for the site’s owner to enjoy special competitive advantages. Is it fair, for example, for Amazon to use the data it gathers from its service to design new products if third-party merchants can’t access the same data? This seemingly intuitive complaint was the basis for the European Commission’s landmark case against Google.

But we cannot assume that something is bad for competition just because it is bad for certain competitors. A lot of unambiguously procompetitive behavior, like cutting prices, also tends to make life difficult for competitors. The same is true when a digital platform provides a service that is better than alternatives provided by the site’s third-party sellers. 

It’s probably true that Amazon’s access to customer search and purchase data can help it spot products it can undercut with its own versions, driving down prices. But that’s not unusual; most retailers do this, many to a much greater extent than Amazon. For example, you can buy AmazonBasics batteries for less than half the price of branded alternatives, and they’re pretty good.

There’s no doubt this is unpleasant for merchants that have to compete with these offerings. But it is also no different from having to compete with more efficient rivals who have lower costs or better insight into consumer demand. Copying products and seeking ways to offer them with better features or at a lower price, which critics of self-preferencing highlight as a particular concern, has always been a fundamental part of market competition—indeed, it is the primary way competition occurs in most markets. 

Store-branded versions of iPhone cables and Nespresso pods are certainly inconvenient for those companies, but they offer consumers cheaper alternatives. Where such copying may be problematic (say, by deterring investments in product innovations), the law awards and enforces patents and copyrights to reward novel discoveries and creative works, and trademarks to protect brand identity. But outside those cases where a company holds intellectual property, this is simply how competition works.

The fundamental question is “what benefits consumers?” Services like Yelp object that they cannot compete with Google when Google embeds its Google Maps box in Google Search results, while Yelp cannot do the same. But for users, the Maps box adds valuable information to the results page, making it easier to get what they want. Google is not making Yelp worse by making its own product better. Should it have to refrain from offering services that benefit its users because doing so might make competing products comparatively less attractive?

Self-preferencing also enables platforms to promote their offerings in other markets, which is often how large tech companies compete with each other. Amazon has a photo-hosting app that competes with Google Photos and Apple’s iCloud. It recently emailed its customers to promote it. That is undoubtedly self-preferencing, since other services cannot market themselves to Amazon’s customers like this, but if it makes customers aware of an alternative they might not have otherwise considered, that is good for competition. 

This kind of behavior also allows companies to invest in offering services inexpensively, or for free, that they intend to monetize by preferencing their other, more profitable products. For example, Google invests in Android’s operating system and gives much of it away for free precisely because it can encourage Android customers to use the profitable Google Search service. Despite claims to the contrary, it is difficult to see this sort of cross-subsidy as harmful to consumers.

Self-preferencing can even be good for competing services, including third-party merchants. In many cases, it expands the size of their potential customer base. For example, blockbuster video games released by Sony and Microsoft increase demand for games by other publishers because they increase the total number of people who buy PlayStations and Xboxes. This effect is clear on Amazon’s Marketplace, which has grown enormously for third-party merchants even as Amazon has increased the number of its own store-brand products on the site. Because Amazon’s own offerings make the Marketplace more attractive to shoppers overall, third-party sellers benefit as well.

All platforms are open or closed to varying degrees. Retail “platforms,” for example, exist on a spectrum on which Craigslist is more open and neutral than eBay, which is more so than Amazon, which is itself relatively more so than, say, Walmart.com. Each position on this spectrum offers its own benefits and trade-offs for consumers. Indeed, some customers’ biggest complaint against Amazon is that it is too open, filled with third parties who leave fake reviews, offer counterfeit products, or have shoddy returns policies. Part of the role of the site is to try to correct those problems by making better rules, excluding certain sellers, or just by offering similar options directly. 

Regulators and legislators often act as if the more open and neutral, the better, but customers have repeatedly shown that they often prefer less open, less neutral options. And critics of self-preferencing frequently find themselves arguing against behavior that improves consumer outcomes, because it hurts competitors. But that is the nature of competition: what’s good for consumers is frequently bad for competitors. If we have to choose, it’s consumers who should always come first.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

The U.S. Department of Justice’s (DOJ) antitrust case against Google, which was filed in October 2020, will be a tough slog.[1] It is an alleged monopolization (Sherman Act, Sec. 2) case; and monopolization cases are always a tough slog.

In this brief essay I will lay out some of the issues in the case and raise an intriguing possibility.

What is the case about?

The case is about exclusivity and exclusion in the distribution of search engine services; that Google paid substantial sums to Apple and to the manufacturers of Android-based mobile phones and tablets and also to wireless carriers and web-browser proprietors—in essence, to distributors—to install the Google search engine as the exclusive pre-set (installed), default search program. The suit alleges that Google thereby made it more difficult for other search-engine providers (e.g., Bing; DuckDuckGo) to obtain distribution for their search-engine services and thus to attract search-engine users and to sell the online advertising that is associated with search-engine use and that provides the revenue to support the search “platform” in this “two-sided market” context.[2]

Exclusion can be seen as a form of “raising rivals’ costs.”[3]  Equivalently, exclusion can be seen as a form of non-price predation. Under either interpretation, the exclusionary action impedes competition.

It’s important to note that these allegations are different from those that motivated an investigation by the Federal Trade Commission (which the FTC dropped in 2013) and the cases by the European Union against Google.[4]  Those cases focused on alleged self-preferencing; that Google was unduly favoring its own products and services (e.g., travel services) in its delivery of search results to users of its search engine. In those cases, the impairment of competition (arguably) happens with respect to those competing products and services, not with respect to search itself.

What is the relevant market?

For a monopolization allegation to have any meaning, there needs to be the exercise of market power (which would have adverse consequences for the buyers of the product). And in turn, that exercise of market power needs to occur in a relevant market: one in which market power can be exercised.

Here is one of the important places where the DOJ’s case is likely to turn into a slog: the delineation of a relevant market for alleged monopolization cases remains a largely unsolved problem for antitrust economics.[5]  This is in sharp contrast to the issue of delineating relevant markets for the antitrust analysis of proposed mergers. For this latter category, the paradigm of the “hypothetical monopolist” and the possibility that this hypothetical monopolist could prospectively impose a “small but significant non-transitory increase in price” (SSNIP) has carried the day for the purposes of market delineation.
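
For illustration only, the hypothetical-monopolist test is often operationalized with a simple “critical loss” calculation: a candidate market passes if the sales a hypothetical monopolist would lose from a SSNIP are smaller than the loss that would exactly offset the gain from the higher price. Below is a minimal sketch of that arithmetic in Python; the 5% SSNIP, the margin, and the elasticity figure are hypothetical assumptions, not estimates for any actual market.

```python
# A purely illustrative sketch of the "critical loss" arithmetic behind the
# hypothetical-monopolist (SSNIP) test. All figures are hypothetical
# assumptions, not estimates for any actual market.

def ssnip_is_profitable(ssnip: float, margin: float, elasticity: float) -> bool:
    """Would a hypothetical monopolist over the candidate market profit from
    a price increase of `ssnip` (e.g., 0.05 for 5%)?

    critical_loss: the fraction of sales whose loss would exactly offset the
                   gain from the higher price, ssnip / (ssnip + margin)
    predicted_loss: the fraction of sales actually expected to be lost,
                    approximated here as elasticity * ssnip
    """
    critical_loss = ssnip / (ssnip + margin)
    predicted_loss = elasticity * ssnip
    return predicted_loss < critical_loss

# Example: a 5% SSNIP with a 40% margin implies a critical loss of about
# 11.1%; if demand elasticity for the candidate group of products is 2, the
# predicted loss is 10%, so the SSNIP is profitable and the group would be
# delineated as a relevant market.
print(ssnip_is_profitable(ssnip=0.05, margin=0.40, elasticity=2.0))  # True
```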

But no such paradigm exists for monopolization cases, in which the usual allegation is that the defendant already possesses market power and has used the exclusionary actions to buttress that market power. To see the difficulties, it is useful to recall the basic monopoly diagram from Microeconomics 101. A monopolist faces a negatively sloped demand curve for its product (at higher prices, less is bought; at lower prices, more is bought) and sets a profit-maximizing price at the level of output where its marginal revenue (MR) equals its marginal costs (MC). Its price is thereby higher than an otherwise similar competitive industry’s price for that product (to the detriment of buyers) and the monopolist earns higher profits than would the competitive industry.

But unless there are reliable benchmarks as to what the competitive price and profits would otherwise be, any information as to the defendant’s price and profits has little value with respect to whether the defendant already has market power. Also, a claim that a firm does not have market power because it faces rivals and thus isn’t able profitably to raise its price from its current level (because it would lose too many sales to those rivals) similarly has no value. Recall the monopolist from Micro 101. It doesn’t set a higher price than the one where MR=MC, because it would thereby lose too many sales to other sellers of other things.

Thus, any firm—regardless of whether it truly has market power (like the Micro 101 monopolist) or is just another competitor in a sea of competitors—should have already set its price at its profit-maximizing level and should find it unprofitable to raise its price from that level.[6]  And thus the claim, “Look at all of the firms that I compete with!  I don’t have market power!” similarly has no informational value.
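
A minimal numerical sketch of this point, using a textbook linear-demand example with purely hypothetical numbers: both a firm facing the entire market demand and a firm facing a far more elastic residual demand (because rivals exist) price where MR equals MC, and both would lose profit from a further 5% increase. Observing that a firm cannot profitably raise its price therefore says nothing, by itself, about whether it has market power.

```python
# A purely illustrative linear-demand example. The demand parameters and the
# marginal cost are hypothetical assumptions, not a model of any real firm.

def optimal_price(a: float, b: float, mc: float) -> float:
    """Profit-maximizing price (where MR = MC) for demand q = a - b*p
    and constant marginal cost mc."""
    return (a / b + mc) / 2

def profit(p: float, a: float, b: float, mc: float) -> float:
    """Profit at price p for the same linear demand."""
    q = max(a - b * p, 0.0)
    return (p - mc) * q

mc = 20.0
firms = {
    # A "monopolist" facing the entire market demand curve...
    "monopolist": (100.0, 1.0),
    # ...and a firm facing a much more elastic residual demand because rivals exist.
    "one competitor among many": (500.0, 20.0),
}

for label, (a, b) in firms.items():
    p_star = optimal_price(a, b, mc)
    base = profit(p_star, a, b, mc)
    hiked = profit(1.05 * p_star, a, b, mc)  # try a further 5% price increase
    print(f"{label}: price {p_star:.2f}, profit {base:.0f}, profit after 5% hike {hiked:.0f}")

# Both firms already price where MR = MC, and both lose profit from the hike,
# even though only one of them has market power.
```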

Let us now bring this problem back to the Google monopolization allegation:  What is the relevant market?  In the first instance, it has to be “the provision of answers to user search queries.” After all, this is the “space” in which the exclusion occurred. But there are categories of search: e.g., search for products/services, versus more general information searches (“What is the current time in Delaware?” “Who was the 21st President of the United States?”). Do those separate categories themselves constitute relevant markets?

Further, what would the exercise of market power in a (delineated relevant) market look like?  Higher-than-competitive prices for advertising that targets search-results recipients is one obvious answer (but see below). In addition, because this is a two-sided market, the competitive “price” (or prices) might involve payments by the search engine to the search users (in return for their exposure to the lucrative attached advertising).[7]  And product quality might exhibit less variety than a competitive market would provide; and/or the monopolistic average level of quality would be lower than in a competitive market: e.g., more abuse of user data, and/or deterioration of the delivered information itself, via more self-preferencing by the search engine and more advertising-driven preferencing of results.[8]

In addition, a natural focus for a relevant market is the advertising that accompanies the search results. But now we are at the heart of the difficulty of delineating a relevant market in a monopolization context. If the relevant market is “advertising on search engine results pages,” it seems highly likely that Google has market power. If the relevant market instead is all online U.S. advertising (of which Google’s revenue share accounted for 32% in 2019[9]), then the case is weaker; and if the relevant market is all advertising in the United States (which is about twice the size of online advertising[10], implying a Google share of roughly 16%), the case is weaker still. Unless there is some competitive benchmark, there is no easy way to delineate the relevant market.[11]

What exactly has Google been paying for, and why?

As many critics of the DOJ’s case have pointed out, it is extremely easy for users to switch their default search engine. If internet search were a normal good or service, this ease of switching would leave little room for the exercise of market power. But in that case, why is Google willing to pay $8-$12 billion annually for the exclusive default setting on Apple devices and large sums to the manufacturers of Android-based devices (and to wireless carriers and browser proprietors)? Why doesn’t Google instead run ads in prominent places that remind users how superior Google’s search results are and how easy it is for users (if they haven’t already done so) to switch to the Google search engine and make Google the user’s default choice?

Suppose that user inertia is important. Further suppose that users generally have difficulty in making comparisons with respect to the quality of delivered search results. If this is true, then being the default search engine on Apple and Android-based devices and on other distribution vehicles would be valuable. In this context, the inertia of their customers is a valuable “asset” of the distributors that the distributors may not be able to take advantage of, but that Google can (by providing search services and selling advertising). The question of whether Google’s taking advantage of this user inertia means that Google exercises market power takes us back to the issue of delineating the relevant market.

There is a further wrinkle to all of this. It is a well-understood concept in antitrust economics that an incumbent monopolist will be willing to pay more for the exclusive use of an essential input than a challenger would pay for access to the input.[12] The basic idea is straightforward. By maintaining exclusive use of the input, the incumbent monopolist preserves its (large) monopoly profits. If the challenger enters, the incumbent will then earn only its share of the (much lower, more competitive) duopoly profits. Similarly, the challenger can expect only the lower duopoly profits. Accordingly, the incumbent should be willing to outbid (and thereby exclude) the challenger and preserve the incumbent’s exclusive use of the input, so as to protect those monopoly profits.
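
A minimal numerical sketch of that bidding logic, with purely hypothetical profit figures that are not estimates of any firm’s actual economics: because monopoly profit exceeds the sum of the two duopoly profits, the incumbent’s willingness to pay for exclusivity exceeds the challenger’s willingness to pay for access.

```python
# A purely illustrative sketch of the Gilbert-Newbery bidding logic. The
# profit figures are hypothetical assumptions only.

monopoly_profit = 100.0           # incumbent's profit if it keeps the input exclusive
incumbent_duopoly_profit = 30.0   # incumbent's profit if the challenger obtains the input
challenger_duopoly_profit = 25.0  # challenger's profit if it obtains the input

# Maximum each side would rationally bid for the exclusive input:
incumbent_max_bid = monopoly_profit - incumbent_duopoly_profit   # 70: what exclusivity preserves
challenger_max_bid = challenger_duopoly_profit                   # 25: what entry would earn

# Because monopoly profit exceeds the sum of the duopoly profits, the
# incumbent can always outbid the challenger and keep the input exclusive.
assert incumbent_max_bid > challenger_max_bid
print(incumbent_max_bid, challenger_max_bid)
```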

To bring this to the Google monopolization context, if Google does possess market power in some aspect of search—say, because online search-linked advertising is a relevant market—then Google will be willing to outbid Microsoft (which owns Bing) for the “asset” of default access to Apple’s (inertial) device owners. That Microsoft is a large and profitable company and could afford to match (or exceed) Google’s payments to Apple is irrelevant. If the duopoly profits for online search-linked advertising would be substantially lower than Google’s current profits, then Microsoft would not find it worthwhile to try to outbid Google for that default access asset.

Alternatively, this scenario could be wholly consistent with an absence of market power. If search users (who can easily switch) consider Bing to be a lower-quality search service, then large payments by Microsoft to outbid Google for those exclusive default rights would be largely wasted, since the “acquired” default search users would quickly switch to Google (unless Microsoft provided additional incentives for the users not to switch).

But this alternative scenario returns us to the original puzzle:  Why is Google making such large payments to the distributors for those exclusive default rights?

An intriguing possibility

Consider the following possibility. Suppose that Google was paying that $8-$12 billion annually to Apple in return for the understanding that Apple would not develop its own search engine for Apple’s device users.[13] This possibility was not raised in the DOJ’s complaint, nor is it raised in the subsequent suits by the state attorneys general.

But let’s explore the implications by going to an extreme. Suppose that Google and Apple had a formal agreement that—in return for the $8-$12 billion per year—Apple would not develop its own search engine. In this event, this agreement not to compete would likely be seen as a violation of Section 1 of the Sherman Act (which does not require a market delineation exercise) and Apple would join Google as a co-conspirator. The case would take on the flavor of the FTC’s prosecution of “pay-for-delay” agreements between the manufacturers of patented pharmaceuticals and the generic drug manufacturers that challenge those patents and then receive payments from the former in return for dropping the patent challenge and delaying the entry of the generic substitute.[14]

As of this writing, there is no evidence of such an agreement and it seems quite unlikely that there would have been a formal agreement. But the DOJ will be able to engage in discovery and take depositions. It will be interesting to find out what the relevant executives at Google—and at Apple—thought was being achieved by those payments.

What would be a suitable remedy/relief?

The DOJ’s complaint is vague with respect to the remedy that it seeks. This is unsurprising. The DOJ may well want to wait to see how the case develops and then amend its complaint.

However, even if Google’s actions have constituted monopolization, it is difficult to conceive of a suitable and effective remedy. One apparently straightforward remedy would be to require simply that Google not be able to purchase exclusivity with respect to the pre-set default settings. In essence, the device manufacturers and others would always be able to sell parallel default rights to other search engines: on the basis, say, that the default rights for some categories of customers—or even a percentage of general customers (randomly selected)—could be sold to other search-engine providers.

But now the Gilbert-Newbery insight comes back into play. Suppose that a device manufacturer knows (or believes) that Google will pay much more if—even in the absence of any exclusivity agreement—Google ends up being the pre-set search engine for all (or nearly all) of the manufacturer’s device sales, as compared with what the manufacturer would receive if those default rights were sold to multiple search-engine providers (including, but not solely, Google). Can that manufacturer (recall that the distributors are not defendants in the case) be prevented from making this sale to Google and thus (de facto) continuing Google’s exclusivity?[15]

Even a requirement that Google not be allowed to make any payment to the distributors for a default position may not improve the competitive environment. Google may be able to find other ways of making indirect payments to distributors in return for attaining default rights, e.g., by offering them lower rates on their online advertising.

Further, if the ultimate goal is an efficient outcome in search, it is unclear how far restrictions on Google’s bidding behavior should go. If Google were forbidden from purchasing any default installation rights for its search engine, would (inert) consumers be better off? Similarly, if a distributor were to decide independently that its customers were better served by installing the Google search engine as the default, would that not be allowed? But if it is allowed, how could one be sure that Google wasn’t indirectly paying for this “independent” decision (e.g., through favorable advertising rates)?

It’s important to remember that this (alleged) monopolization is different from the Standard Oil case of 1911 or even the (landline) AT&T case of 1984. In those cases, there were physical assets that could be separated and spun off to separate companies. For Google, physical assets aren’t important. Although it is conceivable that some of Google’s intellectual property—such as Gmail, YouTube, or Android—could be spun off to separate companies, doing so would do little to cure the (arguably) fundamental problem of the inert device users.

In addition, if there were an agreement between Google and Apple for the latter not to develop a search engine, then large fines for both parties would surely be warranted. But what next? Apple can’t be forced to develop a search engine.[16] This differentiates such an arrangement from the “pay-for-delay” arrangements for pharmaceuticals, where the generic manufacturers can readily produce a near-identical substitute for the patented drug and are otherwise eager to do so.

At the end of the day, forbidding Google from paying for exclusivity may well be worth trying as a remedy. But as the discussion above indicates, it is unlikely to be a panacea and is likely to require considerable monitoring for effective enforcement.

Conclusion

The DOJ’s case against Google will be a slog. There are unresolved issues—such as how to delineate a relevant market in a monopolization case—that will be central to the case. Even if the DOJ is successful in showing that Google violated Section 2 of the Sherman Act in monopolizing search and/or search-linked advertising, an effective remedy seems problematic. But there also remains the intriguing question of why Google was willing to pay such large sums for those exclusive default installation rights.

The developments in the case will surely be interesting.


[1] The DOJ’s suit was joined by 11 states.  More states subsequently filed two separate antitrust lawsuits against Google in December.

[2] There is also a related argument:  That Google thereby gained greater volume, which allowed it to learn more about its search users and their behavior, and which thereby allowed it to provide better answers to users (and thus a higher-quality offering to its users) and better-targeted (higher-value) advertising to its advertisers.  Conversely, Google’s search-engine rivals were deprived of that volume, with the mirror-image negative consequences for the rivals.  This is just another version of the standard “learning-by-doing” and the related “learning curve” (or “experience curve”) concepts that have been well understood in economics for decades.

[3] See, for example, Steven C. Salop and David T. Scheffman, “Raising Rivals’ Costs: Recent Advances in the Theory of Industrial Structure,” American Economic Review, Vol. 73, No. 2 (May 1983), pp.  267-271; and Thomas G. Krattenmaker and Steven C. Salop, “Anticompetitive Exclusion: Raising Rivals’ Costs To Achieve Power Over Price,” Yale Law Journal, Vol. 96, No. 2 (December 1986), pp. 209-293.

[4] For a discussion, see Richard J. Gilbert, “The U.S. Federal Trade Commission Investigation of Google Search,” in John E. Kwoka, Jr., and Lawrence J. White, eds. The Antitrust Revolution: Economics, Competition, and Policy, 7th edn.  Oxford University Press, 2019, pp. 489-513.

[5] For a more complete version of the argument that follows, see Lawrence J. White, “Market Power and Market Definition in Monopolization Cases: A Paradigm Is Missing,” in Wayne D. Collins, ed., Issues in Competition Law and Policy. American Bar Association, 2008, pp. 913-924.

[6] The forgetting of this important point is often termed “the cellophane fallacy”, since this is what the U.S. Supreme Court did in a 1956 antitrust case in which the DOJ alleged that du Pont had monopolized the cellophane market (and du Pont, in its defense claimed that the relevant market was much wider: all flexible wrapping materials); see U.S. v. du Pont, 351 U.S. 377 (1956).  For an argument that profit data and other indicia argued for cellophane as the relevant market, see George W. Stocking and Willard F. Mueller, “The Cellophane Case and the New Competition,” American Economic Review, Vol. 45, No. 1 (March 1955), pp. 29-63.

[7] In the context of differentiated services, one would expect prices (positive or negative) to vary according to the quality of the service that is offered.  It is worth noting that Bing offers “rewards” to frequent searchers; see https://www.microsoft.com/en-us/bing/defaults-rewards.  It is unclear whether this pricing structure of payment to Bing’s customers represents what a more competitive framework in search might yield, or whether the payment just indicates that search users consider Bing to be a lower-quality service.

[8] As an additional consequence of the impairment of competition in this type of search market, there might be less technological improvement in the search process itself – to the detriment of users.

[9] As estimated by eMarketer: https://www.emarketer.com/newsroom/index.php/google-ad-revenues-to-drop-for-the-first-time/.

[10] See https://www.visualcapitalist.com/us-advertisers-spend-20-years/.

[11] And, again, if we return to the du Pont cellophane case:  Was the relevant market cellophane?  Or all flexible wrapping materials?

[12] This insight is formalized in Richard J. Gilbert and David M.G. Newbery, “Preemptive Patenting and the Persistence of Monopoly,” American Economic Review, Vol. 72, No. 3 (June 1982), pp. 514-526.

[13] To my knowledge, Randal C. Picker was the first to suggest this possibility; see https://www.competitionpolicyinternational.com/a-first-look-at-u-s-v-google/.  Whether Apple would be interested in trying to develop its own search engine – given the fiasco a decade ago when Apple tried to develop its own maps app to replace the Google maps app – is an open question.  In addition, the Gilbert-Newbery insight applies here as well:  Apple would be less inclined to invest the substantial resources that would be needed to develop a search engine when it is thereby in a duopoly market.  But Google might be willing to pay “insurance” to reinforce any doubts that Apple might have.

[14] The U.S. Supreme Court, in FTC v. Actavis, 570 U.S. 136 (2013), decided that such agreements could be anti-competitive and should be judged under the “rule of reason”.  For a discussion of the case and its implications, see, for example, Joseph Farrell and Mark Chicu, “Pharmaceutical Patents and Pay-for-Delay: Actavis (2013),” in John E. Kwoka, Jr., and Lawrence J. White, eds. The Antitrust Revolution: Economics, Competition, and Policy, 7th edn.  Oxford University Press, 2019, pp. 331-353.

[15] This is an example of the insight that vertical arrangements – in this case combined with the Gilbert-Newbery effect – can be a way for dominant firms to raise rivals’ costs.  See, for example, John Asker and Heski Bar-Isaac. 2014. “Raising Retailers’ Profits: On Vertical Practices and the Exclusion of Rivals.” American Economic Review, Vol. 104, No. 2 (February 2014), pp. 672-686.

[16] And, again, for the reasons discussed above, Apple might not be eager to make the effort.