Lina Khan’s appointment as chair of the Federal Trade Commission (FTC) is a remarkable accomplishment. At 32 years old, she is the youngest chair in the agency’s history. Her longstanding criticism of the consumer welfare standard and her alignment with the neo-Brandeisian school of thought make her appointment a significant achievement for proponents of those views.
Her appointment also comes as House Democrats prepare to mark up five bills designed to regulate Big Tech and, in the process, vastly expand the FTC’s powers. That expansion could interact with Khan’s appointment in ways that lawmakers weighing the bills have not yet contemplated.
As things stand, the FTC under Khan’s leadership is likely to push for more extensive regulatory powers, akin to those held by the Federal Communications Commission (FCC). But these expansions would be trivial compared to what is proposed by many of the bills currently being prepared for a June 23 mark-up in the House Judiciary Committee.
The flagship bill—Rep. David Cicilline’s (D-R.I.) American Innovation and Choice Online Act—is described as a platform “non-discrimination” bill. I have already discussed its likely real-world effects. Briefly, it would restrict platforms’ ability to offer richer, more integrated services, since those integrations could be challenged as “discrimination” at the expense of would-be competitors’ offerings. Things like free shipping on Amazon Prime, pre-installed apps on iPhones, or even links to Gmail and Google Calendar at the top of a Google Search page could be precluded under the bill’s terms; in each case, there is a potential competitor being undermined.
All of this shifts the focus to the FTC itself, which would have potentially enormous discretionary power under these proposals to enforce the law selectively.
Companies found to have breached the bill’s terms would be liable for civil penalties of up to 15 percent of annual U.S. revenue, a potentially enormous sum. And though the Supreme Court recently ruled unanimously against the FTC’s power to levy civil fines unilaterally—a ruling the FTC opposed vociferously, and whose effect may yet be restored by other means—there are two scenarios through which the agency could end up with extraordinarily extensive control over the platforms covered by the bill.
The first path is selective enforcement. What Singer describes above as a positive—that enforcers would simply let “benign” violations of the law stand—would give the FTC tremendous scope to choose which cases it brings, and it might do so for idiosyncratic, politicized reasons.
The second path would be to use these powers as leverage to get broad consent decrees to govern the conduct of covered platforms. These occur when a lawsuit is settled, with the defendant company agreeing to change its business practices under supervision of the plaintiff agency (in this case, the FTC). The Cambridge Analytica lawsuit ended this way, with Facebook agreeing to change its data-sharing practices under the supervision of the FTC.
This path would mean the FTC creating bespoke, open-ended regulation for each covered platform. Like the first path, this could create significant scope for discretionary decision-making by the FTC and potentially allow FTC officials to impose their own, non-economic goals on these firms. And it would require costly monitoring of each firm subject to bespoke regulation to ensure that no breaches of that regulation occurred.
That risk is compounded by Khan’s alignment with the neo-Brandeisian school, which sees “economic power as inextricably political. Power in industry is the power to steer outcomes. It grants outsized control to a few, subjecting the public to unaccountable private power—and thereby threatening democratic order. The account also offers a positive vision of how economic power should be organized (decentralized and dispersed), a recognition that forms of economic power are not inevitable and instead can be restructured.”
Though I have focused on Cicilline’s flagship bill, others grant significant new powers to the FTC, as well. The data portability and interoperability bill doesn’t actually define what “data” is; it leaves it to the FTC to “define the term ‘data’ for the purpose of implementing and enforcing this Act.” And, as I’ve written elsewhere, data interoperability needs significant ongoing regulatory oversight to work at all, a responsibility that this bill also hands to the FTC. Even a move as apparently narrow as data portability will involve a significant expansion of the FTC’s powers and give it a greater role as an ongoing economic regulator.
In his recent concurrence in Biden v. Knight, Justice Clarence Thomas sketched a roadmap for how to regulate social-media platforms. The animating factor for Thomas, much like for other conservatives, appears to be a sense that Big Tech has exhibited anti-conservative bias in its moderation decisions, most prominently by excluding former President Donald Trump from Twitter and Facebook. The opinion has predictably been greeted warmly by conservative champions of social-media regulation, who believe it shows how states and the federal government can proceed on this front.
Conservatives’ main argument has been that Big Tech needs to be reined in because it is restricting the speech of private individuals. While conservatives traditionally have defended the state-action doctrine and the right to editorial discretion, they now readily find exceptions to both in order to justify regulating social-media companies. But those two First Amendment doctrines have long enshrined an important general principle: private actors can set the rules for speech on their own property. I intend to analyze this principle from a law & economics perspective and show how it benefits society.
Who Balances the Benefits and Costs of Speech?
Like virtually any other human activity, speech carries both benefits and costs, and it is ultimately subjective individual preference that determines its value. The First Amendment protects speech from governmental regulation, with only limited exceptions, but that does not mean all speech is acceptable or must be tolerated. Under the state-action doctrine, the First Amendment only prevents the government from restricting speech.
Some purported defenders of the principle of free speech no longer appear to see a distinction between restraints on speech imposed by the government and those imposed by private actors. But this is surely mistaken, as no one truly believes all speech protected by the First Amendment should be without consequence. In truth, most regulation of speech has always come by informal means—social mores enforced by dirty looks or responsive speech from others.
Moreover, property rights have long played a crucial role in determining speech rules within any given space. If a man were to come into my house and start calling my wife racial epithets, I would not only ask that person to leave but would exercise my right as a property owner to eject the trespasser—if necessary, calling the police to assist me. I similarly could not go to a restaurant, yell at the top of my lungs about political issues, and expect the proprietors—even as “common carriers” or places of public accommodation—to let me continue.
The fact that different costs and benefits must be balanced does not in itself imply who must balance them―or even that there must be a single balance for all, or a unitary viewpoint (one “we”) from which the issue is categorically resolved.
Knowledge and Decisions, p. 240
When it comes to speech, the balance that must be struck is between one individual’s desire for an audience and that prospective audience’s willingness to play the role. Asking government to make categorical decisions for all of society through regulation would substitute centralized evaluation of the costs and benefits of access to communications for the individual decisions of many actors. Rather than incremental decisions about how and on what terms individuals may relate to one another—decisions that can evolve over time as what individuals find acceptable changes—government by its nature can only hand down categorical guidelines: “you must allow x, y, and z speech.”
This is particularly relevant in the sphere of social media. Social-media companies are multi-sided platforms. They are profit-seeking, to be sure, but the way they generate profits is by acting as intermediaries between users and advertisers. If they fail to serve their users well, those users could abandon the platform. Without users, advertisers would have no interest in buying ads. And without advertisers, there is no profit to be made. Social-media companies thus need to maximize the value of their platform by setting rules that keep users engaged.
In the cases of Facebook, Twitter, and YouTube, the platforms have set content-moderation standards that restrict many kinds of speech that are generally viewed negatively by users, even if the First Amendment would foreclose the government from regulating those same types of content. This is a good thing. Social-media companies balance the speech interests of different kinds of users to maximize the value of the platform and, in turn, to maximize benefits to all.
Herein lies the fundamental difference between private action and state action: one is voluntary, the other coercive. If Facebook or Twitter suspends a user for violating community rules, that is the termination of a previously voluntary association. If the government ejects someone from a public forum for expressing legal speech, that is coercion. The state-action doctrine recognizes this fundamental difference and creates a bright-line rule that courts may police when it comes to speech claims. As Sowell put it:
The courts’ role as watchdogs patrolling the boundaries of governmental power is essential in order that others may be secure and free on the other side of those boundaries. But what makes watchdogs valuable is precisely their ability to distinguish those people who are to be kept at bay and those who are to be left alone. A watchdog who could not make that distinction would not be a watchdog at all, but simply a general menace.
Knowledge and Decisions, p. 244
Markets Produce the Best Moderation Policies
The First Amendment also protects the right of editorial discretion: publishers, platforms, and other speakers cannot be compelled to carry or transmit speech mandated by the government. Even a newspaper with near-monopoly power cannot be compelled by a right-of-reply statute to carry responses by political candidates to editorials it has published. In other words, not only is private regulation of speech not state action; in many cases, private regulation is itself protected by the First Amendment.
There is no reason to think that social-media companies today are in a different position than the newspaper was in Miami Herald v. Tornillo. These companies must determine what content is presented within their platforms, and how and where. While this right of editorial discretion protects social-media companies’ moderation decisions, its benefits accrue to society at large.
Social-media companies’ ability to differentiate themselves on functionality and moderation policies is an important aspect of competition among them. How each platform is used may differ depending on those factors. In fact, many consumers use multiple social-media platforms throughout the day for different purposes. Market competition, not government power, has given internet users (including conservatives!) more avenues than ever to get their message out.
Many conservatives remain unpersuaded by the power of markets in this case. They see multiple platforms adopting very similar content-moderation policies on certain hot-button issues, and thus allege widespread anti-conservative bias and collusion. Neither claim has much factual support, but more importantly, the similarity of content-moderation standards may simply reflect common responses to similar demand structures—not some nefarious, conspiratorial plot.
In other words, if social-media users demand less of the kinds of content commonly considered to be hate speech, or less misinformation on certain important issues, platforms will do their best to weed those things out. Platforms won’t always get these determinations right, but it is by no means clear that forcing them to carry all “legal” speech—which would include not just misinformation and hate speech, but pornographic material, as well—would better serve social-media users. There are always alternative means to debate contestable issues of the day, even if it may be more costly to access them.
Indeed, that content-moderation policies make it more difficult to communicate some messages is precisely the point of having them. There is a subset of protected speech to which many users do not wish to be subject. Moreover, there is no inherent right to have an audience on a social-media platform.
Much of the First Amendment’s economic value lies in how it defines roles in the market for speech. As a general matter, it is not the government’s place to determine what speech should be allowed in private spaces. Instead, the private ordering of speech emerges through the application of social mores and property rights. This benefits society, as it allows individuals to form voluntary relationships built on marginal decisions about what speech is acceptable when and where, rather than centralized decisions made by a governing few, which are difficult to change over time.
Politico has released a cache of confidential Federal Trade Commission (FTC) documents in connection with a series of articles on the commission’s antitrust probe into Google Search a decade ago. The headline of the first piece in the series argues the FTC “fumbled the future” by failing to follow through on staff recommendations to pursue antitrust intervention against the company.
But while the leaked documents shed interesting light on the inner workings of the FTC, they do very little to substantiate the case that the FTC dropped the ball when the commissioners voted unanimously not to bring an action against Google.
Drawn primarily from memos by the FTC’s lawyers, the Politico report purports to uncover key revelations that undermine the FTC’s decision not to sue Google. None of the revelations, however, provide evidence that Google’s behavior actually harmed consumers.
The report’s overriding claim—and the one most consistently forwarded by antitrust activists on Twitter—is that FTC commissioners wrongly sided with the agency’s economists (who cautioned against intervention) rather than its lawyers (who tenuously recommended very limited intervention).
Indeed, the overarching narrative is that the lawyers knew what was coming, while the economists took positions that turned out to be wildly off the mark:
But the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed:
— They saw only “limited potential for growth” in ads that track users across the web — now the backbone of Google parent company Alphabet’s $182.5 billion in annual revenue.
— They expected consumers to continue relying mainly on computers to search for information. Today, about 62 percent of those queries take place on mobile phones and tablets, nearly all of which use Google’s search engine as the default.
— They thought rivals like Microsoft, Mozilla or Amazon would offer viable competition to Google in the market for the software that runs smartphones. Instead, nearly all U.S. smartphones run on Google’s Android and Apple’s iOS.
— They underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic.
The report thus asserts that:
The agency ultimately voted against taking action, saying changes Google made to its search algorithm gave consumers better results and therefore didn’t unfairly harm competitors.
That conclusion underplays what the FTC’s staff found during the probe. In 312 pages of documents, the vast majority never publicly released, staffers outlined evidence that Google had taken numerous steps to ensure it would continue to dominate the market — including emerging arenas such as mobile search and targeted advertising. [EMPHASIS ADDED]
What really emerges from the leaked memos, however, is analysis by both the FTC’s lawyers and economists infused with a healthy dose of humility. There were strong political incentives to bring a case. As one of us noted upon the FTC’s closing of the investigation: “It’s hard to imagine an agency under more pressure, from more quarters (including the Hill), to bring a case around search.” Yet FTC staff and commissioners resisted that pressure, because prediction is hard.
Ironically, the very prediction errors that the agency’s staff cautioned against are now being held against them. Yet the claims that these errors (especially the economists’) systematically cut in one direction (i.e., against enforcement) and that all of their predictions were wrong are both wide of the mark.
Decisions Under Uncertainty
In seeking to make an example out of the FTC economists’ inaccurate predictions, critics ignore that antitrust investigations in dynamic markets always involve a tremendous amount of uncertainty; false predictions are the norm. Accordingly, the key challenge for policymakers is not so much to predict correctly, but to minimize the impact of incorrect predictions.
Seen in this light, the FTC economists’ memo is far from the laissez-faire manifesto that critics make it out to be. Instead, it shows agency officials wrestling with uncertain market outcomes, and choosing a course of action under the assumption the predictions they make might indeed be wrong.
Consider the following passage from FTC economist Ken Heyer’s memo:
The great American philosopher Yogi Berra once famously remarked “Predicting is difficult, especially about the future.” How right he was. And yet predicting, and making decisions based on those predictions, is what we are charged with doing. Ignoring the potential problem is not an option. So I will be reasonably clear about my own tentative conclusions and recommendation, recognizing that reasonable people, perhaps applying a somewhat different standard, may disagree. My recommendation derives from my read of the available evidence, combined with the standard I personally find appropriate to apply to Commission intervention. [EMPHASIS ADDED]
In other words, contrary to what many critics have claimed, it simply is not the case that the FTC’s economists based their recommendations on bullish predictions about the future that ultimately failed to transpire. Instead, they merely recognized that, in a dynamic and unpredictable environment, antitrust intervention requires both a clear-cut theory of anticompetitive harm and a reasonable probability that remedies can improve consumer welfare. According to the economists, those conditions were absent with respect to Google Search.
Perhaps more importantly, it is worth asking why the economists’ erroneous predictions matter at all. Do critics believe that developments the economists missed warrant a different normative stance today?
In that respect, it is worth noting that the economists’ skepticism appeared to have rested first and foremost on the speculative nature of the harms alleged and the difficulty associated with designing appropriate remedies. And yet, if anything, these two concerns appear even more salient today.
Indeed, the remedies imposed against Google in the EU have not delivered the outcomes that enforcers expected (here and here). This could either be because the remedies were insufficient or because Google’s market position was not due to anticompetitive conduct. Similarly, there is still no convincing economic theory or empirical research to support the notion that exclusive pre-installation and self-preferencing by incumbents harm consumers, and a great deal of reason to think they benefit them (see, e.g., our discussions of the issue here and here).
Against this backdrop, criticism of the FTC economists appears to be driven more by a prior assumption that intervention is necessary—and that it was and is disingenuous to think otherwise—than by evidence that erroneous predictions materially affected the outcome of the proceedings.
To take one example, the fact that ad tracking grew faster than the FTC economists believed it would is no less consistent with vigorous competition—and Google providing a superior product—than with anticompetitive conduct on Google’s part. The same applies to the growth of mobile operating systems. Ditto the fact that no rival has managed to dislodge Google in its most important markets.
In short, not only were the economist memos informed by the very prediction difficulties that critics are now pointing to, but critics have not shown that any of the staff’s (inevitably) faulty predictions warranted a different normative outcome.
Putting Erroneous Predictions in Context
So what were these faulty predictions, and how important were they? Politico asserts that “the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed,” tying this to the FTC’s failure to intervene against Google over “tactics that European regulators and the U.S. Justice Department would later label antitrust violations.” The clear message is that the current actions are presumptively valid, and that the FTC’s economists thwarted earlier intervention based on faulty analysis.
But it is far from clear that these faulty predictions would have justified taking a tougher stance against Google. One key question for antitrust authorities is whether they can be reasonably certain that more efficient competitors will be unable to dislodge an incumbent. This assessment is necessarily forward-looking. Framed this way, greater market uncertainty (for instance, because policymakers are dealing with dynamic markets) usually cuts against antitrust intervention.
This does not entirely absolve the FTC economists who made the faulty predictions. But it does suggest the right question is not whether the economists made mistakes, but whether virtually everyone did so. The latter would be evidence of uncertainty, and thus weigh against antitrust intervention.
In that respect, it is worth noting that the staff who recommended that the FTC intervene also misjudged the future of digital markets. For example, while Politico surmises that the FTC “underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic,” there is a case to be made that the FTC overestimated this power. If anything, Google’s continued growth has opened new niches in the online advertising space.
Politico asserts not only that the economists’ market share and market power calculations were wrong, but that the lawyers knew better:
The economists, relying on data from the market analytics firm Comscore, found that Google had only limited impact. They estimated that between 10 and 20 percent of traffic to those types of sites generally came from the search engine.
FTC attorneys, though, used numbers provided by Yelp and found that 92 percent of users visited local review sites from Google. For shopping sites like eBay and TheFind, the referral rate from Google was between 67 and 73 percent.
This compares apples and oranges, or maybe oranges and grapefruit. The economists’ data, from Comscore, applied to vertical search overall. They explicitly noted that shares for particular sites could be much higher or lower: for comparison shopping, for example, “ranging from 56% to less than 10%.” This, of course, highlights a problem with the data provided by Yelp, et al.: it concerns only the websites of companies complaining about Google, not the overall flow of traffic for vertical search.
But the more important point is that none of the data discussed in the memos represents the overall flow of traffic for vertical search. Take Yelp, for example. According to the lawyers’ memo, 92 percent of Yelp searches were referred from Google. Only, that’s not true. We know it’s not true because, as Yelp CEO Jeremy Stoppelman pointed out around this time in Yelp’s 2012 Q2 earnings call:
When you consider that 40% of our searches come from mobile apps, there is quite a bit of un-monetized mobile traffic that we expect to unlock in the near future.
The numbers being analyzed by the FTC staff were apparently limited to referrals to Yelp’s website from browsers. But is there any reason to think that is the relevant market, or the relevant measure of customer access? Certainly there is nothing in the staff memos to suggest they considered the full scope of the market very carefully here. Indeed, the footnote in the lawyers’ memo presenting the traffic data is offered in support of this claim:
Vertical websites, such as comparison shopping and local websites, are heavily dependent on Google’s web search results to reach users. Thus, Google is in the unique position of being able to “make or break any web-based business.”
It’s plausible that vertical search traffic is “heavily dependent” on Google Search, but the numbers offered in support of that simply ignore the (then) 40 percent of traffic that Yelp acquired through its own mobile app, with no Google involvement at all. In any case, it is also notable that, while there are still somewhat fewer app users than web users (although the number has consistently increased), Yelp’s app users view significantly more pages than its website users do — 10 times as many in 2015, for example.
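To make the arithmetic concrete, here is a back-of-the-envelope sketch (my own illustration, not from the staff memos), assuming app searches and web visits are comparable units: even taking the lawyers’ 92 percent web-referral figure at face value, accounting for the roughly 40 percent of searches arriving through Yelp’s own app puts Google’s share of all Yelp traffic closer to 55 percent.

```python
# Back-of-the-envelope illustration (hypothetical, comparable units):
# combining the lawyers' 92% web-referral figure with Stoppelman's ~40%
# app-traffic figure shows Google's share of *all* Yelp traffic was
# well below 92%.

app_share = 0.40          # searches via Yelp's own mobile app (no Google)
google_web_share = 0.92   # share of Yelp's *web* traffic referred by Google

# Google referrals as a fraction of total traffic (web + app):
google_total_share = (1 - app_share) * google_web_share

print(round(google_total_share, 3))  # about 0.55, not 0.92
```

The point is not the exact figure, which depends on how app and web traffic are weighted, but that leaving out app traffic materially inflates Google’s apparent importance to Yelp.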
Also noteworthy is that, for whatever speculative harm Google might be able to visit on the company, at the time of the FTC’s analysis Yelp’s local ad revenue was consistently increasing — by 89 percent in Q3 2012. And that was without any ad revenue coming from its app (display ads arrived on Yelp’s mobile app in Q1 2013, a few months after the staff memos were written and just after the FTC closed its Google Search investigation).
In short, the search-engine industry is extremely dynamic and unpredictable. Contrary to what many have surmised from the FTC staff memo leaks, this cuts against antitrust intervention, not in favor of it.
The FTC Lawyers’ Weak Case for Prosecuting Google
At the same time, although not discussed by Politico, the lawyers’ memo also contains errors, suggesting that arguments for intervention were also (inevitably) subject to erroneous prediction.
Among other things, the FTC attorneys’ memo argued that large upfront investments were required to develop cutting-edge search algorithms, and that these effectively shielded Google from competition. The memo cites the following as a barrier to entry:
A search engine requires algorithmic technology that enables it to search the Internet, retrieve and organize information, index billions of regularly changing web pages, and return relevant results instantaneously that satisfy the consumer’s inquiry. Developing such algorithms requires highly specialized personnel with high levels of training and knowledge in engineering, economics, mathematics, sciences, and statistical analysis.
If there are barriers to entry in the search-engine industry, algorithms do not seem to be the source. While their market shares may be smaller than Google’s, rival search engines like DuckDuckGo and Bing have been able to enter and gain traction, so it is difficult to say that algorithmic technology has proved a barrier to entry. Algorithms may be hard to do well, but they certainly have not prevented new firms from entering and developing workable, successful products. Indeed, some extremely successful companies have entered similar advertising markets on the backs of complex algorithms, notably Instagram, Snapchat, and TikTok. All of these compete with Google for advertising dollars.
The FTC’s legal staff also failed to see that Google would face serious competition in the rapidly growing voice assistant market. In other words, even its search-engine “moat” is far less impregnable than it might at first appear.
Moreover, as Ben Thompson argues in his Stratechery newsletter:
The Staff memo is completely wrong too, at least in terms of the potential for their proposed remedies to lead to any real change in today’s market. This gets back to why the fundamental premise of the Politico article, along with much of the antitrust chatter in Washington, misses the point: Google is dominant because consumers like it.
This difficulty was deftly highlighted by Heyer’s memo:
If the perceived problems here can be solved only through a draconian remedy of this sort, or perhaps through a remedy that eliminates Google’s legitimately obtained market power (and thus its ability to “do evil”), I believe the remedy would be disproportionate to the violation and that its costs would likely exceed its benefits. Conversely, if a remedy well short of this seems likely to prove ineffective, a remedy would be undesirable for that reason. In brief, I do not see a feasible remedy for the vertical conduct that would be both appropriate and effective, and which would not also be very costly to implement and to police. [EMPHASIS ADDED]
Of course, we now know that this turned out to be a huge issue with the EU’s competition cases against Google. The remedies in both the EU’s Google Shopping and Android decisions were severely criticized by rival firms and consumer-defense organizations (here and here), but were ultimately upheld, in part because even the European Commission likely saw more forceful alternatives as disproportionate.
And in the few places where the legal staff concluded that Google’s conduct may have caused harm, there is good reason to think that their analysis was flawed.
Google’s ‘revenue-sharing’ agreements
It should be noted that neither the lawyers nor the economists at the FTC were particularly bullish on bringing suit against Google. In most areas of the investigation, neither recommended that the commission pursue a case. But one of the most interesting revelations from the recent leaks is that FTC lawyers did advise the commission’s leadership to sue Google over revenue-sharing agreements that called for it to pay Apple and other carriers and manufacturers to pre-install its search bar on mobile devices.
The lawyers’ stance is surprising, and, despite actions subsequently brought by the EU and DOJ on similar claims, a difficult one to countenance.
To a first approximation, this behavior is precisely what antitrust law seeks to promote: we want companies to compete aggressively to attract consumers. This conclusion is in no way altered when competition is “for the market” (in this case, firms bidding for exclusive placement of their search engines) rather than “in the market” (i.e., equally placed search engines competing for eyeballs).
Competition for exclusive placement has several important benefits. For a start, revenue-sharing agreements effectively subsidize consumers’ mobile device purchases. As Brian Albrecht aptly puts it:
This payment from Google means that Apple can lower its price to better compete for consumers. This is standard; some of the payment from Google to Apple will be passed through to consumers in the form of lower prices.
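Albrecht’s pass-through point can be sketched with a stylized, purely illustrative monopoly-pricing model (my own example with hypothetical numbers, not from the post): with linear demand and constant marginal cost, a per-unit payment from Google acts like a cost reduction for Apple, and under these assumptions half of it is passed through as a lower device price.

```python
# Illustrative sketch (hypothetical numbers): with linear demand q = a - p
# and constant marginal cost c, the profit-maximizing price is (a + c) / 2.
# A per-unit payment s from Google lowers Apple's effective cost to c - s,
# so the price falls by s / 2 — 50% pass-through under these assumptions.

def monopoly_price(a: float, c: float) -> float:
    """Profit-maximizing price for demand q = a - p and marginal cost c."""
    return (a + c) / 2

a, c, s = 1000.0, 600.0, 100.0  # hypothetical demand intercept, cost, payment

price_without_payment = monopoly_price(a, c)     # no revenue-sharing payment
price_with_payment = monopoly_price(a, c - s)    # payment as a cost reduction

print(price_without_payment, price_with_payment)  # 800.0 750.0
```

The exact pass-through rate depends on the demand curve’s shape, but the qualitative point stands: some portion of the payment shows up as lower consumer prices.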
This finding is not new. For instance, Ronald Coase famously argued that the Federal Communications Commission (FCC) was wrong to ban the broadcasting industry’s equivalent of revenue-sharing agreements, so-called payola:
[I]f the playing of a record by a radio station increases the sales of that record, it is both natural and desirable that there should be a charge for this. If this is not done by the station and payola is not allowed, it is inevitable that more resources will be employed in the production and distribution of records, without any gain to consumers, with the result that the real income of the community will tend to decline. In addition, the prohibition of payola may result in worse record programs, will tend to lessen competition, and will involve additional expenditures for regulation. The gain which the ban is thought to bring is to make the purchasing decisions of record buyers more efficient by eliminating “deception.” It seems improbable to me that this problematical gain will offset the undoubted losses which flow from the ban on Payola.
Applying this logic to Google Search, it is clear that a ban on revenue-sharing agreements would merely lead both Google and its competitors to attract consumers via alternative means. For Google, this might involve “complete” vertical integration into the mobile phone market, rather than the open-licensing model that underpins the Android ecosystem. Valuable specialization may be lost in the process.
Moreover, from Apple’s standpoint, Google’s revenue-sharing agreements are profitable only to the extent that consumers actually like Google’s products. If it turns out they don’t, Google’s payments to Apple may be outweighed by lower iPhone sales. It is thus unlikely that these agreements significantly undermined users’ experience. To the contrary, Apple’s testimony before the European Commission suggests that “exclusive” placement of Google’s search engine was mostly driven by consumer preferences (as the FTC economists’ memo points out):
Apple would not offer simultaneous installation of competing search or mapping applications. Apple’s focus is offering its customers the best products out of the box while allowing them to make choices after purchase. In many countries, Google offers the best product or service … Apple believes that offering additional search boxes on its web browsing software would confuse users and detract from Safari’s aesthetic. Too many choices lead to consumer confusion and greatly affect the ‘out of the box’ experience of Apple products.
Similarly, Kevin Murphy and Benjamin Klein have shown that exclusive contracts intensify competition for distribution. In other words, absent theories of platform envelopment that are arguably inapplicable here, competition for exclusive placement would lead competing search engines to up their bids, ultimately lowering the price of mobile devices for consumers.
Indeed, this revenue-sharing model was likely essential to spur the development of Android in the first place. Without this prominent placement of Google Search on Android devices (notably thanks to revenue-sharing agreements with original equipment manufacturers), Google would likely have been unable to monetize the investment it made in the open source—and thus freely distributed—Android operating system.
In short, Politico and the FTC legal staff do little to show that Google’s revenue-sharing payments excluded rivals that were, in fact, as efficient. In other words, Bing and Yahoo’s failure to gain traction may simply be the result of inferior products and cost structures. Critics thus fail to show that Google’s behavior harmed consumers, which is the touchstone of antitrust enforcement.
Another finding critics claim as important is that FTC leadership declined to bring suit against Google for preferencing its own vertical search services (this information had already been partially leaked by the Wall Street Journal in 2015). Politico’s framing implies this was a mistake:
When Google adopted one algorithm change in 2011, rival sites saw significant drops in traffic. Amazon told the FTC that it saw a 35 percent drop in traffic from the comparison-shopping sites that used to send it customers.
The focus on this claim is somewhat surprising. Even the leaked FTC legal staff memo found this theory of harm had little chance of standing up in court:
Staff has investigated whether Google has unlawfully preferenced its own content over that of rivals, while simultaneously demoting rival websites….
…Although it is a close call, we do not recommend that the Commission proceed on this cause of action because the case law is not favorable to our theory, which is premised on anticompetitive product design, and in any event, Google’s efficiency justifications are strong. Most importantly, Google can legitimately claim that at least part of the conduct at issue improves its product and benefits users. [EMPHASIS ADDED]
More importantly, as one of us has argued elsewhere, the underlying problem lies not with Google, but with a standard asset-specificity trap:
A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control….
…It was entirely predictable, and should have been expected, that Google’s algorithm would evolve. It was also entirely predictable that it would evolve in ways that could diminish or even tank Foundem’s traffic. As one online marketing/SEO expert puts it: On average, Google makes about 500 algorithm changes per year. 500!….
…In the absence of an explicit agreement, should Google be required to make decisions that protect a dependent company’s “asset-specific” investments, thus encouraging others to take the same, excessive risk?
Even if consumers happily visited rival websites when they were higher-ranked and traffic subsequently plummeted when Google updated its algorithm, that drop in traffic does not amount to evidence of misconduct. To hold otherwise would be to grant these rivals a virtual entitlement to the state of affairs that exists at any given point in time.
Indeed, there is good reason to believe Google’s decision to favor its own content over that of other sites is procompetitive. Beyond determining and ensuring relevance, Google surely has the prerogative to compete vigorously and decide how to design its products to keep up with a changing market. In this case, that means designing, developing, and offering its own content in ways that partially displace the original “ten blue links” design of its search results page and instead offer its own answers to users’ queries.
Competitor Harm Is Not an Indicator of the Need for Intervention
Some of the other information revealed by the leak is even more tangential, such as that the FTC ignored complaints from Google’s rivals:
Amazon said it was so concerned about the prospect of Google monopolizing the search advertising business that it willingly sacrificed revenue by making ad deals aimed at keeping Microsoft’s Bing and Yahoo’s search engine afloat.
But complaints from rivals are at least as likely to stem from vigorous competition as from anticompetitive exclusion. This goes to a core principle of antitrust enforcement: antitrust law seeks to protect competition and consumer welfare, not rivals. Competition will always lead to winners and losers. Antitrust law protects this process and (at least theoretically) ensures that rivals cannot manipulate enforcers to safeguard their economic rents.
This explains why Frank Easterbrook—in his seminal work on “The Limits of Antitrust”—argued that enforcers should be highly suspicious of complaints lodged by rivals:
Antitrust litigation is attractive as a method of raising rivals’ costs because of the asymmetrical structure of incentives….
…One line worth drawing is between suits by rivals and suits by consumers. Business rivals have an interest in higher prices, while consumers seek lower prices. Business rivals seek to raise the costs of production, while consumers have the opposite interest….
…They [antitrust enforcers] therefore should treat suits by horizontal competitors with the utmost suspicion. They should dismiss outright some categories of litigation between rivals and subject all such suits to additional scrutiny.
Google’s competitors spent millions pressuring the FTC to bring a case against the company. But why should it be a failing for the FTC to resist such pressure? Indeed, as then-commissioner Tom Rosch admonished in an interview following the closing of the case:
They [Google’s competitors] can darn well bring [a case] as a private antitrust action if they think their ox is being gored instead of free-riding on the government to achieve the same result.
Not that they would likely win such a case. Google’s introduction of specialized shopping results (via the Google Shopping box) likely enabled several retailers to bypass the Amazon platform, thus increasing competition in the retail industry. Although this may have temporarily reduced Amazon’s traffic and revenue (Amazon’s sales have grown dramatically since then), it is exactly the outcome that antitrust laws are designed to protect.
When all is said and done, Politico’s revelations provide a rarely glimpsed look into the complex dynamics within the FTC, which many wrongly imagine to be a monolithic agency. Put simply, the FTC’s commissioners, lawyers, and economists often disagree vehemently about the appropriate course of conduct. This is a good thing. As in many other walks of life, having a market for ideas is a sure way to foster sound decision making.
But in the final analysis, what the revelations do not show is that the FTC’s market for ideas failed consumers a decade ago when it declined to bring an antitrust suit against Google. They thus do little to cement the case for antitrust intervention—whether a decade ago, or today.
The antitrust exemption in question, embodied in the Journalism Competition and Preservation Act of 2021, was introduced March 10 simultaneously in the U.S. House and Senate. The press release announcing the bill’s introduction portrayed it as a “good government” effort to help struggling newspapers in their negotiations with large digital platforms, and thereby strengthen American democracy:
We must enable news organizations to negotiate on a level playing field with the big tech companies if we want to preserve a strong and independent press[.] …
A strong, diverse, free press is critical for any successful democracy. …
Nearly 90 percent of Americans now get news while on a smartphone, computer, or tablet, according to a Pew Research Center survey conducted last year, dwarfing the number of Americans who get news via television, radio, or print media. Facebook and Google now account for the vast majority of online referrals to news sources, with the two companies also enjoying control of a majority of the online advertising market. This digital ad duopoly has directly contributed to layoffs and consolidation in the news industry, particularly for local news.
This legislation would address this imbalance by providing a safe harbor from antitrust laws so publishers can band together to negotiate with large platforms. It provides a 48-month window for companies to negotiate fair terms that would flow subscription and advertising dollars back to publishers, while protecting and preserving Americans’ right to access quality news. These negotiations would strictly benefit Americans and news publishers at-large; not just one or a few publishers.
The Journalism Competition and Preservation Act only allows coordination by news publishers if it (1) directly relates to the quality, accuracy, attribution or branding, and interoperability of news; (2) benefits the entire industry, rather than just a few publishers, and are non-discriminatory to other news publishers; and (3) is directly related to and reasonably necessary for these negotiations.
Lurking behind this public-spirited rhetoric, however, is the specter of special interest rent seeking by powerful media groups, as discussed in an insightful article by Thom Lambert. The newspaper industry is indeed struggling, but that is true overseas as well as in the United States. Competition from internet websites has greatly reduced revenues from classified and non-classified advertising. As Lambert notes, in “light of the challenges the internet has created for their advertising-focused funding model, newspapers have sought to employ the government’s coercive power to increase their revenues.”
In particular, media groups have successfully lobbied various foreign governments to impose rules requiring that Google and Facebook pay newspapers licensing fees to display content. The Australian government went even further by mandating that digital platforms share their advertising revenue with news publishers and give the publishers advance notice of any algorithm changes that could affect page rankings and displays. Media rent-seeking efforts took a different form in the United States, as Lambert explains (citations omitted):
In the United States, news publishers have sought to extract rents from digital platforms by lobbying for an exemption from the antitrust laws. Their efforts culminated in the introduction of the Journalism Competition and Preservation Act of 2018. According to a press release announcing the bill, it would allow “small publishers to band together to negotiate with dominant online platforms to improve the access to and the quality of news online.” In reality, the bill would create a four-year safe harbor for “any print or digital news organization” to jointly negotiate terms of trade with Google and Facebook. It would not apply merely to “small publishers” but would instead immunize collusive conduct by such major conglomerates as Murdoch’s News Corporation, the Walt Disney Corporation, the New York Times, Gannett Company, Bloomberg, Viacom, AT&T, and the Fox Corporation. The bill would permit news organizations to fix prices charged to digital platforms as long as negotiations with the platforms were not limited to price, were not discriminatory toward similarly situated news organizations, and somehow related to “the quality, accuracy, attribution or branding, and interoperability of news.” Given the ease of meeting that test—since news organizations could always claim that higher payments were necessary to ensure journalistic quality—the bill would enable news publishers in the United States to extract rents via collusion rather than via direct government coercion, as in Australia.
The 2021 version of the JCPA is nearly identical to the 2018 version discussed by Thom. The only substantive change is that the 2021 version strengthens the pro-cartel coalition by adding broadcasters (it applies to “any print, broadcast, or digital news organization”). While the JCPA plainly targets Facebook and Google (“online content distributors” with “not fewer than 1,000,000,000 monthly active users, in the aggregate, on its website”), Microsoft President Brad Smith noted in a March 12 House Antitrust Subcommittee hearing on the bill that his company would also come under its collective-bargaining terms. Other online distributors could eventually become subject to the proposed law as well.
Purported justifications for the proposal were skillfully skewered by John Yun in a 2019 article on the substantively identical 2018 JCPA. Yun makes several salient points. First, the bill clearly shields price fixing. Second, the claim that all news organizations (in particular, small newspapers) would receive the same benefit from the bill rings hollow. The bill’s requirement that negotiations be “nondiscriminatory as to similarly situated news content creators” (emphasis added) would allow the cartel to negotiate different terms of trade for different “tiers” of organizations. Thus The New York Times and The Washington Post, say, might be part of a top tier getting the most favorable terms of trade. Third, the evidence does not support the assertion that Facebook and Google are monopolistic gateways for news outlets.
Yun concludes by summarizing the case against this legislation (citations omitted):
Put simply, the impact of the bill is to legalize a media cartel. The bill expressly allows the cartel to fix the price and set the terms of trade for all market participants. The clear goal is to transfer surplus from online platforms to news organizations, which will likely result in higher content costs for these platforms, as well as provisions that will stifle the ability to innovate. In turn, this could negatively impact quality for the users of these platforms.
Furthermore, a stated goal of the bill is to promote “quality” news and to “highlight trusted brands.” These are usually antitrust code words for favoring one group, e.g., those that are part of the News Media Alliance, while foreclosing others who are not “similarly situated.” What about the non-discrimination clause? Will it protect non-members from foreclosure? Again, a careful reading of the bill raises serious questions as to whether it will actually offer protection. The bill only ensures that the terms of the negotiations are available to all “similarly situated” news organizations. It is very easy to carve out provisions that would favor top tier members of the media cartel.
Additionally, an unintended consequence of antitrust exemptions can be that it makes the beneficiaries lax by insulating them from market competition and, ultimately, can harm the industry by delaying inevitable and difficult, but necessary, choices. There is evidence that this is what occurred with the Newspaper Preservation Act of 1970, which provided antitrust exemption to geographically proximate newspapers for joint operations.
There are very good reasons why antitrust jurisprudence reserves per se condemnation to the most egregious anticompetitive acts including the formation of cartels. Legislative attempts to circumvent the federal antitrust laws should be reserved solely for the most compelling justifications. There is little evidence that this level of justification has been met in this present circumstance.
Statutory exemptions to the antitrust laws have long been disfavored, and with good reason. As I explained in my 2005 testimony before the Antitrust Modernization Commission, such exemptions tend to foster welfare-reducing output restrictions. Also, empirical research suggests that industries sheltered from competition perform less well than those subject to competitive forces. In short, both economic theory and real-world data support a standard that requires proponents of an exemption to bear the burden of demonstrating that the exemption will benefit consumers.
This conclusion applies most strongly when an exemption would specifically authorize hard-core price fixing, as in the case with the JCPA. What’s more, the bill’s proponents have not borne the burden of justifying their pro-cartel proposal in economic welfare terms—quite the opposite. Lambert’s analysis exposes this legislation as the product of special interest rent seeking that has nothing to do with consumer welfare. And Yun’s evaluation of the bill clarifies that, not only would the JCPA foster harmful collusive pricing, but it would also harm its beneficiaries by allowing them to avoid taking steps to modernize and render themselves more efficient competitors.
In sum, though the JCPA claims to fly a “public interest” flag, it is just another private-interest bill promoted by well-organized rent seekers that would harm consumer welfare and undermine innovation.
Critics of big tech companies like Google and Amazon are increasingly focused on the supposed evils of “self-preferencing.” The term refers to the practice whereby digital platforms like Amazon Marketplace or Google Search, which connect competing services with potential customers or users, also offer (and sometimes prioritize) their own in-house products and services.
The objection, raised by several members and witnesses during a Feb. 25 hearing of the House Judiciary Committee’s antitrust subcommittee, is that it is unfair for a site’s owner to grant itself competitive advantages unavailable to the third parties that rely on the site. Is it fair, for example, for Amazon to use the data it gathers from its service to design new products if third-party merchants can’t access the same data? This seemingly intuitive complaint was the basis for the European Commission’s landmark case against Google.
But we cannot assume that something is bad for competition just because it is bad for certain competitors. A lot of unambiguously procompetitive behavior, like cutting prices, also tends to make life difficult for competitors. The same is true when a digital platform provides a service that is better than alternatives provided by the site’s third-party sellers.
It’s probably true that Amazon’s access to customer search and purchase data can help it spot products it can undercut with its own versions, driving down prices. But that’s not unusual; most retailers do this, many to a much greater extent than Amazon. For example, you can buy AmazonBasics batteries for less than half the price of branded alternatives, and they’re pretty good.
There’s no doubt this is unpleasant for merchants that have to compete with these offerings. But it is also no different from having to compete with more efficient rivals who have lower costs or better insight into consumer demand. Copying products and seeking ways to offer them with better features or at a lower price, which critics of self-preferencing highlight as a particular concern, has always been a fundamental part of market competition—indeed, it is the primary way competition occurs in most markets.
Store-branded versions of iPhone cables and Nespresso pods are certainly inconvenient for those companies, but they offer consumers cheaper alternatives. Where such copying may be problematic (say, by deterring investments in product innovations), the law awards and enforces patents and copyrights to reward novel discoveries and creative works, and trademarks to protect brand identity. But where no intellectual property is at stake, this is simply how competition works.
The fundamental question is “what benefits consumers?” Services like Yelp object that they cannot compete with Google when Google embeds its Google Maps box in Google Search results, while Yelp cannot do the same. But for users, the Maps box adds valuable information to the results page, making it easier to get what they want. Google is not making Yelp worse by making its own product better. Should it have to refrain from offering services that benefit its users because doing so might make competing products comparatively less attractive?
Self-preferencing also enables platforms to promote their offerings in other markets, which is often how large tech companies compete with each other. Amazon has a photo-hosting app that competes with Google Photos and Apple’s iCloud. It recently emailed its customers to promote it. That is undoubtedly self-preferencing, since other services cannot market themselves to Amazon’s customers like this, but if it makes customers aware of an alternative they might not have otherwise considered, that is good for competition.
This kind of behavior also allows companies to invest in offering services inexpensively, or for free, that they intend to monetize by preferencing their other, more profitable products. For example, Google invests in Android’s operating system and gives much of it away for free precisely because it can encourage Android customers to use the profitable Google Search service. Despite claims to the contrary, it is difficult to see this sort of cross-subsidy as harmful to consumers.
All platforms are open or closed to varying degrees. Retail “platforms,” for example, exist on a spectrum on which Craigslist is more open and neutral than eBay, which is more so than Amazon, which is itself relatively more so than, say, Walmart.com. Each position on this spectrum offers its own benefits and trade-offs for consumers. Indeed, some customers’ biggest complaint against Amazon is that it is too open, filled with third parties who leave fake reviews, offer counterfeit products, or have shoddy returns policies. Part of the role of the site is to try to correct those problems by making better rules, excluding certain sellers, or just by offering similar options directly.
Regulators and legislators often act as if the more open and neutral, the better, but customers have repeatedly shown that they often prefer less open, less neutral options. And critics of self-preferencing frequently find themselves arguing against behavior that improves consumer outcomes, because it hurts competitors. But that is the nature of competition: what’s good for consumers is frequently bad for competitors. If we have to choose, it’s consumers who should always come first.
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]
U.S. antitrust regulators have a history of narrowly defining relevant markets—often to the point of absurdity—in order to create market power out of thin air. The Federal Trade Commission (FTC) famously declared that Whole Foods and Wild Oats operated in the “premium natural and organic supermarkets market”—a narrowly defined market designed to exclude other supermarkets carrying premium natural and organic foods, such as Walmart and Kroger. Similarly, for the Staples-Office Depot merger, the FTC
narrowly defined the relevant market as “office superstore” chains, which excluded general merchandisers such as Walmart, K-Mart and Target, who at the time accounted for 80% of office supply sales.
Texas Attorney General Ken Paxton’s complaint against Google’s advertising business, joined by the attorneys general of nine other states, continues this tradition of narrowing market definition to shoehorn market dominance where it may not exist.
For example, one recent paper critical of Google’s advertising business narrows the relevant market first from media advertising to digital advertising, then to the “open” supply of display ads and, finally, even further to the intermediation of the open supply of display ads. Once the market has been sufficiently narrowed, the authors conclude Google’s market share is “perhaps sufficient to confer market power.”
While whittling down market definitions may achieve the authors’ purpose of providing a roadmap to prosecute Google, one byproduct is a mishmash of market definitions that generates as many as 16 relevant markets for digital display and video advertising, in many of which Google doesn’t have anything approaching market power (and in some of which, in fact, Facebook, and not Google, is the most dominant player).
The Texas complaint engages in similar relevant-market gerrymandering. It claims that, within digital advertising, there exist several relevant markets and that Google monopolizes four of them:
Publisher ad servers, which manage the inventory of a publisher’s (e.g., a newspaper’s website or a blog) space for ads;
Display ad exchanges, the “marketplace” in which auctions directly match publishers’ selling of ad space with advertisers’ buying of ad space;
Display ad networks, which are similar to exchanges, except a network acts as an intermediary that collects ad inventory from publishers and sells it to advertisers; and
Display ad-buying tools, which include demand-side platforms that collect bids for ad placement with publishers.
The complaint alleges, “For online publishers and advertisers alike, the different online advertising formats are not interchangeable.” But this glosses over a bigger challenge for the attorneys general: Is online advertising a separate relevant market from offline advertising?
Digital advertising, of which display advertising is a small part, is only one of many channels through which companies market their products. About half of today’s advertising spending in the United States goes to digital channels, up from about 10% a decade ago. Approximately 30% of ad spending goes to television, with the remainder going to radio, newspapers, magazines, billboards and other “offline” forms of media.
Physical newspapers now account for less than 10% of total advertising spending. Traditionally, newspapers obtained substantial advertising revenues from classified ads. As internet usage increased, newspaper classifieds have been replaced by less costly and more effective internet classifieds—such as those offered by Craigslist—or targeted ads on Google Maps or Facebook.
The price of advertising has fallen steadily over the past decade, while output has risen. Spending on digital advertising in the United States grew from $26 billion in 2010 to nearly $130 billion in 2019, an average increase of 20% a year. Over the same period, the producer price index (PPI) for internet advertising sales declined by nearly 40%. Rising spending in the face of falling prices indicates the number of ads bought and sold increased by approximately 27% a year.
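The 27 percent figure follows from simple arithmetic on the quantities cited above. A quick back-of-the-envelope check (using the figures from the text, and assuming constant annual rates of change, which is my simplification) confirms it:

```python
# Back-of-the-envelope check of the claim that ad quantity grew ~27% a year.
# Figures from the text: U.S. digital ad spending rose from $26B (2010) to
# ~$130B (2019), while the internet-advertising PPI fell ~40% over the same
# nine-year span. Constant annual rates are assumed throughout.

years = 9
spending_multiple = 130 / 26      # total spending grew ~5x
price_multiple = 1 - 0.40         # price index ended at ~0.6 of its start

# Quantity = spending / price, so the total quantity multiple is:
quantity_multiple = spending_multiple / price_multiple   # ~8.3x

# Annualize to get the average yearly growth in ads bought and sold:
annual_growth = quantity_multiple ** (1 / years) - 1
print(f"~{annual_growth:.0%} per year")   # ~27% per year, as the text states
```

The same arithmetic also recovers the cited 20 percent annual spending growth, since a fivefold increase over nine years compounds to roughly 20 percent a year.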
Since 2000, advertising spending has been falling as a share of gross domestic product, with online advertising growing as a share of that. The combination of increasing quantity, decreasing cost and increasing total revenues is consistent with a growing and increasingly competitive market, rather than one of rising concentration and reduced competition.
There is little or no empirical data evaluating the extent to which online and offline advertising constitute distinct markets or the extent to which digital display is a distinct submarket of online advertising. As a result, analysis of adtech competition has relied on identifying several technical and technological factors—as well as the say-so of participants in the business—that the analysts assert distinguish online from offline and establish digital display (versus digital search) as a distinct submarket. This approach has been used and accepted, especially in cases in which pricing data has not been available.
But the pricing information that is available raises questions about the extent to which online advertising is a distinct market from offline advertising. For example, Avi Goldfarb and Catherine Tucker find that, when local regulations prohibit offline direct advertising, search advertising is more expensive, indicating that search and offline advertising are substitutes. In other research, they report that online display advertising circumvents, in part, local bans on offline billboard advertising for alcoholic beverages. In both studies, Goldfarb and Tucker conclude their results suggest online and offline advertising are substitutes. They also conclude this substitution suggests that online and offline markets should be considered together in the context of antitrust.
While this information is not sufficient to define a broader relevant market, it raises questions about relying solely on technical or technological distinctions and the say-so of market participants.
In the United States, plaintiffs do not get to define the relevant market. That is up to the judge or the jury. Plaintiffs have the burden to convince the court that a proposed narrow market definition is the correct one. With strong evidence that online and offline ads are substitutes, the court should not blindly accept the gerrymandered market definitions posited by the attorneys general.
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]
Judges sometimes claim that they do not pick winners when they decide antitrust cases. Nothing could be further from the truth.
Competitive conduct by its nature harms competitors, and so if antitrust were merely to prohibit harm to competitors, antitrust would then destroy what it is meant to promote.
What antitrust prohibits, therefore, is not harm to competitors but rather harm to competitors that fails to improve products. Only in this way is antitrust able to distinguish between the good firm that harms competitors by making superior products that consumers love and that competitors cannot match and the bad firm that harms competitors by degrading their products without offering consumers anything better than what came before.
That means, however, that antitrust must pick winners: antitrust must decide what is an improvement and what is not. And a more popular search engine is a clear winner.
But one should not take its winningness for granted. For once upon a time there was another winner that the courts always picked, blocking antitrust case after antitrust case. Until one day the courts stopped picking it.
That was the economy of scale.
The Structure of the Google Case
Like all antitrust cases that challenge the exercise of power, the government’s case against Google alleges denial of an input to competitors in some market. Here the input is default search status in smartphones, the competitors are rival search providers, and the market is search advertising. The basic structure of the case is depicted in the figure below.
Although brought as a monopolization case under Section 2 of the Sherman Act, this is at heart an exclusive dealing case of the sort normally brought under Section 1 of the Sherman Act: the government’s core argument is that Google uses contracts with smartphone makers, pursuant to which the smartphone makers promise to make Google, and not competitors, the search default, to harm competing search advertising providers and by extension competition in the search advertising market.
The government must show anticompetitive conduct, monopoly power, and consumer harm in order to prevail.
Let us assume that there is monopoly power. The company has more than 70% of the search advertising market, which is in the zone normally required to prove that element of a monopolization claim.
The problem of anticompetitive conduct is only slightly more difficult.
Anticompetitive conduct is only ever one thing in antitrust: denial of an essential input to a competitor. There is no other way to harm rivals.
(To be sure, antitrust prohibits harm to competition, not competitors, but that means only that harm to competitors is necessary but insufficient for liability. The consumer harm requirement decides whether the requisite harm to competitors is also harm to competition.)
It is not entirely clear just how important default search status really is to running a successful search engine, but let us assume that it is essential, as the government suggests.
Then the question whether Google’s contracts are anticompetitive turns on how much of the default search input Google’s contracts foreclose to rival search engines. If a lot, then the rivals are badly harmed. If a little, then there may be no harm at all.
The answer here is that there is a lot of foreclosure, at least if the government’s complaint is to be believed. Through its contracts with Apple and makers of Android phones, Google has foreclosed default search status to rivals on virtually every single smartphone.
That leaves consumer harm. And here is where things get iffy.
Usage as a Product Improvement: A Very Convenient Argument
The inquiry into consumer harm evokes measurements of the difference between demand curves and price lines, or extrapolations of compensating and equivalent variation using indifference curves painstakingly pieced together based on the assumptions of revealed preference.
But while the parties may pay experts plenty to spin such yarns, and judges may pretend to listen to them, in the end, for the judges, it always comes down to one question only: did exclusive dealing improve the product?
If it did, then the judge assumes that the contracts made consumers better off and the defendant wins. And if it did not, then off with their heads.
So, does foreclosing all this default search space to competitors make Google search advertising more valuable to advertisers?
Those who leap to Google’s defense say yes, for default search status increases the number of people who use Google’s search engine. And the more people use Google’s search engine, the more Google learns about how best to answer search queries and which advertisements will most interest which searchers. And that ensures that even more people will use Google’s search engine, and that Google will do an even better job of targeting ads on its search engine.
And that in turn makes Google’s search advertising even better: able to reach more people and to target ads more effectively to them.
None of that would happen if defaults were set to other engines and users spurned Google, and so foreclosing default search space to rivals undoubtedly improves Google’s product.
This is a nice argument. Indeed, it is almost too nice, for it seems to suggest that almost anything Google might do to steer users away from competitors and to itself deserves antitrust immunity. Suppose Google were to brandish arms to induce you to run your next search on Google. That would be a crime, but, on this account, not an antitrust crime. For getting you to use Google does make Google better.
The argument that locking up users improves the product is of potential use not just to Google but to any of the many tech companies that run on advertising—Facebook being a notable example—so it potentially immunizes an entire business model from antitrust scrutiny.
It turns out that has happened before.
Economies of Scale as a Product Improvement: Once a Convenient Argument
Once upon a time, antitrust exempted another kind of business for which products improve the more people used them. The business was industrial production, and it differs from online advertising only in the irrelevant characteristic that the improvement that comes with expanding use is not in the quality of the product but in the cost per unit of producing it.
The hallmark of the industrial enterprise is high fixed costs and low marginal costs. The textile mill differs from pre-industrial piecework weaving in that once a $10 million investment in machinery has been made, the mill can churn out yard after yard of cloth for pennies. The pieceworker, by contrast, makes a relatively small up-front investment—the cost of raising up the hovel in which she labors and making her few tools—but spends the same large amount of time to produce each new yard of cloth.
Large fixed costs and low marginal costs lie at the heart of the bounty of the modern age: the more you produce, the lower the unit cost, and so the lower the price at which you can sell your product. This is a recipe for plenty.
But it also means that, so long as consumer demand in a given market is lower than the capacity of any particular plant, driving buyers to a particular seller and away from competitors always improves the product, in the sense that it enables the firm to increase volume and reduce unit cost, and therefore to sell the product at a lower price.
If the promise of the modern age is goods at low prices, then the implication is that antitrust should never punish firms for driving rivals from the market and taking over their customers. Indeed, efficiency requires that only one firm should ever produce in any given market, at least in any market for which a single plant is capable of serving all customers.
For antitrust in the late 19th and early 20th centuries, beguiled by this advantage of size, exclusive dealing, refusals to deal, even the knife in a competitor’s back were all for the better, whether or not they ran afoul of other areas of law, because they allowed industrial enterprises to achieve economies of scale.
It is no accident that, a few notable triumphs aside, antitrust did not come into its own until the mid-1930s, 40 years after its inception, on the heels of an intellectual revolution that explained, for the first time, why it might actually be better for consumers to have more than one seller in a market.
These theories suggested that consumers might care as much about product quality as they do about product cost, and indeed would be willing to abandon a low-cost product for a higher-quality, albeit more expensive, one.
From this perspective, the world of economies of scale and monopoly production was the drab world of Soviet state-owned enterprises churning out one type of shoe, one brand of cleaning detergent, and so on.
The world of capitalism and technological advance, by contrast, was one in which numerous firms produced batches of differentiated products in amounts sometimes too small fully to realize all scale economies, but for which consumers were nevertheless willing to pay because the products better fit their preferences.
What is more, the striving of monopolistically competitive firms to lure away each other’s customers with products that better fit their tastes led to disruptive innovation—“creative destruction” was Schumpeter’s famous term for it—that brought about not just different flavors of the same basic concept but entirely new concepts. The competition to create a better flip phone, for example, would lead inevitably to a whole new paradigm, the smartphone.
This reasoning combined with work in the 1940s and 1950s on economic growth that quantified for the first time the key role played by technological change in the vigor of capitalist economies—the famous Solow residual—to suggest that product improvements, and not the cost reductions that come from capital accumulation and their associated economies of scale, create the lion’s share of consumer welfare. Innovation, not scale, was king.
Antitrust responded by, for the first time in its history, deciding between kinds of product improvements, rather than just in favor of improvements, casting economies of scale out of the category of improvements subject to antitrust immunity, while keeping quality improvements immune.
Casting economies of scale out of the protected product improvement category gave antitrust something to do for the first time. It meant that big firms had to plead more than just the cost advantages of being big in order to obtain license to push their rivals around. And government could now start reliably to win cases, rather than just the odd cause célèbre.
It is this intellectual watershed, and not Thurman Arnold’s tenacity, that was responsible for antitrust’s emergence as a force after World War Two.
Usage-Based Improvements Are Not Like Economies of Scale
The improvements in advertising that come from user growth fall squarely on the quality side of the ledger—the value they create is not due to the ability to average production costs over more ad buyers—and so they count as the kind of product improvements that antitrust continues to immunize today.
But given the pervasiveness of this mode of product improvement in the tech economy—the fact that virtually any tech firm that sells advertising can claim to be improving a product by driving users to itself and away from competitors—it is worth asking whether we have not reached a new stage in economic development in which this form of product improvement ought, like economies of scale, to be denied protection.
Shouldn’t the courts demand more and better innovation of big tech firms than just the same old big-data-driven improvements they serve up year after year?
Galling as it may be to those who, like myself, would like to see more vigorous antitrust enforcement in general, the answer would seem to be “no.” For what induced the courts to abandon antitrust immunity for economies of scale in the mid-20th century was not the mere fact that immunizing economies of scale paralyzed antitrust. Smashing big firms is not, after all, an end in itself.
Instead, monopolistic competition, creative destruction and the Solow residual induced the change, because they suggested both that other kinds of product improvement are more important than economies of scale and, crucially, that protecting economies of scale impedes development of those other kinds of improvements.
A big firm that excludes competitors in order to reach scale economies not only excludes competitors who might have produced an identical or near-identical product, but also excludes competitors who might have produced a better-quality product, one that consumers would have preferred to purchase even at a higher price.
To cast usage-based improvements out of the product improvement fold, a case must be made that excluding competitors in order to pursue such improvements will block a different kind of product improvement that contributes even more to consumer welfare.
If we could say, for example, that suppressing search competitors suppresses more-innovative search engines that ad buyers would prefer, even if those innovative search engines were to lack the advantages that come from having a large user base, then a case might be made that user growth should no longer count as a product improvement immune from antitrust scrutiny.
And even then, the case against usage-based improvements would need to be general enough to justify an epochal change in policy, rather than be limited to a particular technology in a particular lawsuit. For the courts hate to balance in individual cases, statements to the contrary in their published opinions notwithstanding.
But there is nothing in the Google complaint, much less the literature, to suggest that usage-based improvements are problematic in this way. Indeed, much of the value created by the information revolution seems to inhere precisely in its ability to centralize usage.
Americans Keep Voting to Centralize the Internet
In the early days of the internet, theorists mistook its decentralized architecture for a feature, rather than a bug. But internet users have since shown, time and again, that they believe the opposite.
For example, the basic protocols governing email were engineered to allow every American to run his own personal email server.
But Americans hated the freedom that created—not least the spam—and opted instead to get their email from a single server: the one run by Google as Gmail.
The basic protocols governing web traffic were also designed to allow every American to run whatever other communications services he wished—chat, video chat, RSS, webpages—on his own private server in distributed fashion.
But Americans hated the freedom that created—not least having to build and rebuild friend networks across platforms—and they voted instead overwhelmingly to get their social media from a single server: Facebook.
Indeed, the basic protocols governing internet traffic were designed to allow every business to store and share its own data from its own computers, in whatever form.
But American businesses hated that freedom—not least the cost of having to buy and service their own data storage machines—and instead 40% of the internet is now stored and served from Amazon Web Services.
Similarly, advertisers have the option of placing advertisements on the myriad independently-run websites that make up the internet—known in the business as the “open web”—by placing orders through competitive ad exchanges. But advertisers have instead voted mostly to place ads on the handful of highly centralized platforms known as “walled gardens,” including Facebook, Google’s YouTube and, of course, Google Search.
The communications revolution, they say, is all about “bringing people together.” It turns out that’s true.
And that Google should win on consumer harm.
Remember the Telephone
Indeed, the same mid-20th century antitrust that thought so little of economies of scale as a defense immunized usage-based improvements when it encountered them in that most important of internet precursors: the telephone.
The telephone, like most internet services, gets better as usage increases. The more people are on a particular telephone network, the more valuable the network becomes to subscribers.
Just as with today’s internet services, the advantage of a large user base drove centralization of telephone services a century ago into the hands of a single firm: AT&T. Aside from a few business executives who liked the look of a desk full of handsets, consumers wanted one phone line that they could use to call everyone.
Although the government came close to breaking AT&T up in the early 20th century, the government eventually backed off, because a phone system in which you must subscribe to the right carrier to reach a friend just doesn’t make sense.
Instead, Congress and state legislatures stepped in to take the edge off monopoly by regulating phone pricing. And when antitrust finally did break AT&T up in 1982, it did so in a distinctly regulatory fashion, requiring that AT&T’s parts connect each other’s phone calls, something that Congress reinforced in the Telecommunications Act of 1996.
The message was clear: the sort of usage-based improvements one finds in communications are real product improvements. And antitrust can only intervene if it has a way to preserve them.
An equivalent of interconnection in search, sharing the benefits of usage (data and attention) among competing search providers, might be feasible. But it is hard to imagine the court in the Google case ordering interconnection without the benefit of the decades of regulatory experience with the defendant’s operations that the district court could draw upon in the 1982 AT&T case.
The solution for the tech giants today is the same as the solution for AT&T a century ago: to regulate rather than to antitrust.
Microsoft Not to the Contrary, Because Users Were in Common
Parallels to the government’s 1990s-era antitrust case against Microsoft are not to the contrary.
As Sam Weinstein has pointed out to me, Microsoft, like Google, was at heart an exclusive dealing case: Microsoft contracted with computer manufacturers to prevent Netscape Navigator, an early web browser, from serving as the default web browser on Windows PCs.
That prevented Netscape, the argument went, from growing to compete with Windows in the operating system market, much the way Google’s Chrome browser has become a substitute for Windows on low-end notebook computers today.
The D.C. Circuit agreed that default status was an essential input for Netscape as it sought eventually to compete with Windows in the operating system market.
The court also accepted the argument that the exclusive dealing did not improve Microsoft’s operating system product.
This at first seems to contradict the notion that usage improves products, for, like search advertising, operating systems get better as their user bases increase. The more people use an operating system, the more application developers are willing to write for the system, and the better the system therefore becomes.
It seems to follow that keeping competitors off competing operating systems and on Windows made Windows better. If the court nevertheless held Microsoft liable, it must be because the court refused to extend antitrust immunity to usage-based improvements.
The trouble with this line of argument is that it ignores the peculiar thing about the Microsoft case: that while the government alleged that Netscape was a potential competitor of Windows, Netscape was also an application that ran on Windows.
That means that, unlike Google and rival search engines, Windows and Netscape shared users.
So, Microsoft’s exclusive dealing did not increase its user base and therefore could not have improved Windows, at least not by making Windows more appealing for applications developers. Driving Netscape from Windows did not enable developers to reach even one more user. Conversely, allowing Netscape to be the default browser on Windows would not have reduced the number of Windows users, because Netscape ran on Windows.
By contrast, a user who runs a search in Bing does not run the same search simultaneously in Google, and so Bing users are not Google users. Google’s exclusive dealing therefore increases its user base and improves Google’s product, whereas Microsoft’s exclusive dealing served only to reduce Netscape’s user base and degrade Netscape’s product.
Indeed, if letting Netscape be the default browser on Windows was a threat to Windows, it was not because it prevented Microsoft from improving its product, but because Netscape might eventually have become an operating system, and indeed a better operating system, than Windows, and consumers and developers, who could be on both at the same time if they wished, might have nevertheless chosen eventually to go with Netscape alone.
Though it does not help the government in the Google case, Microsoft still does offer a beacon of hope for those concerned about size, for Microsoft’s subsequent history reminds us that yesterday’s behemoth is often today’s also-ran.
And the favorable settlement terms Microsoft ultimately used to escape real consequences for its conduct 20 years ago imply that, at least in high-tech markets, we don’t always need antitrust for that to be true.
This week the Senate will hold a hearing into potential anticompetitive conduct by Google in its display advertising business—the “stack” of products that it offers to advertisers seeking to place display ads on third-party websites. It is also widely reported that the Department of Justice is preparing a lawsuit against Google that will likely include allegations of anticompetitive behavior in this market, and is likely to be joined by a number of state attorneys general in that lawsuit. Meanwhile, several papers have been published detailing these allegations.
This aspect of digital advertising can be incredibly complex and difficult to understand. Here we explain how display advertising fits in the broader digital advertising market, describe how display advertising works, consider the main allegations against Google, and explain why Google’s critics are misguided to focus on antitrust as a solution to alleged problems in the market (even if those allegations turn out to be correct).
Display advertising in context
Over the past decade, the price of advertising has fallen steadily while output has risen. Spending on digital advertising in the US grew from $26 billion in 2010 to nearly $130 billion in 2019, an average increase of 20% a year. Over the same period the Producer Price Index for Internet advertising sales declined by nearly 40%. The rising spending in the face of falling prices indicates that the number of ads bought and sold increased by approximately 27% a year. Since 2000, advertising spending has been falling as a share of GDP, with online advertising growing as a share of that. The combination of increasing quantity, decreasing cost, and increasing total revenues is consistent with a growing and increasingly competitive market.
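Those growth figures can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below uses only the round numbers cited above (2010 and 2019 spending, and the roughly 40% price decline) to recover the implied quantity growth:

```python
# Back-of-the-envelope check of the figures cited in the text:
# US digital ad spending grew from $26bn (2010) to $130bn (2019),
# while the Producer Price Index for internet advertising fell ~40%.

spend_2010, spend_2019 = 26e9, 130e9
years = 9  # 2010 -> 2019

# Compound annual growth rate of spending (~20%/yr)
spend_cagr = (spend_2019 / spend_2010) ** (1 / years) - 1

# A ~40% total price decline implies this annual price change (~-5.5%/yr)
price_cagr = 0.60 ** (1 / years) - 1

# Quantity growth is spending growth net of the price change (~27%/yr)
quantity_cagr = (1 + spend_cagr) / (1 + price_cagr) - 1

print(f"spending: {spend_cagr:.1%}/yr, prices: {price_cagr:.1%}/yr, "
      f"quantity: {quantity_cagr:.1%}/yr")
```

The arithmetic confirms the article’s approximation: roughly 20% annual spending growth combined with roughly 5.5% annual price declines implies about 27% annual growth in the quantity of ads sold.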
Display advertising on third-party websites is only a small subsection of the digital advertising market, comprising approximately 15-20% of digital advertising spending in the US. The rest of the digital advertising market is made up of ads on search results pages on sites like Google, Amazon and Kayak, on people’s Instagram and Facebook feeds, listings on sites like Zillow (for houses) or Craigslist, referral fees paid to price comparison websites for things like health insurance, audio and visual ads on services like Spotify and Hulu, and sponsored content from influencers and bloggers who will promote products to their fans.
And digital advertising itself is only one of many channels through which companies can market their products. About 53% of total advertising spending in the United States goes on digital channels, with 30% going on TV advertising and the rest on things like radio ads, billboards and other more traditional forms of advertising. A few people still even read physical newspapers and the ads they contain, although physical newspapers’ bigger money makers have traditionally been classified ads, which have been replaced by less costly and more effective internet classifieds, such as those offered by Craigslist, or targeted ads on Google Maps or Facebook.
Indeed, it should be noted that advertising itself is only part of the larger marketing market of which non-advertising marketing communication—e.g., events, sales promotion, direct marketing, telemarketing, product placement—is as big a part as is advertising (each is roughly $500bn globally); it just hasn’t been as thoroughly disrupted by the Internet yet. But it is a mistake to assume that digital advertising is not a part of this broader market. And of that $1tr global market, Internet advertising in total occupies only about 18%—and thus display advertising only about 3%.
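The shares cited can be reconciled in a few lines. The sketch below uses the article’s round numbers and, as a simplifying assumption, applies the US display share of digital spending (taking the midpoint of the 15-20% range) to the global figure:

```python
# Rough reconciliation of the round numbers cited in the text.
# Assumption: the US display share of digital spending (~15-20%)
# is applied to the global figure; midpoint used for illustration.

global_marketing = 1_000e9        # ~$1tr: advertising + non-advertising marketing
internet_ad_share = 0.18          # internet advertising ~18% of the $1tr
display_share_of_digital = 0.175  # midpoint of the 15-20% range

internet_ads = global_marketing * internet_ad_share    # ~$180bn
display_ads = internet_ads * display_share_of_digital  # ~$31.5bn
display_share_of_marketing = display_ads / global_marketing  # ~3%
```

This is how third-party display advertising ends up at only about 3% of the broader marketing market despite its outsized role in the antitrust debate.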
Ad placement is only one part of the cost of digital advertising. An advertiser trying to persuade people to buy its product must also do market research and analytics to find out who its target market is and what they want. Moreover, there are the costs of designing and managing a marketing campaign and additional costs to analyze and evaluate the effectiveness of the campaign.
Nevertheless, one of the most straightforward ways to earn money from a website is to show ads to readers alongside the publisher’s content. To satisfy publishers’ demand for advertising revenues, many services have arisen to automate and simplify the placement of and payment for ad space on publishers’ websites. Google plays a large role in providing these services—what is referred to as “open display” advertising. And it is Google’s substantial role in this space that has sparked speculation and concern among antitrust watchdogs and enforcement authorities.
Before delving into the open display advertising market, a quick note about terms. In these discussions, “advertisers” are businesses that are trying to sell people stuff. Advertisers include large firms such as Best Buy and Disney and small businesses like the local plumber or financial adviser. “Publishers” are websites that carry those ads, and publish content that users want to read. Note that the term “publisher” refers to all websites regardless of the things they’re carrying: a blog about the best way to clean stains out of household appliances is a “publisher” just as much as the New York Times is.
Under this broad definition, Facebook, Instagram, and YouTube are also considered publishers. In their role as publishers, they have a common goal: to provide content that attracts users to their pages who will act on the advertising displayed. “Users” are you and me—the people who want to read publishers’ content, and to whom advertisers want to show ads. Finally, “intermediaries” are the digital businesses, like Google, that sit in between the advertisers and the publishers, allowing them to do business with each other without ever meeting or speaking.
The display advertising market
If you’re an advertiser, display advertising works like this: your company—one that sells shoes, let’s say—wants to reach a certain kind of person and tell her about the company’s shoes. These shoes are comfortable, stylish, and inexpensive. You use a tool like Google Ads (or, if it’s a big company and you want a more expansive campaign over which you have more control, Google Marketing Platform) to design and upload an ad, and tell Google about the people you want to reach—their age and location, say, and/or characterizations of their past browsing and searching habits (“interested in sports”).
Using that information, Google finds ad space on websites whose audiences match the people you want to target. This ad space is auctioned off to the highest bidder among the range of companies vying, along with your shoe company, to reach users matching the characteristics of the website’s users. Thanks to tracking data, the ads don’t have to appear only on sports-related websites: as a user browses sports-related sites on the web, her browser picks up files (cookies) that will tag her as someone potentially interested in sports apparel for targeting later.
So a user might look at a sports website and then later go to a recipe blog, and there receive the shoes ad on the basis of her earlier browsing. You, the shoe seller, hope that she will either click through and buy (or at least consider buying) the shoes when she sees those ads, but one of the benefits of display advertising over search advertising is that—as with TV ads or billboard ads—just seeing the ad will make her aware of the product and potentially more likely to buy it later. Advertisers thus sometimes pay on the basis of clicks, sometimes on the basis of views, and sometimes on the basis of conversion (when a consumer takes an action of some sort, such as making a purchase or filling out a form).
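Those three payment bases correspond to the industry’s standard pricing metrics: CPM (cost per thousand views), CPC (cost per click), and CPA (cost per action). A hypothetical illustration, with all rates and volumes invented for the example:

```python
# Hypothetical illustration of the three common pricing bases.
# All rates and volumes here are invented for the example,
# not real market figures.

impressions = 100_000  # times the ad was shown
clicks = 500           # click-through rate of 0.5%
conversions = 25       # purchases or form fills

cpm = 2.00  # cost per 1,000 views  (pay per view)
cpc = 0.40  # cost per click       (pay per click)
cpa = 8.00  # cost per conversion  (pay per action)

cost_by_views = impressions / 1000 * cpm  # $200
cost_by_clicks = clicks * cpc             # $200
cost_by_conversions = conversions * cpa   # $200
```

The rates in the example are chosen so that all three bases cost the advertiser the same; in practice, which basis is cheaper depends on how well the campaign converts views into clicks and clicks into purchases.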
That’s the advertiser’s perspective. From the publisher’s perspective—the owner of that recipe blog, let’s say—you want to auction ad space off to advertisers like that shoe company. In that case, you go to an ad server—Google’s product is called AdSense—give them a little bit of information about your site, and add some html code to your website. These ad servers gather information about your content (e.g., by looking at keywords you use) and your readers (e.g., by looking at what websites they’ve used in the past to make guesses about what they’ll be interested in) and place relevant ads next to and among your content. If readers click, lucky you—you’ll get paid a few cents or dollars.
Apart from privacy concerns about the tracking of users, the really tricky and controversial part here concerns the way scarce advertising space is allocated. Most of the time, it’s done through auctions that happen in real time: each time a user loads a website, an auction is held in a fraction of a second to decide which advertiser gets to display an ad. The longer this process takes, the slower pages load and the more likely users are to get frustrated and go somewhere else.
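A minimal sketch of such an auction may help fix ideas. Exchanges have historically run second-price auctions, in which the winner pays the runner-up’s bid (many have since moved to first-price rules); the bidder names and bids below are invented for illustration:

```python
# Minimal sketch of a second-price ad auction, the rule historically
# used by many exchanges. Bidder names and bids are invented.

def run_auction(bids):
    """Winner is the highest bidder; they pay the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # With a single bidder, the clearing price is that bidder's own bid.
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

bids = {"shoe_seller": 2.50, "travel_site": 1.80, "insurer": 1.20}
winner, price = run_auction(bids)
print(winner, price)  # shoe_seller wins but pays only 1.80
```

A real exchange runs something like this in milliseconds for every page load, after first filtering bidders by the targeting criteria the advertisers specified.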
As well as the service hosting the auction, there are lots of little functions that different companies perform that make the auction and placement process smoother. Some fear that by offering a very popular product integrated end to end, Google’s “stack” of advertising products can bias auctions in favor of its own products. There’s also speculation that Google’s product is so tightly integrated and so effective at using data to match users and advertisers that it is not viable for smaller rivals to compete.
We’ll discuss this speculation and fear in more detail below. But it’s worth bearing in mind that this kind of real-time bidding for ad placement was not always the norm, and is not the only way that websites display ads to their users even today. Big advertisers and websites often deal with each other directly. As with, say, TV advertising, large advertisers often have a good idea about the people they want to reach. And big publishers (like popular news websites) often have a good idea about who their readers are. For example, big brands often want to push a message to a large number of people across different customer types as part of a broader ad campaign.
In these kinds of direct sales, the space is sometimes bought outright, in advance, and reserved for those advertisers. In most cases, direct sales are run through limited, intermediated auction services that are not open to the general market. Put together, these kinds of direct ad buys account for close to 70% of total US display advertising spending. The remainder—the stuff that’s left over after these kinds of sales have been done—is typically sold through the real-time, open display auctions described above.
Different adtech products compete on their ability to target customers effectively, to serve ads quickly (since any delay in the auction and ad placement process slows down page load times for users), and to do so inexpensively. All else equal (including the effectiveness of the ad placement), advertisers want to pay the lowest possible price to place an ad. Similarly, publishers want to receive the highest possible price to display an ad. As a result, both advertisers and publishers have a keen interest in reducing the intermediary’s “take” of the ad spending.
This is all a simplification of how the market works. There is not one single auction house for ad space—in practice, many advertisers and publishers end up having to use lots of different auctions to find the best price. As the market evolved to reach this state from the early days of direct ad buys, new functions that added efficiency to the market emerged.
In the early years of ad display auctions, individual processes in the stack were performed by numerous competing companies. Through a process of “vertical integration,” some companies, such as Google, brought these different processes under the same roof, with the expectation that integration would streamline the stack and make the selling and placement of ads more efficient and effective. This pursuit of efficiency through vertical integration has led to a more consolidated market in which Google is the largest player, offering simple, integrated ad-buying products to advertisers and ad-selling products to publishers.
Google is by no means the only integrated adtech service provider, however: Facebook, Amazon, Verizon, AT&T/Xandr, theTradeDesk, LumenAd, Taboola and others also provide end-to-end adtech services. But, in the market for open auction placement on third-party websites, Google is the biggest.
The cases against Google
The UK’s Competition and Markets Authority (CMA) carried out a formal study into the digital advertising market between 2019 and 2020, issuing its final report in July of this year. Although it also encompassed Google’s Search advertising business and Facebook’s display advertising business (both of which relate to ads on those companies’ “owned and operated” websites and apps), the CMA study involved the most detailed independent review of Google’s open display advertising business to date.
That study did not lead to any competition enforcement proceedings, but it did conclude that Google’s vertically integrated products created conflicts of interest that could lead it to behave in ways that did not benefit the advertisers and publishers that use them. One example was Google’s withholding of certain data from publishers that would make it easier for them to use other ad-selling products; another was the practice of setting price floors that allegedly led advertisers to pay more than they otherwise would.
Instead, the CMA recommended the establishment of a “Digital Markets Unit” (DMU) that could regulate digital markets in general, along with a code of conduct for Google and Facebook (and perhaps other large tech platforms) intended to govern their dealings with smaller customers.
The CMA’s analysis is flawed, however. For instance, it makes big assumptions about advertisers’ dependency on display advertising (largely assuming that they would not switch to other forms of advertising if prices rose), and it is light on economics. But factually, it is the most comprehensively researched investigation into digital advertising yet published.
While the Scott Morton and Dinielli paper is extremely broad, it also suffers from a number of problems.
First, because it was released before the CMA’s final report, it is largely based on the interim report the CMA released months earlier, halfway through the market study in December 2019. This means that several of its claims are out of date. For example, it makes much of the possibility, raised by the CMA in its interim report, that Google may take a larger cut of advertising spending than its competitors, and of claims made in another report that Google introduces “hidden” fees that increase the overall cut it takes from ad auctions.
But in the final report, after further investigation, the CMA concludes that this is not the case. In the final report, the CMA describes its analysis of all Google Ad Manager open auctions related to UK web traffic during the period between 8–14 March 2020 (involving billions of auctions). This, according to the CMA, allowed it to observe any possible “hidden” fees as well. The CMA concludes:
Our analysis found that, in transactions where both Google Ads and Ad Manager (AdX) are used, Google’s overall take rate is approximately 30% of advertisers’ spend. This is broadly in line with (or slightly lower than) our aggregate market-wide fee estimate outlined above. We also calculated the margin between the winning bid and the second highest bid in AdX for Google and non-Google DSPs, to test whether Google was systematically able to win with a lower margin over the second highest bid (which might have indicated that they were able to use their data advantage to extract additional hidden fees). We found that Google’s average winning margin was similar to that of non-Google DSPs. Overall, this evidence does not indicate that Google is currently extracting significant hidden fees. As noted below, however, it retains the ability and incentive to do so. (p. 275, emphasis added)
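The “winning margin” test the CMA describes in the quoted passage can be illustrated with a toy computation. The auction records below are entirely made up; the point is only to show the shape of the test, not to reproduce the CMA’s actual dataset:

```python
# Toy illustration of the CMA's winning-margin test: for each auction,
# margin = winning bid - second-highest bid. Systematically lower margins
# for one bidder could suggest it used a data advantage to shade its bids
# (i.e., extract "hidden" fees). All figures here are hypothetical.

from statistics import mean

auctions = [
    # (winning DSP type, winning bid, second-highest bid)
    ("google_dsp",     2.40, 2.10),
    ("non_google_dsp", 1.90, 1.65),
    ("google_dsp",     3.10, 2.80),
    ("non_google_dsp", 2.20, 1.95),
]

margins = {}
for dsp, winning_bid, second_bid in auctions:
    margins.setdefault(dsp, []).append(winning_bid - second_bid)

average_margin = {dsp: mean(ms) for dsp, ms in margins.items()}
print(average_margin)  # similar averages would cut against "hidden fees"
```

In this made-up sample the averages are similar (0.30 vs. 0.25), which is the pattern the CMA reports finding in the real data: no sign that Google systematically won with lower margins.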
Scott Morton and Dinielli also misquote and/or misunderstand important sections of the CMA interim report as relating to display advertising when, in fact, they relate to search. For example, Scott Morton and Dinielli write that the “CMA concluded that Google has nearly insurmountable advantages in access to location data, due to the location information [uniquely available to it from other sources].” (p. 15). The CMA never makes any claim of “insurmountable advantage,” however. Rather, to support the claim, Scott Morton and Dinielli cite to a portion of the CMA interim report recounting a suggestion made by Microsoft regarding the “critical” value of location data in providing relevant advertising.
But that portion of the report, as well as the suggestion made by Microsoft, is about search advertising. While location data may also be valuable for display advertising, it is not clear that the GPS-level data that is so valuable in providing mobile search ad listings (for a nearby cafe or restaurant, say) is particularly useful for display advertising, which may be just as well-targeted by less granular, city- or county-level location data, which is readily available from a number of sources. In any case, Scott Morton and Dinielli are simply wrong to use a suggestion offered by Microsoft relating to search advertising to demonstrate the veracity of an assertion about a conclusion drawn by the CMA regarding display advertising.
Scott Morton and Dinielli also confusingly word their own judgements about Google’s conduct in ways that could be misinterpreted as conclusions by the CMA:
The CMA reports that Google has implemented an anticompetitive sales strategy on the publisher ad server end of the intermediation chain. Specifically, after purchasing DoubleClick, which became its publisher ad server, Google apparently lowered its prices to publishers by a factor of ten, at least according to one publisher’s account related to the CMA. (p. 20)
In fact, the CMA does not conclude that Google’s lowering of its prices was an “anticompetitive sales strategy”—it does not use those words at all—and what Scott Morton and Dinielli are referring to is a claim by a rival ad server business, Smart, that Google’s cutting its prices after acquiring DoubleClick led to Google expanding its market share. Apart from the misleading wording, it is unclear why a competition authority should consider it to be “anticompetitive” when prices are falling and kept low, and—as Smart reported to the CMA—its competitor’s response is to enhance its own offering.
The case that remains
Stripping away the elements of Scott Morton and Dinielli’s case that seem unsubstantiated by a more careful reading of the CMA reports, and with the benefit of the findings in the CMA’s final report, we are left with a case that argues that Google self-preferences to an unreasonable extent, giving itself a product that is as successful as it is in display advertising only because of Google’s unique ability to gain advantage from its other products that have little to do with display advertising. Because of this self-preferencing, they might argue, innovative new entrants cannot compete on an equal footing, so the market loses out on incremental competition because of the advantages Google gets from being the world’s biggest search company, owning YouTube, running Google Maps and Google Cloud, and so on.
The most significant examples of this are Google’s use of data from other products—like location data from Maps or viewing history from YouTube—to target ads more effectively; its ability to enable advertisers placing search ads to easily place display ads through the same interface; its introduction of faster and more efficient auction processes that sidestep the existing tools developed by other third-party ad exchanges; and its design of its own tool (“open bidding”) for aggregating auction bids for advertising space to compete with (rather than incorporate) an alternative tool (“header bidding”) that is arguably faster, but costs more money to use.
These allegations require detailed consideration, and in a future paper we will attempt to assess them in detail. But in thinking about them now, it may be useful to consider the remedies that could be imposed to address them, assuming they do diminish rivals’ ability to compete with Google: what interventions could be made to help the market work better for advertisers, publishers, and users?
We can think of remedies as falling into two broad buckets: remedies that stop Google from doing things that improve the quality of its own offerings, thus making it harder for others to keep up; and remedies that require it to help rivals improve their products in ways otherwise accessible only to Google (e.g., by making Google’s products interoperable with third-party services) without inherently diminishing the quality of Google’s own products.
The first camp of these, what we might call “status quo minus,” includes rules banning Google from using data from its other products or offering single order forms for advertisers, or, in the extreme, a structural remedy that “breaks up” Google by either forcing it to sell off its display ad business altogether or to sell off elements of it.
What is striking about these kinds of interventions is that all of them “work” by making Google worse for those that use it. Restrictions on Google’s ability to use data from other products, for example, will make its service more expensive and less effective for those who use it. Ads will be less well-targeted and therefore less effective. This will lead to lower bids from advertisers. Lower ad prices will be transmitted through the auction process to produce lower payments for publishers. Reduced publisher revenues will mean some content providers exit. Users will thus be confronted with less available content and ads that are less relevant to them and thus, presumably, more annoying. In other words: No one will be better off, and most likely everyone will be worse off.
The reason a “single order form” helps Google is that it is useful to advertisers, the same way it’s useful to be able to buy all your groceries at one store instead of lots of different ones. Similarly, vertical integration in the “ad stack” allows for a faster, cheaper, and simpler product for users on all sides of the market. A different kind of integration that has been criticized by others, where third-party intermediaries can bid more quickly if they host on Google Cloud, benefits publishers and users because it speeds up auction time, allowing websites to load faster. So does Google’s unified alternative to “header bidding,” giving a speed boost that is apparently valuable enough to publishers that they will pay for it.
So who would benefit from stopping Google from doing these things, or even forcing Google to sell its operations in this area? Not advertisers or publishers. Maybe Google’s rival ad intermediaries would; presumably, artificially hamstringing Google’s products would make it easier for them to compete with Google. But if so, it’s difficult to see how this would be an overall improvement. It is even harder to see how this would improve the competitive process—the very goal of antitrust. Rather, any increase in the competitiveness of rivals would result not from making their products better, but from making Google’s product worse. That is a weakening of competition, not its promotion.
On the other hand, interventions that aim to make Google’s products more interoperable at least do not fall prey to this problem. Such “status quo plus” interventions would aim to take the benefits of Google’s products and innovations and allow more companies to use them to improve their own competing products. Not surprisingly, such interventions would be more in line with the conclusions the CMA came to than the divestitures and operating restrictions proposed by Scott Morton and Dinielli, as well as (reportedly) state attorneys general considering a case against Google.
But mandated interoperability raises a host of different concerns: extensive and uncertain rulemaking, ongoing regulatory oversight, and, likely, price controls, all of which would limit Google’s ability to experiment with and improve its products. The history of such mandated duties to deal or compulsory licenses is a troubled one, at best. But even if, for the sake of argument, we concluded that these kinds of remedies were desirable, they are difficult to impose via an antitrust lawsuit of the kind that the Department of Justice is expected to launch. Most importantly, if the conclusion of Google’s critics is that Google’s main offense is offering a product that is just too good to compete with without regulating it like a utility, with all the costs to innovation that that would entail, maybe we ought to think twice about whether an antitrust intervention is really worth it at all.
As the initial shock of the COVID quarantine wanes, the Techlash waxes again, bringing with it a raft of renewed legislative proposals to take on Big Tech. Prominent among these is the EARN IT Act (the Act), a bipartisan proposal to create a new national commission responsible for proposing best practices designed to mitigate the proliferation of child sexual abuse material (CSAM) online. The Act’s proposal is seemingly simple, but its fallout would be anything but.
Section 230 of the Communications Decency Act currently provides online services like Facebook and Google with a robust protection from liability that could arise as a result of the behavior of their users. Under the Act, this liability immunity would be conditioned on compliance with “best practices” that are produced by the new commission and adopted by Congress.
Supporters of the Act believe that the best practices are necessary to ensure that platform companies effectively police CSAM, while critics assert that the Act is merely a backdoor for law enforcement to achieve its long-sought goal of defeating strong encryption.
The truth of EARN IT—and how best to police CSAM—is more complicated. Ultimately, Congress needs to be very careful not to exceed its institutional capabilities by allowing the new commission to venture into areas beyond its (and Congress’s) expertise.
More can be done about illegal conduct online
On its face, conditioning Section 230’s liability protections on certain platform conduct is not necessarily objectionable. There is undoubtedly some abuse of services online, and it is also entirely possible that the incentives for finding and policing CSAM are not perfectly aligned with other conflicting incentives private actors face. It is, of course, first the responsibility of the government to prevent crime, but it is also consistent with past practice to expect private actors to assist such policing when feasible.
By the same token, an immunity shield is necessary in some form to facilitate user-generated communications and content at scale. Certainly in 1996 (when Section 230 was enacted), firms facing conflicting liability standards required some degree of immunity in order to launch their services. Today, the control of runaway liability remains important as billions of user interactions take place on platforms daily. Relatedly, the liability shield also operates as a way to promote Good Samaritan self-policing—a measure that surely helps avoid actual censorship by governments, as opposed to the spurious claims made by those like Senator Hawley.
In this context, the Act is ambiguous. It creates a commission composed of a fairly wide cross-section of interested parties—from law enforcement, to victims, to platforms, to legal and technical experts—to recommend best practices. That hardly seems a bad thing, as more minds considering how to design a uniform approach to controlling CSAM would be beneficial—at least theoretically.
In practice, however, there are real pitfalls to imbuing any group of such thinkers—especially ones selected by political actors—with an actual or de facto final say over such practices. Much of this domain will continue to be mercurial, the rules necessary for one type of platform may not translate well into general principles, and it is possible that a public board will make recommendations that quickly tax Congress’s institutional limits. To the extent possible, Congress should be looking at ways to encourage private firms to work together to develop best practices in light of their unique knowledge about their products and their businesses.
In fact, Facebook has already begun experimenting with an analogous idea in its recently announced Oversight Board. There, Facebook is developing a governance structure by giving the Oversight Board the ability to review content moderation decisions on the Facebook platform.
Insofar as the commission created by the Act works to create best practices that align the incentives of firms with the removal of CSAM, it has a lot to offer. Yet a better solution than the Act would be for Congress to establish policy that works with the private processes already in development.
Short of that more ideal solution, however, it is critical that the Act establish the boundaries of the commission’s remit very clearly and keep it from venturing into technical areas outside of its expertise.
The complicated problem of encryption (and technology)
The Act has a major problem insofar as the commission has a fairly open-ended remit to recommend best practices, and this breadth could ultimately result in dangerous unintended consequences.
The Act calls for only two out of nineteen members to have some form of computer science background. A panel of largely non-technical experts should not be designing any technology—encryption or otherwise.
To be sure, there are some interesting proposals to facilitate access to encrypted materials (notably, multi-key escrow systems and self-escrow). But such recommendations are beyond the scope of what the commission can responsibly proffer.
If Congress proceeds with the Act, it should put an explicit prohibition in the law preventing the new commission from recommending rules that would interfere with the design of complex technology, such as by recommending that encryption be weakened to provide access to law enforcement, mandating particular network architectures, or modifying the technical details of data storage.
Congress is right to consider whether there is better policy to be had for aligning the incentives of the platforms with the deterrence of CSAM—including possible conditional access to Section 230’s liability shield. But just because there is a policy balance to be struck between policing CSAM and platform liability protection doesn’t mean that the new commission is suited to vetting, adopting, and updating technical standards; it clearly isn’t. Conversely, to the extent that encryption and similarly complex technologies are to be subject to broad policy change, it should be through an explicit and considered democratic process, not as a by-product of the Act.
This is the third in a series of TOTM blog posts discussing the Commission’s recently published Google Android decision (the first post can be found here, and the second here). It draws on research from a soon-to-be published ICLE white paper.
(Comparison of Google and Apple’s smartphone business models. Red $ symbols represent money invested; Green $ symbols represent sources of revenue; Black lines show the extent of Google and Apple’s control over their respective platforms)
For the third in my series of posts about the Google Android decision, I will delve into the theories of harm identified by the Commission.
The big picture is that the Commission’s analysis was particularly one-sided. The Commission failed to adequately account for the complex business challenges that Google faced – such as monetizing the Android platform and shielding it from fragmentation. To make matters worse, its decision rests on dubious factual conclusions and extrapolations. The result is a highly unbalanced assessment that could ultimately hamstring Google and prevent it from effectively competing with its smartphone rivals, Apple in particular.
1. Tying without foreclosure
The first theory of harm identified by the Commission concerned the tying of Google’s Search app with the Google Play app, and of Google’s Chrome app with both the Google Play and Google Search apps.
Oversimplifying, Google required its OEMs to choose between pre-installing a bundle of Google applications and forgoing some of the most important ones (notably Google Play). The Commission argued that this gave Google a competitive advantage that rivals could not emulate (even though Google’s terms did not preclude OEMs from simultaneously pre-installing rival web browsers and search apps).
To support this conclusion, the Commission notably asserted that no alternative distribution channel would enable rivals to offset the competitive advantage that Google obtained from tying. This finding is, at best, dubious.
For a start, the Commission claimed that user downloads were not a viable alternative distribution channel, even though roughly 250 million apps are downloaded on Google’s Play store every day.
The Commission sought to overcome this inconvenient statistic by arguing that Android users were unlikely to download apps that duplicated the functionalities of a pre-installed app – why download a new browser if there is already one on the user’s phone?
But this reasoning is far from watertight. For instance, the 17th most-downloaded Android app, the “Super-Bright Led Flashlight” (with more than 587 million downloads), mostly replicates a feature that is pre-installed on all Android devices. Moreover, the five most-downloaded Android apps (Facebook, Facebook Messenger, WhatsApp, Instagram and Skype) provide functionalities that are, to some extent at least, offered by apps that have, at some point or another, been pre-installed on many Android devices (notably Google Hangouts, Google Photos and Google+).
The Commission countered that communications apps were not appropriate counterexamples, because they benefit from network effects. But this overlooks the fact that the most successful communications and social media apps benefited from very limited network effects when they were launched, and that they succeeded despite the presence of competing pre-installed apps. Direct user downloads are thus a far more powerful vector of competition than the Commission cared to admit.
Similarly concerning is the Commission’s contention that paying OEMs or Mobile Network Operators (“MNOs”) to pre-install their search apps was not a viable alternative for Google’s rivals. Some of the reasons cited by the Commission to support this finding are particularly troubling.
For instance, the Commission claimed that high transaction costs prevented parties from concluding these pre-installation deals.
But pre-installation agreements are common in the smartphone industry. In recent years, Microsoft struck a deal with Samsung to pre-install some of its office apps on the Galaxy Note 10. It also paid Verizon to pre-install the Bing search app on a number of Samsung phones, in 2010. Likewise, a number of Russian internet companies have been in talks with Huawei to pre-install their apps on its devices. And Yahoo reached an agreement with Mozilla to make it the default search engine for its web browser. Transaction costs do not appear to have been an obstacle in any of these cases.
The Commission also claimed that duplicating too many apps would cause storage space issues on devices.
And yet, a back-of-the-envelope calculation suggests that storage space is unlikely to be a major issue. For instance, the Bing Search app has a download size of 24MB, whereas typical entry-level smartphones generally have an internal memory of at least 64GB (which can often be extended to more than 1TB with the addition of an SD card). The Bing Search app thus takes up less than one-thousandth of these devices’ internal storage. Granted, the Yahoo search app is slightly larger than Microsoft’s, weighing in at almost 100MB. But this is still insignificant compared to a modern device’s storage space.
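The arithmetic above is easy to verify. The calculation below uses the app sizes cited in the text; the 64GB figure is the entry-level storage assumption, also from the text:

```python
# Sanity-check of the storage claims above, with sizes in megabytes.
MB = 1
GB = 1024 * MB

bing_app = 24 * MB        # cited download size of the Bing Search app
yahoo_app = 100 * MB      # approximate size of the Yahoo search app
device_storage = 64 * GB  # typical entry-level internal memory

bing_share = bing_app / device_storage
print(f"{bing_share:.6f}")  # fraction of storage taken by the Bing app

# Bing takes up less than one-thousandth of internal storage:
assert bing_share < 1 / 1000
# Even the larger Yahoo app uses well under 1% of storage:
assert yahoo_app / device_storage < 1 / 100
```

The Bing app occupies roughly 0.04% of a 64GB device, comfortably below the one-thousandth (0.1%) threshold the text mentions.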
Finally, the Commission claimed that rivals were contractually prevented from concluding exclusive pre-installation deals because Google’s own apps would also be pre-installed on devices.
However, while it is true that Google’s apps would still be present on a device, rivals could still pay for their applications to be set as default. Even Yandex – a plaintiff – recognized that this would be a valuable solution. In its own words (taken from the Commission’s decision):
Pre-installation alongside Google would be of some benefit to an alternative general search provider such as Yandex […] given the importance of default status and pre-installation on home screen, a level playing field will not be established unless there is a meaningful competition for default status instead of Google.
In short, the Commission failed to convincingly establish that Google’s contractual terms prevented as-efficient rivals from effectively distributing their applications on Android smartphones. The evidence it adduced was simply too thin to support anything close to that conclusion.
2. The threat of fragmentation
The Commission’s second theory of harm concerned the so-called “anti-fragmentation” agreements concluded between Google and OEMs. In a nutshell, Google agreed to license the Google Search and Google Play apps only to OEMs that sold “Android Compatible” devices (i.e., devices sold with a version of Android that did not stray too far from Google’s most recent version).
According to Google, this requirement was necessary to limit the number of Android forks that were present on the market (as well as older versions of the standard Android). This, in turn, reduced development costs and prevented the Android platform from unraveling.
The Commission disagreed, arguing that Google’s anti-fragmentation provisions thwarted competition from potential Android forks (i.e. modified versions of the Android OS).
This conclusion raises at least two critical questions: The first is whether these agreements were necessary to ensure the survival and competitiveness of the Android platform, and the second is why “open” platforms should be precluded from partly replicating a feature that is essential to rival “closed” platforms, such as Apple’s iOS.
Let us start with the necessity, or not, of Google’s contractual terms. If fragmentation did indeed pose an existential threat to the Android ecosystem, and anti-fragmentation agreements averted this threat, then it is hard to make a case that they thwarted competition. The Android platform would simply not have been as viable without them.
The Commission dismissed this possibility, relying largely on statements made by Google’s rivals (many of whom likely stood to benefit from the suppression of these agreements). For instance, the Commission cited comments that it received from Yandex – one of the plaintiffs in the case:
(1166) The fact that fragmentation can bring significant benefits is also confirmed by third-party respondents to requests for information:
(2) Yandex, which stated: “Whilst the development of Android forks certainly has an impact on the fragmentation of the Android ecosystem in terms of additional development being required to adapt applications for various versions of the OS, the benefits of fragmentation outweigh the downsides…”
Ironically, the Commission relied on Yandex’s statements while, at the same time, dismissing arguments made by Android app developers on the grounds that they were conflicted. In its own words:
Google attached to its Response to the Statement of Objections 36 letters from OEMs and app developers supporting Google’s views about the dangers of fragmentation […] It appears likely that the authors of the 36 letters were influenced by Google when drafting or signing those letters.
More fundamentally, the Commission’s claim that fragmentation was not a significant threat is at odds with an almost unanimous agreement among industry insiders.
For example, while it is not dispositive, a rapid search for the terms “Google Android fragmentation”, using the DuckDuckGo search engine, leads to results that cut strongly against the Commission’s conclusions. Of the first ten results, only one could remotely be construed as claiming that fragmentation was not an issue. The others paint a very different picture (below are some of the most salient excerpts):
“There’s a fairly universal perception that Android fragmentation is a barrier to a consistent user experience, a security risk, and a challenge for app developers.” (here)
“Android fragmentation, a problem with the operating system from its inception, has only become more acute an issue over time, as more users clamor for the latest and greatest software to arrive on their phones.” (here)
“Android Fragmentation a Huge Problem: Study.” (here)
“Google’s Android fragmentation fix still isn’t working at all.” (here)
“Does Google care about Android fragmentation? Not now—but it should.” (here).
“‘This is very frustrating to users and a major headache for Google… and a challenge for corporate IT,’ Gold said, explaining that there are a large number of older, not fully compatible devices running various versions of Android.” (here)
Perhaps more importantly, one might question why Google should be treated differently than rivals that operate closed platforms, such as Apple, Microsoft and Blackberry (before the last two mostly exited the Mobile OS market). By definition, these platforms limit all potential forks (because they are based on proprietary software).
The Commission argued that Apple, Microsoft and Blackberry had opted to run “closed” platforms, which gave them the right to prevent rivals from copying their software.
While this answer has some superficial appeal, it is incomplete. Android may be an open source project, but this is not true of Google’s proprietary apps. Why should it be forced to offer them to rivals who would use them to undermine its platform? The Commission did not meaningfully consider this question.
And yet, industry insiders routinely compare the fragmentation of Apple’s iOS and Google’s Android OS in order to gauge the state of competition between the two firms. For instance, one commentator noted:
[T]he gap between iOS and Android users running the latest major versions of their operating systems has never looked worse for Google.
Likewise, an article published in Forbes concluded that Google’s OEMs were slow at providing users with updates, and that this might drive users and developers away from the Android platform:
For many users the Android experience isn’t as up-to-date as Apple’s iOS. Users could buy the latest Android phone now and they may see one major OS update and nothing else. […] Apple users can be pretty sure that they’ll get at least two years of updates, although the company never states how long it intends to support devices.
However this problem, in general, makes it harder for developers and will almost certainly have some inherent security problems. Developers, for example, will need to keep pushing updates – particularly for security issues – to many different versions. This is likely a time-consuming and expensive process.
To recap, the Commission’s decision paints a world that is either black or white: either firms operate closed platforms, and they are then free to limit fragmentation as they see fit, or they create open platforms, in which case they are deemed to have accepted much higher levels of fragmentation.
This stands in stark contrast to industry coverage, which suggests that users and developers of both closed and open platforms care a great deal about fragmentation, and demand that measures be put in place to address it. If this is true, then the relative fragmentation of open and closed platforms has an important impact on their competitive performance, and the Commission was wrong to reject comparisons between Google and its closed ecosystem rivals.
3. Google’s revenue sharing agreements
The last part of the Commission’s case centered on revenue sharing agreements between Google and its OEMs/MNOs. Google paid these parties to exclusively place its search app on the homescreen of their devices. According to the Commission, these payments reduced OEMs’ and MNOs’ incentives to pre-install competing general search apps.
However, to reach this conclusion, the Commission had to make the critical (and highly dubious) assumption that rivals could not match Google’s payments.
To get to that point, it notably assumed that rival search engines would be unable to increase their share of mobile search results beyond their share of desktop search results. The underlying intuition appears to be that users who freely chose Google Search on desktop (Google Search & Chrome are not set as default on desktop PCs) could not be convinced to opt for a rival search engine on mobile.
But this ignores the possibility that rivals might offer an innovative app that swayed users away from their preferred desktop search engine.
More importantly, this reasoning cuts against the Commission’s own claim that pre-installation and default placement were critical. If most users dismiss their device’s default search app and search engine in favor of their preferred ones, then pre-installation and default placement are largely immaterial, and Google’s revenue sharing agreements could not possibly have thwarted competition (because they did not prevent users from independently installing their preferred search app). On the other hand, if users are easily swayed by default placement, then there is no reason to believe that rivals could not exceed their desktop market share on mobile phones.
The Commission was also wrong when it claimed that rival search engines were at a disadvantage because of the structure of Google’s revenue sharing payments. OEMs and MNOs allegedly lost all of their payments from Google if they placed a rival’s search app on the home screen of even a single line of handsets.
The key question is the following: could Google automatically tilt the scales to its advantage by structuring the revenue sharing payments in this way? The answer appears to be no.
For instance, it has been argued that exclusivity may intensify competition for distribution. Conversely, other scholars have claimed that exclusivity may deter entry in network industries. Unfortunately, the Commission did not examine whether Google’s revenue sharing agreements fell within the latter category.
It thus provided insufficient evidence to support its conclusion that the revenue sharing agreements reduced OEMs’ (and MNOs’) incentives to pre-install competing general search apps, rather than merely increasing competition “for the market”.
To summarize, the Commission overestimated the effect that Google’s behavior might have on its rivals. It almost entirely ignored the justifications that Google put forward and relied heavily on statements made by its rivals. The result is a one-sided decision that puts undue strain on the Android business model, while providing few, if any, benefits in return.
This is the second in a series of TOTM blog posts discussing the Commission’s recently published Google Android decision (the first post can be found here). It draws on research from a soon-to-be published ICLE white paper.
This improper market definition might not be so problematic if the Commission had then proceeded to undertake a detailed (and balanced) assessment of the competitive conditions that existed in the markets where Google operates (including the competitive constraints imposed by Apple).
Unfortunately, this was not the case. The following paragraphs respond to some of the Commission’s most problematic arguments regarding the existence of barriers to entry, and the absence of competitive constraints on Google’s behavior.
The overarching theme is that the Commission failed to quantify its findings and repeatedly drew conclusions that did not follow from the facts cited. As a result, it was wrong to conclude that Google faced little competitive pressure from Apple and other rivals.
1. Significant investments and network effects ≠ barriers to entry
In its decision, the Commission notably argued that significant investments (millions of euros) are required to set up a mobile OS and app store. It also argued that the market for licensable mobile operating systems gave rise to network effects.
But contrary to the Commission’s claims, neither of these two factors is, in and of itself, sufficient to establish the existence of barriers to entry (even under EU competition law’s loose definition of the term, rather than Stigler’s more technical definition).
Take the argument that significant investments are required to enter the mobile OS market.
The main problem is that virtually every market requires significant investments on the part of firms that seek to enter. Not all of these costs can be seen as barriers to entry, or the concept would lose all practical relevance.
For example, purchasing a Boeing 737 Max airplane reportedly costs at least $74 million. Does this mean that incumbents in the airline industry are necessarily shielded from competition? Of course not.
Instead, the relevant question is whether an entrant with a superior business model could access the capital required to purchase an airplane and challenge the industry’s incumbents.
Returning to the market for mobile OSs, the Commission should thus have questioned whether as-efficient rivals could find the funds required to produce a mobile OS. If the answer was yes, then the investments highlighted by the Commission were largely immaterial. As it happens, several firms have indeed produced competing OSs, including CyanogenMod, LineageOS and Tizen.
The same is true of the Commission’s conclusion that network effects shielded Google from competitors. While network effects almost certainly play some role in the mobile OS and app store markets, it does not follow that they act as barriers to entry in competition law terms.
As Paul Belleflamme recently argued, it is a myth that network effects can never be overcome. And as I have written elsewhere, the most important question is whether users could effectively coordinate their behavior and switch towards a superior platform, if one arose (See also Dan Spulber’s excellent article on this point).
The Commission completely ignored this critical question in its discussion of network effects.
2. The failure of competitors is not proof of barriers to entry
Just as problematically, the Commission wrongly concluded that the failure of previous attempts to enter the market was proof of barriers to entry.
This is the epitome of the Black Swan fallacy (i.e. inferring that all swans are white because you have never seen a relatively rare, but not irrelevant, black swan).
The failure of rivals is equally consistent with any number of propositions:
There were indeed barriers to entry;
Google’s products were extremely good (in ways that rivals and the Commission failed to grasp);
Google responded to intense competitive pressure by continuously improving its product (and rivals thus chose to stay out of the market).
The Commission did not demonstrate that its own inference was the right one, nor did it even demonstrate any awareness that other explanations were at least equally plausible.
3. First mover advantage?
Much of the same can be said about the Commission’s observation that Google enjoyed a first mover advantage.
The elephant in the room is that Google was not the first mover in the smartphone market (and even less so in the mobile phone industry). The Commission attempted to sidestep this uncomfortable truth by arguing that Google was the first mover in the Android app store market. It then concluded that Google had an advantage because users were familiar with Android’s app store.
To call this reasoning “naive” would be too kind. Maybe consumers are familiar with Google’s products today, but they certainly weren’t when Google entered the market.
Why would something that did not hinder Google (i.e. users’ lack of familiarity with its products, as opposed to those of incumbents such as Nokia or Blackberry) have the opposite effect on its future rivals?
Moreover, even if rivals had to replicate Android’s user experience (and that of its app store) to prove successful, the Commission did not show that there was anything that prevented them from doing so — a particularly glaring omission given the open-source nature of the Android OS.
The result is that, at best, the Commission identified a correlation but not causality. Google may arguably have been the first, and users might have been more familiar with its offerings, but this still does not prove that Android flourished (and rivals failed) because of this.
4. It does not matter that users “do not take the OS into account” when they purchase a device
The Commission also concluded that alternatives to Android (notably Apple’s iOS and App Store) exercised insufficient competitive constraints on Google. Among other things, it argued that this was because users do not take the OS into account when they purchase a smartphone (so Google could allegedly degrade Android without fear of losing users to Apple).
In doing so, the Commission failed to grasp that buyers might base their purchases on a device’s OS without knowing it.
Some consumers will simply follow the advice of a friend, family member or buyer’s guide. Acutely aware of their own shortcomings, they thus rely on someone else who does take the phone’s OS into account.
But even when they are acting independently, unsavvy consumers may still be driven by technical considerations. They might rely on a brand’s reputation for providing cutting edge devices (which, per the Commission, is the most important driver of purchase decisions), or on a device’s “feel” when they try it in a showroom. In both cases, consumers’ choices could indirectly be influenced by a phone’s OS.
In more technical terms, a phone’s hardware and software are complementary goods. In these settings, it is extremely difficult to attribute overall improvements to just one of the two complements. For instance, a powerful OS and chipset are both equally necessary to deliver a responsive phone. The fact that consumers may misattribute a device’s performance to one of these two complements says nothing about their underlying contribution to a strong end-product (which, in turn, drives purchase decisions). Likewise, battery life is reportedly one of the most important features for users, yet few realize that a phone’s OS has a large impact on it.
Finally, if consumers were really indifferent to the phone’s operating system, then the Commission should have dropped at least part of its case against Google. The Commission’s claim that Google’s anti-fragmentation agreements harmed consumers (by reducing OS competition) has no purchase if Android is provided free of charge and consumers are indifferent to non-price parameters, such as the quality of a phone’s OS.
5. Google’s users were not “captured”
Finally, the Commission claimed that consumers are loyal to their smartphone brand and that competition for first time buyers was insufficient to constrain Google’s behavior against its “captured” installed base.
It notably found that 82% of Android users stick with Android when they change phones (compared to 78% for Apple), and that 75% of new smartphones are sold to existing users.
The Commission asserted, without further evidence, that these numbers proved there was little competition between Android and iOS.
But is this really so? In almost all markets, consumers likely exhibit at least some loyalty to their preferred brand. At what point does this become an obstacle to interbrand competition? The Commission offered no benchmark against which to assess its claims.
And although inter-industry comparisons of churn rates should be taken with a pinch of salt, it is worth noting that the Commission’s implied 18% churn rate for Android is nothing out of the ordinary (see, e.g., here, here, and here), including for industries that could not remotely be called anticompetitive.
To make matters worse, the Commission’s own claimed figures suggest that a large share of sales remained contestable (roughly 39%).
Imagine that, every year, 100 devices are sold in Europe (75 to existing users and 25 to new users, according to the Commission’s figures). Imagine further that the installed base of users is split 76–24 in favor of Android. Under the figures cited by the Commission, it follows that at least 39% of these sales are contestable.
According to the Commission’s figures, there would be 57 existing Android users (76% of 75) and 18 Apple users (24% of 75), of which roughly 10 (18%) and 4 (22%), respectively, switch brands in any given year. There would also be 25 new users who, even according to the Commission, do not display brand loyalty. The result is that out of 100 purchasers, 25 show no brand loyalty and 14 switch brands. And even this completely ignores the number of consumers who consider switching but choose not to after assessing the competitive options.
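The back-of-the-envelope arithmetic above can be sketched in a few lines (the inputs are the figures cited in the post; the rounding to whole devices is mine):

```python
# Contestable-sales arithmetic implied by the Commission's own figures:
# 100 devices sold per year, 75 to existing users, installed base split
# 76/24 in favor of Android, churn of 18% (Android) and 22% (Apple).
total_sales = 100
repeat_sales = 75                              # sold to existing users
new_user_sales = total_sales - repeat_sales    # 25 first-time buyers, no loyalty

android_share, apple_share = 0.76, 0.24
android_repeat = round(repeat_sales * android_share)   # 57 existing Android buyers
apple_repeat = round(repeat_sales * apple_share)       # 18 existing Apple buyers

android_switchers = round(android_repeat * 0.18)       # ~10 leave Android
apple_switchers = round(apple_repeat * 0.22)           # ~4 leave Apple

contestable = new_user_sales + android_switchers + apple_switchers
print(contestable)  # 39 of 100 sales go to buyers showing no brand lock-in
```

Even on the Commission’s own numbers, then, roughly two-fifths of annual sales were up for grabs, before counting users who weighed a switch and decided against it.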
In short, the preceding paragraphs argue that the Commission did not meet the requisite burden of proof to establish Google’s dominance. Of course, it is one thing to show that the Commission’s reasoning was unsound (it is) and another to establish that its overall conclusion was wrong.
At the very least, I hope these paragraphs will convey a sense that the Commission loaded the dice, so to speak. Throughout the first half of its lengthy decision, it interpreted every piece of evidence against Google, drew significant inferences from benign pieces of information, and often resorted to circular reasoning.
The following post in this blog series argues that these errors also permeate the Commission’s analysis of Google’s allegedly anticompetitive behavior.
Yesterday was President Trump’s big “Social Media Summit” where he got together with a number of right-wing firebrands to decry the power of Big Tech to censor conservatives online. According to the Wall Street Journal:
Mr. Trump attacked social-media companies he says are trying to silence individuals and groups with right-leaning views, without presenting specific evidence. He said he was directing his administration to “explore all legislative and regulatory solutions to protect free speech and the free speech of all Americans.”
“Big Tech must not censor the voices of the American people,” Mr. Trump told a crowd of more than 100 allies who cheered him on. “This new technology is so important and it has to be used fairly.”
Despite the simplistic narrative tying President Trump’s vision of the world to conservatism, there is nothing conservative about his views on the First Amendment and how it applies to social media companies.
I have noted in several places before that there is a conflict of visions when it comes to whether the First Amendment protects a negative or positive conception of free speech. For those unfamiliar with the distinction: it comes from philosopher Isaiah Berlin, who identified negative liberty as freedom from external interference, and positive liberty as freedom to do something, including having the power and resources necessary to do that thing. Discussions of the First Amendment’s protection of free speech often elide this distinction.
With respect to speech, the negative conception of liberty recognizes that individual property owners can control what is said on their property, for example. To force property owners to allow speakers/speech on their property that they don’t desire would actually be a violation of their liberty — what the Supreme Court calls “compelled speech.” The First Amendment, consistent with this view, generally protects speech from government interference (with very few, narrow exceptions), while allowing private regulation of speech (again, with very few, narrow exceptions).
Contrary to the original meaning of the First Amendment and the weight of Supreme Court precedent, President Trump’s view of the First Amendment is that it protects a positive conception of liberty — one under which the government, in order to facilitate its conception of “free speech,” has the right and even the duty to impose restrictions on how private actors regulate speech on their property (in this case, social media companies).
But if Trump’s view were adopted, discretion as to what is necessary to facilitate free speech would be left to future presidents and congresses, undermining the bedrock conservative principle of the Constitution as a shield against government regulation, all falsely in the name of protecting speech. This is counter to the general approach of modern conservatism (but not, of course, necessarily Republicanism) in the United States, including that of many of President Trump’s own judicial and agency appointees. Indeed, it is actually more consistent with the views of modern progressives — especially within the FCC.
For instance, the current conservative bloc on the Supreme Court (over the dissent of the four liberal Justices) recently reaffirmed the view that the First Amendment applies only to state action in Manhattan Community Access Corp. v. Halleck. The opinion, written by Trump appointee Justice Brett Kavanaugh, states plainly that:
Ratified in 1791, the First Amendment provides in relevant part that “Congress shall make no law . . . abridging the freedom of speech.” Ratified in 1868, the Fourteenth Amendment makes the First Amendment’s Free Speech Clause applicable against the States: “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law . . . .” §1. The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech… In accord with the text and structure of the Constitution, this Court’s state-action doctrine distinguishes the government from individuals and private entities. By enforcing that constitutional boundary between the governmental and the private, the state-action doctrine protects a robust sphere of individual liberty. (Emphasis added).
Former Stanford Law dean and First Amendment scholar Kathleen Sullivan has summed up the very different approaches to free speech pursued by conservatives and progressives (insofar as they are represented by the “conservative” and “liberal” blocs on the Supreme Court):
In the first vision…, free speech rights serve an overarching interest in political equality. Free speech as equality embraces first an antidiscrimination principle: in upholding the speech rights of anarchists, syndicalists, communists, civil rights marchers, Maoist flag burners, and other marginal, dissident, or unorthodox speakers, the Court protects members of ideological minorities who are likely to be the target of the majority’s animus or selective indifference…. By invalidating conditions on speakers’ use of public land, facilities, and funds, a long line of speech cases in the free-speech-as-equality tradition ensures public subvention of speech expressing “the poorly financed causes of little people.” On the equality-based view of free speech, it follows that the well-financed causes of big people (or big corporations) do not merit special judicial protection from political regulation. And because, in this view, the value of equality is prior to the value of speech, politically disadvantaged speech prevails over regulation but regulation promoting political equality prevails over speech.
The second vision of free speech, by contrast, sees free speech as serving the interest of political liberty. On this view…, the First Amendment is a negative check on government tyranny, and treats with skepticism all government efforts at speech suppression that might skew the private ordering of ideas. And on this view, members of the public are trusted to make their own individual evaluations of speech, and government is forbidden to intervene for paternalistic or redistributive reasons. Government intervention might be warranted to correct certain allocative inefficiencies in the way that speech transactions take place, but otherwise, ideas are best left to a freely competitive ideological market.
The outcome of Citizens United is best explained as representing a triumph of the libertarian over the egalitarian vision of free speech. Justice Kennedy’s opinion for the Court, joined by Chief Justice Roberts and Justices Scalia, Thomas, and Alito, articulates a robust vision of free speech as serving political liberty; the dissenting opinion by Justice Stevens, joined by Justices Ginsburg, Breyer, and Sotomayor, sets forth in depth the countervailing egalitarian view. (Emphasis added).
President Trump’s views on the regulation of private speech are alarmingly consistent with those embraced by the Court’s progressives to “protect members of ideological minorities who are likely to be the target of the majority’s animus or selective indifference” — exactly the sort of conservative “victimhood” that Trump and his online supporters have somehow concocted to describe themselves.
FCC Commissioner Jessica Rosenworcel, for example, has defended the agency’s leased access rules in precisely these terms:

The First Amendment does more than protect the interests of corporations. As courts have long recognized, it is a force to support individual interest in self-expression and the right of the public to receive information and ideas. As Justice Black so eloquently put it, “the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public.” Our leased access rules provide opportunity for civic participation. They enhance the marketplace of ideas by increasing the number of speakers and the variety of viewpoints. They help preserve the possibility of a diverse, pluralistic medium—just as Congress called for the Cable Communications Policy Act… The proper inquiry then, is not simply whether corporations providing channel capacity have First Amendment rights, but whether this law abridges expression that the First Amendment was meant to protect. Here, our leased access rules are not content-based and their purpose and effect is to promote free speech. Moreover, they accomplish this in a narrowly-tailored way that does not substantially burden more speech than is necessary to further important interests. In other words, they are not at odds with the First Amendment, but instead help effectuate its purpose for all of us. (Emphasis added).
Consistent with the progressive approach, this leaves discretion in the hands of “experts” (like Rosenworcel) to determine what needs to be done in order to protect the underlying value of free speech in the First Amendment through government regulation, even if it means compelling speech upon private actors.
Trump’s view of what the First Amendment’s free speech protections entail when it comes to social media companies is inconsistent with the conception of the Constitution-as-guarantor-of-negative-liberty that conservatives have long embraced.
Principle #2: Any new intermediary liability law must not target constitutionally protected speech.
The government shouldn’t require—or coerce—intermediaries to remove constitutionally protected speech that the government cannot prohibit directly. Such demands violate the First Amendment. Also, imposing broad liability for user speech incentivizes services to err on the side of taking down speech, resulting in overbroad censorship—or even avoid offering speech forums altogether.
Principle #4: Section 230 does not, and should not, require “neutrality.”
Publishing third-party content online never can be “neutral.” Indeed, every publication decision will necessarily prioritize some content at the expense of other content. Even an “objective” approach, such as presenting content in reverse chronological order, isn’t neutral because it prioritizes recency over other values. By protecting the prioritization, de-prioritization, and removal of content, Section 230 provides Internet services with the legal certainty they need to do the socially beneficial work of minimizing harmful content.
The idea that social media should be subject to a nondiscrimination requirement — for which President Trump and others like Senator Josh Hawley have been arguing lately — is flatly contrary to Section 230 — as well as to the First Amendment.
Conservatives upset about “social media discrimination” need to think hard about whether they really want to adopt this sort of position out of convenience, when the tradition with which they align rejects it — rightly — in nearly all other venues. Even if you believe that Facebook, Google, and Twitter are trying to make it harder for conservative voices to be heard (despite all evidence to the contrary), it is imprudent to reject constitutional first principles for a temporary policy victory. In fact, there’s nothing at all “conservative” about an abdication of the traditional principle linking freedom to property for the sake of political expediency.