
Last week the editorial board of the Washington Post penned an excellent editorial responding to the European Commission’s announcement of its decision in its Google Shopping investigation. Here’s the key language from the editorial:

Whether the demise of any of [the complaining comparison shopping sites] is specifically traceable to Google, however, is not so clear. Also unclear is the aggregate harm from Google’s practices to consumers, as opposed to the unlucky companies. Birkenstock-seekers may well prefer to see a Google-generated list of vendors first, instead of clicking around to other sites…. Those who aren’t happy anyway have other options. Indeed, the rise of comparison shopping on giants such as Amazon and eBay makes concerns that Google might exercise untrammeled power over e-commerce seem, well, a bit dated…. Who knows? In a few years we might be talking about how Facebook leveraged its 2 billion users to disrupt the whole space.

That’s actually a pretty thorough, if succinct, summary of the basic problems with the Commission’s case (based on its PR and Factsheet, at least; it hasn’t released the full decision yet).

I’ll have more to say on the decision in due course, but for now I want to elaborate on two of the points raised by the WaPo editorial board, both in service of its crucial rejoinder to the Commission that “Also unclear is the aggregate harm from Google’s practices to consumers, as opposed to the unlucky companies.”

First, the WaPo editorial board points out that:

Birkenstock-seekers may well prefer to see a Google-generated list of vendors first, instead of clicking around to other sites.

It is undoubtedly true that users “may well prefer to see a Google-generated list of vendors first.” It’s also crucial to understanding the changes in Google’s search results page that have given rise to the current raft of complaints.

As I noted in a Wall Street Journal op-ed two years ago:

It’s a mistake to consider “general search” and “comparison shopping” or “product search” to be distinct markets.

From the moment it was technologically feasible to do so, Google has been adapting its traditional search results—that familiar but long since vanished page of 10 blue links—to offer more specialized answers to users’ queries. Product search, which is what is at issue in the EU complaint, is the next iteration in this trend.

Internet users today seek information from myriad sources: Informational sites (Wikipedia and the Internet Movie Database); review sites (Yelp and TripAdvisor); retail sites (Amazon and eBay); and social-media sites (Facebook and Twitter). What do these sites have in common? They prioritize certain types of data over others to improve the relevance of the information they provide.

“Prioritization” of Google’s own shopping results, however, is the core problem for the Commission:

Google has systematically given prominent placement to its own comparison shopping service: when a consumer enters a query into the Google search engine in relation to which Google’s comparison shopping service wants to show results, these are displayed at or near the top of the search results. (Emphasis in original).

But this sort of prioritization is the norm for all search, social media, e-commerce and similar platforms. And this shouldn’t be a surprise: The value of these platforms to the user is dependent upon their ability to sort the wheat from the chaff of the now immense amount of information coursing about the Web.

As my colleagues and I noted in a paper responding to a methodologically questionable report by Tim Wu and Yelp leveling analogous “search bias” charges in the context of local search results:

Google is a vertically integrated company that offers general search, but also a host of other products…. With its well-developed algorithm and wide range of products, it is hardly surprising that Google can provide not only direct answers to factual questions, but also a wide range of its own products and services that meet users’ needs. If consumers choose Google not randomly, but precisely because they seek to take advantage of the direct answers and other options that Google can provide, then removing the sort of “bias” alleged by [complainants] would affirmatively hurt, not help, these users. (Emphasis added).

And as Josh Wright noted in an earlier paper responding to yet another set of such “search bias” charges (in that case leveled in a similarly methodologically questionable report by Benjamin Edelman and Benjamin Lockwood):

[I]t is critical to recognize that bias alone is not evidence of competitive harm and it must be evaluated in the appropriate antitrust economic context of competition and consumers, rather than individual competitors and websites. Edelman & Lockwood’s analysis provides a useful starting point for describing how search engines differ in their referrals to their own content. However, it is not useful from an antitrust policy perspective because it erroneously—and contrary to economic theory and evidence—presumes natural and procompetitive product differentiation in search rankings to be inherently harmful. (Emphasis added).

We’ll have to see what kind of analysis the Commission relies upon in its decision to reach its conclusion that prioritization is an antitrust problem, but there is reason to be skeptical that it will turn out to be compelling. The Commission states in its PR that:

The evidence shows that consumers click far more often on results that are more visible, i.e. the results appearing higher up in Google’s search results. Even on a desktop, the ten highest-ranking generic search results on page 1 together generally receive approximately 95% of all clicks on generic search results (with the top result receiving about 35% of all the clicks). The first result on page 2 of Google’s generic search results receives only about 1% of all clicks. This cannot just be explained by the fact that the first result is more relevant, because evidence also shows that moving the first result to the third rank leads to a reduction in the number of clicks by about 50%. The effects on mobile devices are even more pronounced given the much smaller screen size.

This means that by giving prominent placement only to its own comparison shopping service and by demoting competitors, Google has given its own comparison shopping service a significant advantage compared to rivals. (Emphasis added).

Whatever truth there is in the claim that placement matters more than relevance in influencing user behavior, the evidence the Commission cites to demonstrate it doesn’t seem applicable to what’s happening on Google’s search results page now.

Most crucially, the evidence offered by the Commission refers only to how placement affects clicks on “generic search results” and glosses over the fact that the “prominent placement” of Google’s “results” is not only a difference in position but also in the type of result offered.

Google Shopping results (like many of its other “vertical results” and direct answers) are very different from the 10 blue links of old. These “universal search” results are, for one thing, actual answers rather than merely links to other sites. They are also more visually rich, and more attractively and clearly displayed.

Ironically, Tim Wu and Yelp use the claim that users click less often on Google’s universal search results to support their contention that increased relevance doesn’t explain Google’s prioritization of its own content. Yet, as we note in our response to their study:

[I]f a consumer is using a search engine in order to find a direct answer to a query rather than a link to another site to answer it, click-through would actually represent a decrease in consumer welfare, not an increase.

In fact, the study fails to incorporate this dynamic even though it is precisely what the authors claim the study is measuring.

Further, as the WaPo editorial intimates, these universal search results (including Google Shopping results) are quite plausibly more valuable to users. As even Tim Wu and Yelp note:

No one truly disagrees that universal search, in concept, can be an important innovation that can serve consumers.

Google sees it exactly this way, of course. Here’s Tim Wu and Yelp again:

According to Google, a principal difference between the earlier cases and its current conduct is that universal search represents a pro-competitive, user-serving innovation. By deploying universal search, Google argues, it has made search better. As Eric Schmidt argues, “if we know the answer it is better for us to answer that question so [the user] doesn’t have to click anywhere, and in that sense we… use data sources that are our own because we can’t engineer it any other way.”

Of course, in this case, one would expect fewer clicks to correlate with higher value to users — precisely the opposite of the claim made by Tim Wu and Yelp, which is the surest sign that their study is faulty.

But the Commission, at least according to the evidence cited in its PR, doesn’t even seem to measure the relative value of the very different presentations of information at all, instead resting on assertions rooted in the irrelevant difference in user propensity to click on generic (10 blue links) search results depending on placement.

Add to this Pinar Akman’s important point that Google Shopping “results” aren’t necessarily search results at all, but paid advertising:

[O]nce one appreciates the fact that Google’s shopping results are simply ads for products and Google treats all ads with the same ad-relevant algorithm and all organic results with the same organic-relevant algorithm, the Commission’s order becomes impossible to comprehend. Is the Commission imposing on Google a duty to treat non-sponsored results in the same way that it treats sponsored results? If so, does this not provide an unfair advantage to comparison shopping sites over, for example, Google’s advertising partners as well as over Amazon, eBay, various retailers, etc…?

Randy Picker also picks up on this point:

But those Google shopping boxes are ads, Picker told me. “I can’t imagine what they’re thinking,” he said. “Google is in the advertising business. That’s how it makes its money. It has no obligation to put other people’s ads on its website.”

The bottom line here is that the WaPo editorial board does a better job characterizing the actual, relevant market dynamics in a single sentence than the Commission seems to have done in its lengthy releases summarizing its decision following seven full years of investigation.

The second point made by the WaPo editorial board to which I want to draw attention is equally important:

Those who aren’t happy anyway have other options. Indeed, the rise of comparison shopping on giants such as Amazon and eBay makes concerns that Google might exercise untrammeled power over e-commerce seem, well, a bit dated…. Who knows? In a few years we might be talking about how Facebook leveraged its 2 billion users to disrupt the whole space.

The Commission dismisses this argument in its Factsheet:

The Commission Decision concerns the effect of Google’s practices on comparison shopping markets. These offer a different service to merchant platforms, such as Amazon and eBay. Comparison shopping services offer a tool for consumers to compare products and prices online and find deals from online retailers of all types. By contrast, they do not offer the possibility for products to be bought on their site, which is precisely the aim of merchant platforms. Google’s own commercial behaviour reflects these differences – merchant platforms are eligible to appear in Google Shopping whereas rival comparison shopping services are not.

But the reality is that “comparison shopping,” just like “general search,” is just one technology among many for serving information and ads to consumers online. Defining the relevant market or limiting the definition of competition in terms of the particular mechanism that Google (or Foundem, or Amazon, or Facebook…) happens to use doesn’t reflect the extent of substitutability between these different mechanisms.

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive. The same goes for comparison shopping.

And the fact that Amazon and eBay “offer the possibility for products to be bought on their site” doesn’t take away from the fact that they also “offer a tool for consumers to compare products and prices online and find deals from online retailers of all types.” Not only do these sites contain enormous amounts of valuable (and well-presented) information about products, including product comparisons and consumer reviews, but they also actually offer comparisons among retailers. Fifty percent of the items sold through Amazon’s platform, for example, are sold by third-party retailers — the same sort of retailers that might also show up on a comparison shopping site.

More importantly, though, as the WaPo editorial rightly notes, “[t]hose who aren’t happy anyway have other options.” Google just isn’t the indispensable gateway to the Internet (and definitely not to shopping on the Internet) that the Commission seems to think.

Today over half of product searches in the US start on Amazon. The majority of web page referrals come from Facebook. Yelp’s most engaged users now access it via its app (which has seen more than 3x growth in the past five years). And a staggering 40 percent of mobile browsing on both Android and iOS now takes place inside the Facebook app.

Then there are “closed” platforms like the iTunes store and innumerable other apps that handle copious search traffic (including shopping-related traffic) but also don’t figure in the Commission’s analysis, apparently.

In fact, billions of users reach millions of companies every day through direct browser navigation, social media, apps, email links, review sites, blogs, and countless other means — all without once touching Google. And so-called “dark social” interactions (email, text messages, and IMs) drive huge amounts of some of the most valuable traffic on the Internet.

All of this, in turn, has led to a competitive scramble to roll out completely new technologies to meet consumers’ informational (and merchants’ advertising) needs. The already-arriving swarm of VR, chatbots, digital assistants, smart-home devices, and more will offer even more interfaces besides Google through which consumers can reach their favorite online destinations.

The point is this: Google’s competitors complaining that the world is evolving around them don’t need to rely on Google. That they may choose to do so does not saddle Google with an obligation to ensure that they can always do so.

Antitrust laws — in Europe, no less than in the US — don’t require Google or any other firm to make life easier for competitors. That’s especially true when doing so would come at the cost of consumer-welfare-enhancing innovations. The Commission doesn’t seem to have grasped this fundamental point, however.

The WaPo editorial board gets it, though:

The immense size and power of all Internet giants are a legitimate focus for the antitrust authorities on both sides of the Atlantic. Brussels vs. Google, however, seems to be a case of punishment without crime.

Since the European Commission (EC) announced its first inquiry into Google’s business practices in 2010, the company has been the subject of lengthy investigations by courts and competition agencies around the globe. Regulatory authorities in the United States, France, the United Kingdom, Canada, Brazil, and South Korea have all opened investigations into similar antitrust claims and rejected them.

And yet the EC marches on, bolstered by Google’s myriad competitors, who continue to agitate for further investigations and enforcement actions, even as we — companies and consumers alike — enjoy the benefits of an increasingly dynamic online marketplace.

Indeed, while the EC has spent more than half a decade casting about for some plausible antitrust claim, the online economy has thundered ahead. Since 2010, Facebook has tripled its active users and multiplied its revenue ninefold; the number of apps available in the Amazon app store has grown from fewer than 4,000 to over 400,000 today; and there are almost 1.5 billion more Internet users globally than there were in 2010. And consumers are increasingly using new and different ways to search for information: Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, and Facebook’s Messenger are a few of the many new innovations challenging traditional search engines.

Advertisers have adapted to this evolution, moving increasingly online, and from search to display ads as mobile adoption has skyrocketed. Social networks like Twitter and Snapchat have come into their own, competing for the same (and ever-increasing) advertising dollars. For marketers, advertising on social networks is now just as important as advertising in search. No wonder e-commerce sales have more than doubled, to almost $2 trillion worldwide; for the first time, consumers purchased more online than in stores this past year.

To paraphrase Louis C.K.: Everything is amazing — and no one at the European Commission is happy.

The EC’s market definition is fatally flawed

Like its previous claims, the Commission’s most recent charges are rooted in the assertion that Google abuses its alleged dominance in “general search” advertising to unfairly benefit itself and to monopolize other markets. But European regulators continue to miss the critical paradigm shift among online advertisers and consumers that has upended this stale view of competition on the Internet. The reality is that Google’s competition may not, and need not, look exactly like Google itself, but it is competition nonetheless. And it’s happening in spades.

The key to understanding why the European Commission’s case is fundamentally flawed lies in an examination of how it defines the relevant market. Through a series of economically and factually unjustified assumptions, the Commission defines search as a distinct market in which Google faces limited competition and enjoys an 80% market share. In other words, for the EC, “general search” apparently means only nominal search providers like Google and Bing; it doesn’t mean companies like Amazon, Facebook and Twitter — Google’s biggest competitors.  

But the reality is that “general search” is just one technology among many for serving information and ads to consumers online. Defining the relevant market or limiting the definition of competition in terms of the particular mechanism that Google happens to use to match consumers and advertisers doesn’t reflect the substitutability of other mechanisms that do the same thing — merely because these mechanisms aren’t called “search.”

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive.

Consumers today are increasingly using platforms like Amazon and Facebook as substitutes for the searches they might have run on Google or Bing. “Closed” platforms like the iTunes store and innumerable apps handle copious search traffic but also don’t figure in the EC’s market calculations. And so-called “dark social” interactions like email, text messages, and IMs drive huge amounts of some of the most valuable traffic on the Internet. This, in turn, has led to a competitive scramble to roll out completely new technologies like chatbots to meet consumers’ informational (and merchants’ advertising) needs.

Properly construed, Google’s market position is precarious

Like Facebook and Twitter (and practically every other Internet platform), advertising is Google’s primary source of revenue. Instead of charging for fancy hardware or offering services to users for a fee, Google offers search, the Android operating system, and a near-endless array of other valuable services for free to users. The company’s very existence relies on attracting Internet users and consumers to its properties in order to effectively connect them with advertisers.

But being an online matchmaker is a difficult and competitive enterprise. Among other things, the ability to generate revenue turns crucially on the quality of the match: All else equal, an advertiser interested in selling widgets will pay more for an ad viewed by a user who can be reliably identified as being interested in buying widgets.

Google’s primary mechanism for attracting users to match with advertisers — general search — is substantially about information, not commerce, and the distinction between product and informational searches is crucially important to understanding Google’s market and the surprisingly limited and tenuous market power it possesses.

General informational queries aren’t nearly as valuable to advertisers: Significantly, only about 30 percent of Google’s searches even trigger any advertising at all. Meanwhile, as of 2012, one-third of product searches started on Amazon while only 13% started on a general search engine.

As economist Hal Singer aptly noted in 2012,

[the data] suggest that Google lacks market power in a critical segment of search — namely, product searches. Even though searches for items such as power tools or designer jeans account for only 10 to 20 percent of all searches, they are clearly some of the most important queries for search engines from a business perspective, as they are far easier to monetize than informational queries like “Kate Middleton.”

While Google Search clearly offers substantial value to advertisers, its ability to continue to do so is precarious when confronted with the diverse array of competitors that, like Facebook, offer a level of granularity in audience targeting that general search can’t match, or that, like Amazon, systematically offer up the most valuable searchers.

In order to compete in this market — one properly defined to include actual competitors — Google has had to constantly innovate to maintain its position. Unlike a complacent monopolist, it has evolved to meet changing consumer demand, shifting technology and inventive competitors. Thus, Google’s search algorithm has changed substantially over the years to make more effective use of the information available to ensure relevance; search results have evolved to give consumers answers to queries rather than just links, and to provide more-direct access to products and services; and, as users have shifted more and more of their time and attention to mobile devices, search has incorporated more-localized results.

Competitors want a free lunch

Critics complain, nevertheless, that these developments have made it harder, in one way or another, for rivals to compete. And the EC has provided a willing ear. According to Commissioner Vestager last week:

Google has come up with many innovative products that have made a difference to our lives. But that doesn’t give Google the right to deny other companies the chance to compete and innovate. Today, we have further strengthened our case that Google has unduly favoured its own comparison shopping service in its general search result pages…. (Emphasis added).

Implicit in this statement is the remarkable assertion that by favoring its own comparison shopping services, Google “den[ies] other companies the chance to compete and innovate.” Even assuming Google does “favor” its own results, this is an astounding claim.

First, it is not a violation of competition law simply to treat competitors’ offerings differently than one’s own, even for a dominant firm. Instead, conduct must actually exclude competitors from the market, without offering countervailing advantages to consumers. But Google’s conduct is not exclusionary, and there are many benefits to consumers.

As it has from the start of its investigations of Google, the EC begins with a flawed assumption: that Google’s competitors both require, and may be entitled to, unfettered access to Google’s property in order to compete. But this is patently absurd. Google is not an essential facility: Billions of users reach millions of companies every day through direct browser navigation, apps, email links, review sites and blogs, and countless other means — all without once touching Google.

Google Search results do not exclude competitors, whether comparison shopping sites or others. For example, 72% of TripAdvisor’s U.S. traffic comes from search, and almost all of that from organic results; other specialized search sites see similar traffic volumes.

More important, however, in addition to continuing to reach rival sites through Google Search, billions of consumers access rival services directly through their mobile apps. In fact, for Yelp,

Approximately 21 million unique devices accessed Yelp via the mobile app on a monthly average basis in the first quarter of 2016, an increase of 32% compared to the same period in 2015. App users viewed approximately 70% of page views in the first quarter and were more than 10 times as engaged as website users, as measured by number of pages viewed. (Emphasis added).

And a staggering 40 percent of mobile browsing is now happening inside the Facebook app, competing with the browsers and search engines pre-loaded on smartphones.

Millions of consumers also navigate directly to Google’s rivals by simply typing a rival’s web address into the browser’s address bar. And as noted above, consumers are increasingly using Google rivals’ new disruptive information engines like Alexa and Siri for their search needs. Even the traditional search engine space is competitive — in fact, according to Wired, as of July 2016:

Microsoft has now captured more than one-third of Internet searches. Microsoft’s transformation from a company that sells boxed software to one that sells services in the cloud is well underway. (Emphasis added).

With such numbers, it’s difficult to see how rivals are being foreclosed from reaching consumers in any meaningful way.

Meanwhile, the benefits to consumers are obvious: Google is directly answering questions for consumers rather than giving them a set of possible links to click through and further search. In some cases its results present entirely new and valuable forms of information (e.g., search trends and structured data); in others they serve to hone searches by suggesting further queries, or to help users determine which organic results (including those of its competitors) may be most useful. And, of course, consumers aren’t forced to endure these innovations if they don’t find them useful, as they can quickly switch to other providers.  

Nostalgia makes for bad regulatory policy

Google is not the unstoppable monopolist of the EU competition regulators’ imagining. Rather, it is a continual innovator, forced to adapt to shifting consumer demand, changing technology, and competitive industry dynamics. And instead of trying to hamstring Google, its competitors (and complainants) must innovate as well if they are to survive.

Dominance in technology markets — especially online — has always been ephemeral. Once upon a time, MySpace, AOL, and Yahoo were the dominant Internet platforms. Kodak, once practically synonymous with photography, let the digital revolution pass it by. The invincible Sony Walkman was upended by mp3s and the iPod. Staid, keyboard-operated Blackberries and Nokias simply couldn’t compete with app-driven, graphical platforms from Apple and Samsung. Even today, startups like Snapchat, Slack, and Spotify gain massive scale and upend entire industries with innovative new technology that can leave less-nimble incumbents in the dustbin of tech history.

Put differently, companies that innovate are able to thrive, while those that remain dependent on yesterday’s technology and outdated business models usually fail — and deservedly so. It should never be up to regulators to pick winners and losers in a highly dynamic and competitive market, particularly if doing so constrains the market’s very dynamism. As Alfonso Lamadrid has pointed out:

It is companies and not competition enforcers which will strive or fail in the adoption of their business models, and it is therefore companies and not competition enforcers who are to decide on what business models to use. Some will prove successful and others will not; some companies will thrive and some will disappear, but with experimentation with business models, success and failure are and have always been part of the game.

In other words, we should not forget that competition law is, or should be, business-model agnostic, and that regulators are – like anyone else – far from omniscient.

Like every other technology company before them, Google and its competitors must be willing and able to adapt in order to keep up with evolving markets — just as for Lewis Carroll’s Red Queen, “it takes all the running you can do, to keep in the same place.” Google confronts a near-constantly evolving marketplace and fierce competition from unanticipated quarters; companies that build their businesses around Google face a near-constantly evolving Google. In the face of such relentless market dynamism, neither consumers nor firms are well served by regulatory policy rooted in nostalgia.  

As the Google antitrust discussion heats up on its way toward some culmination at the FTC, I thought it would be helpful to address some of the major issues raised in the case by taking a look at what’s going on in the market(s) in which Google operates. To this end, I have penned a lengthy document — The Market Realities that Undermine the Antitrust Case Against Google — highlighting some of the most salient aspects of current market conditions and explaining how they fit into the putative antitrust case against Google.

While not dispositive, these “realities on the ground” do strongly challenge the logic and thus the relevance of many of the claims put forth by Google’s critics. The case against Google rests on certain assumptions about how the markets in which it operates function. But these are tech markets, constantly evolving and complex; most assumptions (and even “conclusions” based on data) are imperfect at best. In this case, the conventional wisdom with respect to Google’s alleged exclusionary conduct, the market in which it operates (and allegedly monopolizes), and the claimed market characteristics that operate to protect its position (among other things) should be questioned.

The reality is far more complex, and, properly understood, paints a picture that undermines the basic, essential elements of an antitrust case against the company.

The document first assesses the implications for Market Definition and Monopoly Power of these competitive realities. Of note:

  • Users use Google because they are looking for information — but there are lots of ways to do that, and “search” is not so distinct that a “search market” instead of, say, an “online information market” (or something similar) makes sense.
  • Google competes in the market for targeted eyeballs: a market aimed to offer up targeted ads to interested users. Search is important in this, but it is by no means alone, and there are myriad (and growing) other mechanisms to access consumers online.
  • To define the relevant market in terms of the particular mechanism that prevails to accomplish the matching of consumers and advertisers does not reflect the substitutability of other mechanisms that do the same thing but simply aren’t called “search.”
  • In a world where what prevails today won’t — not “might not,” but won’t — prevail tomorrow, it is the height of folly (and a serious threat to innovation and consumer welfare) to constrain the activities of firms competing in such an environment by pigeonholing the market.
  • In other words, in a proper market, Google looks significantly less dominant. More important, perhaps, as search itself evolves, and as Facebook, Amazon and others get into the search advertising game, Google’s strong position even in the overly narrow “search” market looks far from unassailable.

Next I address Anticompetitive Harm — how the legal standard for antitrust harm is undermined by a proper understanding of market conditions:

  • Antitrust law doesn’t require that Google or any other large firm make life easier for competitors or others seeking to access resources owned by these firms.
  • Advertisers are increasingly targeting not paid search but rather social media to reach their target audiences.
  • But even for those firms that get much or most of their traffic from “organic” search, this fact isn’t an inevitable relic of a natural condition over which only the alleged monopolist has control; it’s a business decision, and neither sensible policy nor antitrust law is set up to protect the failed or faulty competitor from himself.
  • Although it often goes unremarked, paid search’s biggest competitor is almost certainly organic search (and vice versa). Nextag may complain about spending money on paid ads when it prefers organic, but the real lesson here is that the two are substitutes — along with social sites and good old-fashioned email, too.
  • It is incumbent upon critics to accurately assess the “but for” world without the access point in question. Here, Nextag can and does use paid ads to reach its audience (and, it is important to note, did so even before it claims it was foreclosed from Google’s users). But there are innumerable other avenues of access, as well. Some may be “better” than others; some that may be “better” now won’t be next year (think how links by friends on Facebook to price comparisons on Nextag pages could come to dominate its readership).
  • This is progress — creative destruction — not regress, and such changes should not be penalized.

Next I take on the perennial issue of Error Costs and the Risks of Erroneous Enforcement arising from an incomplete and inaccurate understanding of Google’s market:

  • Microsoft’s market position was unassailable . . . until it wasn’t — and even at the time, many could have told you that its perceived dominance was fleeting (and many did).
  • Apple’s success (and the consumer value it has created), while built in no small part on its direct competition with Microsoft and the desktop PCs which run it, was primarily built on a business model that deviated from its once-dominant rival’s — and not on a business model that the DOJ’s antitrust case against the company either facilitated or anticipated.
  • Microsoft and Google’s other critic-competitors have more avenues to access users than ever before. Who cares if users get to these Google-alternatives through their devices instead of a URL? Access is access.
  • It isn’t just monopolists who prefer not to innovate: their competitors do, too. To the extent that Nextag’s difficulties arise from Google innovating, it is Nextag, not Google, that’s working to thwart innovation and fighting against dynamism.
  • Recall the furor around Google’s purchase of ITA, a powerful cautionary tale. As of September 2012, Google ranks 7th in visits among metasearch travel sites, with a paltry 1.4% of such visits. Residing at number one? FairSearch founding member, Kayak, with a whopping 61%. And how about FairSearch member Expedia? Currently, it’s the largest travel company in the world, and it has only grown in recent years.

The next section addresses the essential issue of Barriers to Entry and their absence:

  • One common refrain from Google’s critics is that Google’s access to immense amounts of data used to increase the quality of its targeting presents a barrier to competition that no one else can match, thus protecting Google’s unassailable monopoly. But scale comes in lots of ways.
  • It’s never been the case that a firm has to generate its own inputs into every product it produces — and there is no reason to suggest search/advertising is any different.
  • Meanwhile, Google’s chief competitor, Microsoft, is hardly hurting for data (even, quite creatively, culling data directly from Google itself), despite its claims to the contrary. And while regulators and critics may be looking narrowly and statically at search data, Microsoft is meanwhile sitting on top of copious data from unorthodox — and possibly even more valuable — sources.
  • To defend a claim of monopolization, it is generally required to show that the alleged monopolist enjoys protection from competition through barriers to entry. In Google’s case, the barriers alleged are illusory.

The next section takes on recent claims revolving around The Mobile Market and Google’s position (and conduct) there:

  • If obtaining or preserving dominance is simply a function of cash, Microsoft is sitting on some $58 billion of it that it can devote to that end. And JP Morgan Chase would be happy to help out if it could be guaranteed monopoly returns just by throwing its money at Bing. Like data, capital is widely available, and, also like data, it doesn’t matter if a company gets it from selling search advertising or from selling cars.
  • Advertisers don’t care whether the right (targeted) user sees their ads while playing Angry Birds or while surfing the web on their phone, and users can (and do) seek information online (and thus reveal their preferences) just as well (or perhaps better) through Wikipedia’s app as via a Google search in a mobile browser.
  • Moreover, mobile is already (and increasingly) a substitute for the desktop. Distinguishing mobile search from desktop search is meaningless when users use their tablets at home, perform activities on mobile devices away from home that they once would have performed at home simply because they can, and sometimes search for places to go (for example) on mobile devices while out and sometimes on their computers before they leave.
  • Whatever gains Google may have made in search from its spread into the mobile world are likely to be undermined by the massive growth in social connectivity it has also wrought.
  • Mobile is part of the competitive landscape. All of the innovations in mobile present opportunities for Google and its competitors to best each other, and all present avenues of access for Google and its competitors to reach consumers.

The final section Concludes.

The lessons from all of this? There are two. First, these are dynamic markets, and it is a fool’s errand to identify the power or significance of any player in these markets based on data available today — data that is already out of date between the time it is collected and the time it is analyzed.

Second, each of these developments has presented different, novel and shifting opportunities and challenges for firms interested in attracting eyeballs, selling ad space and data, earning revenue and obtaining market share. To say that Google dominates “search” or “online advertising” misses the mark precisely because there is simply nothing especially antitrust-relevant about either search or online advertising. Because of their own unique products, innovations, data sources, business models, entrepreneurship and organizations, all of these companies have challenged and will continue to challenge the dominant company — and the dominant paradigm — in a shifting and evolving range of markets.

Perhaps most important is this:

Competition with Google may not and need not look exactly like Google itself, and some of this competition will usher in innovations that Google itself won’t be able to replicate. But this doesn’t make it any less competitive.  

Competition need not look identical to be competitive — that’s what innovation is all about. Just ask those famous buggy whip manufacturers.

I will be speaking at a lunch debate in DC hosted by TechFreedom on Friday, September 28, 2012, to discuss the FTC’s antitrust investigation of Google. Details below.

TechFreedom will host a livestreamed, parliamentary-style lunch debate on Friday September 28, 2012, to discuss the FTC’s antitrust investigation of Google.   As the company has evolved, expanding outward from its core search engine product, it has come into competition with a range of other firms and established business models. This has, in turn, caused antitrust regulators to investigate Google’s conduct, essentially questioning whether the company’s success obligates it to treat competitors neutrally. James Cooper, Director of Research and Policy for the Law and Economics Center at George Mason University School of Law, will moderate a panel of four distinguished commenters to discuss the question, “Should the FTC Sue Google Over Search?”  

Arguing “Yes” will be:

Arguing “No” will be:

Friday, September 28, 2012
12:00 p.m. – 2:00 p.m.

The Monocle Restaurant
107 D Street Northeast
Washington, DC 20002

RSVP here. The event will be livestreamed here and you can follow the conversation on Twitter at #GoogleFTC.

For those viewing by livestream, we will watch for questions posted to Twitter at the #GoogleFTC hashtag and endeavor, as possible, to incorporate them into the debate.


Six months may not seem a great deal of time in the general business world, but in the Internet space it’s a lifetime, as new websites, tools and features are introduced every day that change where and how users get and share information. The rise of Facebook is a great example: the social networking platform that didn’t exist in early 2004 filed paperwork last month to launch what is expected to be one of the largest IPOs in history. To put it in perspective, Ford Motor went public more than fifty years after it was founded.

This incredible pace of innovation is seen throughout the Internet, and since Google’s public disclosure of its Federal Trade Commission antitrust investigation just this past June, there have been many dynamic changes to the landscape of the Internet Search market. And as the needs and expectations of consumers continue to evolve, Internet search must adapt – and quickly – to shifting demand.

One noteworthy development was the release of Siri by Apple, which was introduced to the world in late 2011 on the most recent iPhone. Today, many consider it the best voice recognition application in history, but its potential really lies in its ability to revolutionize the way we search the Internet, answer questions and consume information. As Eric Jackson of Forbes noted, in the future it may even be a “Google killer.”

Of this we can be certain: Siri is the latest (though certainly not the last) game changer in Internet search, and it has certainly begun to change people’s expectations about both the process and the results of search. The search box, once needed to connect us with information on the web, is dead or dying. In its place is an application that feels intuitive and personal. Siri has become a near-indispensable entry point, and search engines are merely the back-end. And while still a new feature, Siri’s expansion is inevitable. In fact, it is rumored that Apple is diligently working on Siri-enabled televisions – an entirely new market for the company.

The past six months have also brought the convergence of social media and search engines, as first Bing and more recently Google have incorporated information from a social network into their search results. Again we see technology adapting and responding to the once-unimagined way individuals find, analyze and accept information. Instead of relying on traditional, mechanical search results and the opinions of strangers, this new convergence allows users to find data and receive input directly from people in their social world, offering results curated by friends and associates.

As social networks become more integrated with the Internet at large, reviews from trusted contacts will continue to change the way that users search for information. As David Worlock put it in a post titled, “Decline and Fall of the Google Empire,” “Facebook and its successors become the consumer research environment. Search by asking someone you know, or at least have a connection with, and get recommendations and references which take you right to the place where you buy.” The addition of social data to search results lends a layer of novel, trusted data to users’ results. Search Engine Land’s Danny Sullivan agreed, writing, “The new system will perhaps make life much easier for some people, allowing them to find both privately shared content from friends and family plus material from across the web through a single search, rather than having to search twice using two different systems.” It only makes sense, from a competition perspective, that Google followed suit and recently merged its social and search data in an effort to make search more relevant and personal.

Inevitably, a host of Google’s critics and competitors has cried foul. In fact, as Google has adapted and evolved from its original template to offer users not only links to URLs but also maps, flight information, product pages, videos and now social media inputs, it has met with a curious resistance at every turn. And, indeed, judged against a world in which Internet search is limited to “ten blue links,” with actual content – answers to questions – residing outside of Google’s purview, it has significantly expanded its reach and brought itself (and its large user base) into direct competition with a host of new entities.

But the worldview that judges these adaptations as unwarranted extensions of Google’s platform from its initial baseline, itself merely a function of the relatively limited technology and nascent consumer demand present at the firm’s inception, is dangerously crabbed. By challenging Google’s evolution as “leveraging its dominance” into new and distinct markets, rather than celebrating its efforts (and those of Apple, Bing and Facebook, for that matter) to offer richer, more-responsive and varied forms of information, this view denies the essential reality of technological evolution and exalts outdated technology and outmoded business practices.

And while Google’s forays into the protected realms of others’ business models grab the headlines, it is also feverishly working to adapt its core technology, as well, most recently (and ambitiously) with its “Google Knowledge Graph” project, aimed squarely at transforming the algorithmic guts of its core search function into something more intelligent and refined than its current word-based index permits. In concept, this is, in fact, no different than its efforts to bootstrap social network data into its current structure: Both are efforts to improve on the mechanical process built on Google’s PageRank technology to offer more relevant search results informed by a better understanding of the mercurial way people actually think.

Expanding consumer welfare requires that Google, like its ever-shifting roster of competitors, must be able to keep up with the pace and the unanticipated twists and turns of innovation. As The Economist recently said, “Kodak was the Google of its day,” and the analogy is decidedly apt. Without the drive or ability to evolve and reinvent itself, its products and its business model, Kodak has fallen to its competitors in the marketplace. Once revered as a powerhouse of technological innovation for most of its history, Kodak now faces bankruptcy because it failed to adapt to its own success. Having invented the digital camera, Kodak radically altered the very definition of its market. But by hewing to its own metaphorical ten blue links – traditional film – instead of understanding that consumer photography had come to mean something dramatically different, Kodak consigned itself to failure.

Like Kodak and every other technology company before it, Google must be willing and able to adapt and evolve; just as for Lewis Carroll’s Red Queen, “here it takes all the running you can do, to keep in the same place.” Neither consumers nor firms are well served by regulatory policy informed by nostalgia. Even more so than Kodak, Google confronts a near-constantly evolving marketplace and fierce competition from unanticipated quarters. If regulators force it to stop running, the market will simply pass it by.

[Cross posted at Forbes]

In my last post, I discussed Edelman & Lockwood’s (E&L’s) attempt to catch search engines in the act of biasing their results—as well as their failure to actually do so.  In this post, I present my own results from replicating their study.  Unlike E&L, I find that Bing is consistently more biased than Google, for reasons discussed further below, although neither engine references its own content as frequently as E&L suggest.

I ran searches for E&L’s original 32 non-random queries using three different search engines—Google, Bing, and Blekko—between June 23 and July 5 of this year.  This replication is useful, as search technology has changed dramatically since E&L recorded their results in August 2010.  Bing now powers Yahoo, and Blekko has had more time to mature and enhance its results.  Blekko serves as a helpful “control” engine in my study, as it is totally independent of Google and Microsoft, and so has no incentive to refer to Google or Microsoft content unless it is actually relevant to users.  In addition, because Blekko’s model is significantly different from Google’s and Microsoft’s, if results on all three engines agree that specific content is highly relevant to the user query, it lends significant credibility to the notion that the content places well on the merits rather than being attributable to bias or other factors.

How Do Search Engines Rank Their Own Content?

Focusing solely upon the first position, Google refers to its own products or services when no other search engine does in 21.9% of queries; in another 21.9% of queries, both Google and at least one other search engine rival (i.e. Bing or Blekko) refer to the same Google content with their first links.

But restricting focus upon the first position is too narrow.  It would be a mistake to assume that every instance in which Google or Bing ranks its own content first, and rivals do not, amounts to bias; such a restrictive definition would include cases in which all three search engines rank the same content prominently (agreeing that it is highly relevant), although not all in the first position.

The entire first page of results provides a more informative comparison.  I find that Google and at least one other engine return Google content on the first page of results in 7% of the queries.  Google refers to its own content on the first page of results without agreement from either rival search engine in only 7.9% of the queries.  Meanwhile, Bing and at least one other engine refer to Microsoft content in 3.2% of the queries.  Bing references Microsoft content without agreement from either Google or Blekko in 13.2% of the queries:

This evidence indicates that Google’s ranking of its own content differs significantly from its rivals in only 7.9% of queries, and that when Google ranks its own content prominently it is generally perceived as relevant.  Further, these results suggest that Bing’s organic search results are significantly more biased in favor of Microsoft content than Google’s search results are in favor of Google’s content.
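The comparisons behind these figures are simple per-query tallies. The sketch below is purely illustrative — the per-query observations are invented, not the actual study data — but it shows how the “with agreement” and “without agreement” percentages are computed:

```python
# Illustrative tally of own-content placement, using hypothetical
# per-query observations (NOT the actual study data). For each query we
# record whether each engine returns Google content on its first page.
queries = [
    # (google_shows_google, bing_shows_google, blekko_shows_google)
    (True, True, False),
    (True, False, False),
    (False, False, False),
    (True, True, True),
]

total = len(queries)
# Google shows its own content AND at least one rival agrees:
agreed = sum(1 for g, b, k in queries if g and (b or k))
# Google shows its own content with NO rival agreement:
unilateral = sum(1 for g, b, k in queries if g and not (b or k))

print(f"Google + at least one rival: {100 * agreed / total:.1f}%")
print(f"Google alone: {100 * unilateral / total:.1f}%")
```

Only the second figure — own-content placement that no rival shares — is even a candidate for “bias” on E&L’s own terms; the first reflects cross-engine agreement that the content is relevant.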

Examining Search Engine “Bias” on Google

The following table presents the percentages of queries for which Google’s ranking of its own content differs significantly from its rivals’ ranking of that same content.

Note that percentages below 50 in this table indicate that rival search engines generally see the referenced Google content as relevant and independently believe that it should be ranked similarly.

So when Google ranks its own content highly, at least one rival engine typically agrees with this ranking; for example, when Google places its own content in its Top 3 results, at least one rival agrees with this ranking in over 70% of queries.  Bing especially agrees with Google’s rankings of Google content within its Top 3 and 5 results, failing to include Google content that Google ranks similarly in only a little more than a third of queries.

Examining Search Engine “Bias” on Bing

Bing refers to Microsoft content in its search results far more frequently than its rivals reference the same Microsoft content.  For example, Bing’s top result references Microsoft content for 5 queries, while neither Google nor Blekko ever ranks Microsoft content in the first position:

This table illustrates the significant discrepancies between Bing’s treatment of its own Microsoft content relative to Google and Blekko.  Neither rival engine refers to Microsoft content Bing ranks within its Top 3 results; Google and Blekko do not include any Microsoft content Bing refers to on the first page of results in nearly 80% of queries.

Moreover, Bing frequently ranks Microsoft content highly even when rival engines do not refer to the same content at all in the first page of results.  For example, of the 5 queries for which Bing ranks Microsoft content in its top result, Google refers to only one of these 5 within its first page of results, while Blekko refers to none.  Even when comparing results across each engine’s full page of results, Google and Blekko only agree with Bing’s referral of Microsoft content in 20.4% of queries.

Although there are not enough Bing data to test results in the first position in E&L’s sample, Microsoft content appears as results on the first page of a Bing search about 7 times more often than Microsoft content appears on the first page of rival engines.  Also, Google is much more likely to refer to Microsoft content than Blekko, though both refer to significantly less Microsoft content than Bing.

A Closer Look at Google v. Bing

On E&L’s own terms, Bing results are more biased than Google results; rivals are more likely to agree with Google’s algorithmic assessment (than with Bing’s) that its own content is relevant to user queries.  Bing refers to Microsoft content that other engines do not rank at all more often than Google refers to its own content without any agreement from rivals.  Figures 1 and 2 display the same data presented above in order to facilitate direct comparisons between Google and Bing.

As Figures 1 and 2 illustrate, Bing search results for these 32 queries are more frequently “biased” in favor of its own content than are Google’s.  The bias is greatest for the Top 1 and Top 3 search results.

My study finds that Bing exhibits far more “bias” than E&L identify in their earlier analysis.  For example, in E&L’s study, Bing does not refer to Microsoft content at all in its Top 1 or Top 3 results; moreover, Bing refers to Microsoft content within its entire first page 11 times, while Google and Yahoo refer to Microsoft content 8 and 9 times, respectively.  Most likely, the significant increase in Bing’s “bias” differential is largely a function of Bing’s introduction of localized and personalized search results, and represents serious competitive efforts on Bing’s part.

Again, it’s important to stress E&L’s limited and non-random sample, and to emphasize the danger of making strong inferences about the general nature or magnitude of search bias based upon these data alone.  However, the data indicate that Google’s own-content bias is relatively small even in a sample collected precisely to focus upon the queries most likely to generate it.  In fact—as I’ll discuss in my next post—own-content bias occurs even less often in a more representative sample of queries, strongly suggesting that such bias does not raise the competitive concerns attributed to it.

Last week I linked to my new study on “search bias.”  At the time I noted I would have a few blog posts in the coming days discussing the study.  This is the first of those posts.

A lot of the frenzy around Google turns on “search bias,” that is, instances when Google references its own links or its own content (such as Google Maps or YouTube) in its search results pages.  Some search engine critics condemn such references as inherently suspect and almost by their very nature harmful to consumers.  Yet these allegations suffer from several crucial shortcomings.  As I’ve noted (see, e.g., here and here), these naked assertions of discrimination are insufficient to state a cognizable antitrust claim, divorced as they are from consumer welfare analysis.  Indeed, such “discrimination” (some would call it “vertical integration”) has a well-recognized propensity to yield either pro-competitive or competitively neutral outcomes, rather than concrete consumer welfare losses.  Moreover, because search engines exist in an incredibly dynamic environment, marked by constant innovation and fierce competition, we would expect different engines, utilizing different algorithms and appealing to different consumer preferences, to emerge.  So when search engines engage in product differentiation of this sort, there is no reason to be immediately suspicious of these business decisions.

No reason to be immediately suspicious – but there could, conceivably, be a problem.  If there is, we would want to see empirical evidence of it—of both the existence of bias, as well as the consumer harm emanating from it.  But one of the most notable features of this debate is the striking lack of empirical data.  Surprisingly little research has been done in this area, despite frequent assertions that own-content bias is commonly practiced and poses a significant threat to consumers (see, e.g., here).

My paper is an attempt to rectify this.  In the paper, I investigate the available data to determine whether and to what extent own-content bias actually occurs, by analyzing and replicating a study by Ben Edelman and Ben Lockwood (E&L) and conducting my own study of a larger, randomized set of search queries.

In this post I discuss my analysis and critique of E&L; in future posts I’ll present my own replication of their study, as well as the results of my larger study of 1,000 random search queries.  Finally, I’ll analyze whether any of these findings support anticompetitive foreclosure theories or are otherwise sufficient to warrant antitrust intervention.

E&L “investigate . . . [w]hether search engines’ algorithmic results favor their own services, and if so, which search engines do most, to what extent, and in what substantive areas.”  Their approach is to measure the difference in how frequently search engines refer to their own content relative to how often their rivals do so.

One note at the outset:  While this approach provides useful descriptive facts about the differences between how search engines link to their own content, it does little to inform antitrust analysis, because Edelman and Lockwood begin from the rather odd premise that variation among differentiated search engines is itself a puzzle that casts suspicion on the practice; indeed, they claim that “it is hard to see why results would vary . . . across search engines.”  This assertion, of course, is simply absurd.  Indeed, Danny Sullivan provides a nice critique of this claim:

It’s not hard to see why search engine results differ at all.  Search engines each use their own “algorithm” to cull through the pages they’ve collected from across the web, to decide which pages to rank first . . . . Google has a different algorithm than Bing.  In short, Google will have a different opinion than Bing.  Opinions in the search world, as with the real world, don’t always agree.

Moreover, this assertion completely discounts both the vigorous competitive product differentiation that occurs in nearly all modern product markets as well as the obvious selection effects at work in own-content bias (Google users likely prefer Google content).  This combination detaches E&L’s analysis from the consumer welfare perspective, and thus antitrust policy relevance, despite their claims to the contrary (and the fact that their results actually exhibit very little bias).

Several methodological issues undermine the policy relevance of E&L’s analysis.  First, they hand select 32 search queries and execute searches on Google, Bing, Yahoo, AOL and Ask.  This hand-selected non-random sample of 32 search queries cannot generate reliable inferences regarding the frequency of bias—a critical ingredient to understanding its potential competitive effects.  Indeed, E&L acknowledge their queries are chosen precisely because they are likely to return results including Google content (e.g., email, images, maps, video, etc.).

E&L analyze the top three organic search results for each query on each engine.  They find that 19% of all results across all five search engines refer to content affiliated with one of them.  They focus upon the first three organic results and report that Google refers to its own content in the first (“top”) position about twice as often as Yahoo and Bing refer to Google content in this position.  Additionally, they note that Yahoo is more biased than Google when evaluating the first page rather than only the first organic search result.

E&L also offer a strained attempt to deal with the possibility of competitive product differentiation among search engines.  They examine differences among search engines’ references to their own content by “compar[ing] the frequency with which a search engine links to its own pages, relative to the frequency with which other search engines link to that search engine’s pages.”  However, their evidence undermines claims that Google’s own-content bias is significant and systematic relative to its rivals’.  In fact, almost zero evidence of statistically significant own-content bias by Google emerges.

E&L find, in general, Google is no more likely to refer to its own content than other search engines are to refer to that same content, and across the vast majority of their results, E&L find Google search results are not statistically more likely to refer to Google content than rivals’ search results.

The same data can be examined to test the likelihood that a search engine will refer to content affiliated with a rival search engine.  Rather than exhibiting bias in favor of an engine’s own content, a “biased” search engine might conceivably be less likely to refer to content affiliated with its rivals.  The table below reports the likelihood (in odds ratios) that a search engine’s content appears in a rival engine’s results.

The first two columns of the table demonstrate that both Google and Yahoo content are referred to in the first search result less frequently in rivals’ search results than in their own.  Although Bing does not have enough data for robust analysis of results in the first position in E&L’s original analysis, the next three columns in Table 1 illustrate that all three engines’ (Google, Yahoo, and Bing) content appears less often on the first page of rivals’ search results than on their own search engine.  However, only Yahoo’s results differ significantly from 1.  As between Google and Bing, the results are notably similar.
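For readers unfamiliar with the statistic, an odds ratio here compares the odds that an engine’s own first-page results include its affiliated content against the odds that a rival’s results do; a value above 1 means the engine favors its own content relative to the rival, and a value near 1 means no meaningful difference. A minimal sketch of the calculation, with invented counts (not E&L’s data):

```python
# Hypothetical odds-ratio calculation (invented counts, NOT E&L's data):
# how much more likely is Yahoo content to appear in Yahoo's own
# first-page results than in a rival engine's?
def odds(hits: int, total: int) -> float:
    """Odds = p / (1 - p) for proportion p = hits / total."""
    p = hits / total
    return p / (1 - p)

own_hits, own_total = 9, 32      # queries where Yahoo shows Yahoo content
rival_hits, rival_total = 4, 32  # queries where a rival shows Yahoo content

odds_ratio = odds(own_hits, own_total) / odds(rival_hits, rival_total)
print(f"odds ratio: {odds_ratio:.2f}")
```

Whether such a ratio “differs significantly from 1” is then a statistical question that depends on the sample size, which is why only Yahoo’s figure clears that bar in the table above.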

E&L also make a limited attempt to consider the possibility that favorable placement of a search engine’s own content is a response to user preferences rather than anticompetitive motives.  Using click-through data, they find, unsurprisingly, that the first search result tends to receive the most clicks (72%, on average).  They then identify one search term for which they believe bias plays an important role in driving user traffic.  For the search query “email,” Google ranks its own Gmail first and Yahoo Mail second; however, E&L also find that Gmail receives only 29% of clicks while Yahoo Mail receives 54%.  E&L claim that this finding strongly indicates that Google is engaging in conduct that harms users and undermines their search experience.

However, from a competition analysis perspective, that inference is not sound.  Indeed, the fact that the second-listed Yahoo Mail link received the majority of clicks demonstrates precisely that Yahoo was not competitively foreclosed from access to users.  Taken collectively, E&L are not able to muster evidence of potential competitive foreclosure.

While it’s important to have an evidence-based discussion surrounding search engine results and their competitive implications, it’s also critical to recognize that bias alone is not evidence of competitive harm.  Indeed, any identified bias must be evaluated in the appropriate antitrust economic context of competition and consumers, rather than individual competitors and websites.  E&L’s analysis provides a useful starting point for describing how search engines differ in their referrals to their own content.  But, taken at face value, their results actually demonstrate little or no evidence of bias—let alone that the little bias they do find is causing any consumer harm.

As I’ll discuss in coming posts, evidence gathered since E&L conducted their study further suggests their claims that bias is prevalent, inherently harmful, and sufficient to warrant antitrust intervention are overstated and misguided.

Our search neutrality paper has received some recent attention.  While the initial response from Gordon Crovitz in the Wall Street Journal was favorable, critics are now voicing their responses.  Although we appreciate FairSearch's attempt to engage with our paper's central claims, its response is really little more than an extended non sequitur and fails to contribute meaningfully to the debate.

Unfortunately, FairSearch grossly misstates our arguments and, in the process, basic principles of antitrust law and economics.  Accordingly, we offer a brief reply to correct a few of the most critical flaws, point out several quotes in our paper that FairSearch must have overlooked when they were characterizing our argument, and set straight FairSearch’s various economic and legal misunderstandings.

We want to begin by restating the simple claims that our paper does—and does not—make.

Our fundamental argument is that claims that search discrimination is anticompetitive are properly treated skeptically because:  (1) discrimination (that is, presenting or ranking a search engine’s own or affiliated content more prominently than its rivals’ in response to search queries) arises from vertical integration in the search engine market (i.e., Google responds to a query by providing not only “10 blue links” but also perhaps a map or video created by Google or previously organized on a Google-affiliated site (e.g., YouTube)); (2) both economic theory and evidence demonstrate that such integration is generally pro-competitive; and (3) in Google’s particular market, evidence of intense competition and constant innovation abounds, while evidence of harm to consumers is entirely absent.  In other words, it is much more likely than not that search discrimination is pro-competitive rather than anticompetitive, and doctrinal error cost concerns accordingly counsel great hesitation in any antitrust intervention, administrative or judicial.  As we will discuss, these are claims that FairSearch’s lawyers are quite familiar with.

FairSearch, however, grossly mischaracterizes these basic points, asserting instead that we claim

 “that even if Google does [manipulate its search results], this should be immune from antitrust enforcement due to the difficulty of identifying ‘bias’ and the risks of regulating benign conduct.”

This statement is either intentionally deceptive or betrays a shocking misunderstanding of our central claim, for at least two reasons: (1) we never advocate complete antitrust immunity, and (2) it trivializes the very real and universally accepted difficulty of distinguishing between pro- and anticompetitive conduct.

First, we acknowledge the obvious point that, as a theoretical matter, discrimination can amount to an antitrust violation under certain circumstances—not the least important of which is proof of actual competitive harm.  To quote ourselves:

The key question is whether such a bias benefits consumers or inflicts competitive harm.  Economic theory has long understood the competitive benefits of such vertical integration; modern economic theory also teaches that, under some conditions, vertical integration and contractual arrangements can create a potential for competitive harm that must be weighed against those benefits . . . .  From a policy perspective, the issue is whether some sort of ex ante blanket prohibition or restriction on vertical integration is appropriate instead of an ex post, fact-intensive evaluation on a case-by-case basis, such as under antitrust law. (Manne and Wright, 2011) (emphasis added).

This is not much of a concession.  While FairSearch tries to move the goalposts by focusing on a straw man proposition that search bias is categorically immune from antitrust scrutiny, this sleight of hand doesn’t accomplish much and reveals what FairSearch is missing.   After all, consider that almost every single form of business conduct can be an antitrust violation under some set of conditions!  The antitrust laws apply in principle to (that is, do not categorically make immune) horizontal mergers, vertical mergers, long-term contracts, short-term contracts, exclusive dealing, partial exclusive dealing, burning down a rival’s factory, dealing with rivals, refusing to deal with rivals, boycotts, tying contracts, overlapping boards, and all manner of pricing practices.  Indeed, it is hard to find categories of business conduct that are outright immune from the antitrust laws.  So—we agree:  “Search bias” can conceivably be anticompetitive.  Unfortunately for FairSearch, we never said otherwise and it’s not a very interesting point to discuss.

With that point firmly established, one can return focus to the topic FairSearch painstakingly avoids throughout its response and on which we think the issue really does (and should) turn: Where’s the proof of consumer harm?

Continue Reading…

In the last post, I discussed possible characterizations of Google’s conduct for purposes of antitrust analysis.  A firm grasp of the economic implications of the different conceptualizations of Google’s conduct is a necessary – but not sufficient – precondition for appreciating the inconsistencies underlying the proposed remedies for Google’s alleged competitive harms.  In this post, I want to turn to a different question: assuming arguendo a competitive problem associated with Google’s algorithmic rankings – an assumption I do not think is warranted, supported by the evidence, or even consistent with the relevant literature on vertical contractual relationships – how might antitrust enforcers conceive of an appropriate and consumer-welfare-conscious remedy?  Antitrust agencies, economists, and competition policy scholars have all appropriately stressed the importance of considering a potential remedy prior to, rather than following, an antitrust investigation; this is good advice not only because of the benefits of thinking rigorously and realistically about remedial design, but also because clear thinking about remedies upfront might illuminate something about the competitive nature of the conduct at issue.

Somewhat ironically, former DOJ Antitrust Division Assistant Attorney General Tom Barnett – now counsel for Expedia, one of the most prominent would-be antitrust plaintiffs against Google – warned (in his prior, rather than his present, role) that “[i]mplementing a remedy that is too broad runs the risk of distorting markets, impairing competition, and prohibiting perfectly legal and efficient conduct,” and that “[b]y forcing a firm to share the benefits of its investments and relieving its rivals of the incentive to develop comparable assets of their own, access remedies can reduce the competitive vitality of an industry.”  Barnett also noted that “[t]here seems to be consensus that we should prohibit unilateral conduct only where it is demonstrated through rigorous economic analysis to harm competition and thereby harm consumer welfare.”  Well said.  With these warnings well in hand, we must turn to two interrelated concerns necessary for appreciating the potential consequences of a remedy for Google’s conduct: (1) the menu of potential remedies available for an antitrust suit against Google, and (2) the efficacy of these potential remedies from a consumer-welfare, rather than firm-welfare, perspective.

What are the potential remedies?

The burgeoning search neutrality crowd presents no lack of proposed remedies; indeed, if there is one area in which Google’s critics have proven themselves prolific, it is in their constant ingenuity in conceiving ways to bring governmental intervention to bear upon Google.  Professor Ben Edelman has usefully aggregated and discussed several of the alternatives, four of which bear mention:  (1) a la Frank Pasquale and Oren Bracha, the creation of a “Federal Search Commission,” (2) a la the regulations surrounding the Customer Reservation Systems (CRS) in the 1990s, a prohibition on rankings that order listings “us[ing] any factors directly or indirectly relating to” whether the search engine is affiliated with the link, (3) mandatory disclosure of all manual adjustments to algorithmic search, and (4) transfer of the “browser choice” menu of the EC Microsoft litigation to the Google search context, requiring Google to offer users a choice of five or so rivals whenever a user enters particular queries.

Geoff and I discuss several of these potential remedies in our paper, If Search Neutrality is the Answer, What’s the Question?  It suffices to say that we find significant consumer welfare threats from the creation of a new regulatory agency designed to impose “neutral” search results.  For now, I prefer to focus on the second of these remedies – analogized to CRS technology in the 1990s – here; Professor Edelman not only explains proposed CRS-inspired regulation, but does so in effusive terms:

A first insight comes from recognizing that regulators have already – successfully! – addressed the problem of bias in information services. One key area of intervention was customer reservation systems (CRS’s), the computer networks that let travel agents see flight availability and pricing for various major airlines. Three decades ago, when CRS’s were largely owned by the various airlines, some airlines favored their own flights. For example, when a travel agent searched for flights through Apollo, a CRS then owned by United Airlines, United flights would come up first – even if other carriers offered lower prices or nonstop service. The Department of Justice intervened, culminating in rules prohibiting any CRS owned by an airline from ordering listings “us[ing] any factors directly or indirectly relating to carrier identity” (14 CFR 255.4). Certainly one could argue that these rules were an undue intrusion: A travel agent was always free to find a different CRS, and further additional searches could have uncovered alternative flights. Yet most travel agents hesitated to switch CRS’s, and extra searches would be both time-consuming and error-prone. Prohibiting biased listings was the better approach.

The same principle applies in the context of web search. On this theory, Google ought not rank results by any metric that distinctively favors Google. I credit that web search considers myriad web sites – far more than the number of airlines, flights, or fares. And I credit that web search considers more attributes of each web page – not just airfare price, transit time, and number of stops. But these differences only grant a search engine more room to innovate. These differences don’t change the underlying reasoning, so compelling in the CRS context, that a system provider must not design its rules to systematically put itself first.

The analogy is a superficially attractive one, and we’re tempted to entertain it, so far as it goes.  Organizational questions inhere in both settings, and similarly so: both flights and search results must be ordinally ranked, and before CRS regulation, a host airline’s flights often appeared before those of rival airlines.  Indeed, we will take Edelman’s analogy at face value.  Problematically for Professor Edelman and others pushing the CRS-style remedy, a fuller exploration of CRS regulation reveals that this market intervention, put simply, wasn’t so successful after all.  Not for consumers anyway.  It did, however, generate (economically) predictable consequences: reduced consumer welfare through reduced innovation.  Let’s explore the consequences of Edelman’s analogy further below the fold.

Continue Reading…

Josh and I have just completed a white paper on search neutrality/search bias and the regulation of search engines.  The paper is this year’s first in the ICLE Antitrust & Consumer Protection White Paper Series:

If Search Neutrality Is the Answer, What’s the Question?

Geoffrey A. Manne

(Lewis & Clark Law School and ICLE)


Joshua D. Wright

(George Mason Law School & Department of Economics and ICLE)

In this paper we evaluate both the economic and non-economic costs and benefits of search bias. In Part I we define search bias and search neutrality, terms that have taken on any number of meanings in the literature, and survey recent regulatory concerns surrounding search bias. In Part II we discuss the economics and technology of search. In Part III we evaluate the economic costs and benefits of search bias. We demonstrate that search bias is the product of the competitive process and link the search bias debate to the economic and empirical literature on vertical integration and the generally efficient and pro-competitive incentives for a vertically integrated firm to discriminate in favor of its own content. Building upon this literature and its application to the search engine market, we conclude that neither an ex ante regulatory restriction on search engine bias nor the imposition of an antitrust duty to deal upon Google would benefit consumers. In Part V we evaluate the frequent claim that search engine bias causes other serious, though less tangible, social and cultural harms. As with the economic case for search neutrality, we find these non-economic justifications for restricting search engine bias unconvincing, and particularly susceptible to the well-known Nirvana Fallacy of comparing imperfect real-world institutions with romanticized and unrealistic alternatives.

Search bias is not a function of Google’s large share of overall searches. Rather, it is a feature of competition in the search engine market, as evidenced by the fact that its rivals also exercise editorial and algorithmic control over what information is provided to consumers and in what manner. Consumers rightly value competition between search engine providers on this margin; this fact alone suggests caution in regulating search bias at all, much less with an ex ante regulatory scheme that defines the margins upon which search providers can compete. The strength of the economic theory and evidence demonstrating that regulatory restrictions on vertical integration are costly to consumers, impede innovation, and discourage experimentation in a dynamic marketplace supports the conclusion that neither regulation of search bias nor antitrust intervention can be justified on economic terms. Search neutrality advocates touting the non-economic virtues of their proposed regime should bear the burden of demonstrating that those virtues exist beyond the Nirvana Fallacy of comparing an imperfect private actor to a perfect government decision-maker, and further, that any such benefits outweigh the economic costs.