My last two posts on search bias (here and here) have analyzed and critiqued Edelman & Lockwood’s small study on search bias. This post extends this same methodology and analysis to a random sample of 1,000 Google queries (released by AOL in 2006), to develop a more comprehensive understanding of own-content bias. As I’ve stressed, these analyses provide useful—but importantly limited—glimpses into the nature of the search engine environment. While these studies are descriptively helpful, actual harm to consumer welfare must always be demonstrated before cognizable antitrust injuries arise. And naked identifications of own-content bias simply do not inherently translate to negative effects on consumers (see, e.g., here and here for more comprehensive discussion).
Now that that’s settled, let’s jump into the results of the 1,000 random search query study.
How Do Search Engines Rank Their Own Content?
Consistent with our earlier analysis, a starting point for measuring differentiation among search engines with respect to placement of their own content is to compare how a search engine ranks its own content relative to how other engines rank that same content (e.g., comparing how Google ranks “Google Maps” relative to how Bing or Blekko rank it). Restricting attention exclusively to the first or “top” position, I find that Google simply does not refer to its own content in over 90% of queries. Similarly, Bing does not reference Microsoft content in 85.4% of queries. Google refers to its own content in the first position when other search engines do not in only 6.7% of queries, while Bing does so over twice as often, referencing Microsoft content that no other engine references in the first position in 14.3% of queries. The following two charts illustrate the percentage of Google or Bing first-position results, respectively, dedicated to own content across search engines.
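To make the metric concrete, here is a minimal sketch, in Python, of how these first-position tallies could be computed from a table of crawled results. The `results` table, the domain affiliation lists, and the `is_affiliated` helper are hypothetical stand-ins for the kind of data the study works from, not the actual dataset or code.

```python
# Minimal sketch (not the study's actual code): tallying first-position
# own-content references from a hypothetical crawl of three engines.
import pandas as pd

# Hypothetical long-format data: one row per (query, engine, rank, url).
results = pd.DataFrame({
    "query":  ["maps", "maps", "maps", "weather", "weather", "weather"],
    "engine": ["google", "bing", "blekko"] * 2,
    "rank":   [1, 1, 1, 1, 1, 1],
    "url":    ["maps.google.com", "bing.com/maps", "mapquest.com",
               "weather.com", "msn.com/weather", "weather.com"],
})

# Assumed affiliation lists -- illustrative only.
GOOGLE_DOMAINS = ("google.com", "youtube.com")
MICROSOFT_DOMAINS = ("bing.com", "msn.com", "live.com")

def is_affiliated(url, domains):
    """Crude check for whether a result URL belongs to an affiliated domain."""
    return any(d in url for d in domains)

first = results[results["rank"] == 1]

# Share of queries where Google's first result is Google-affiliated content.
g_first = first[first["engine"] == "google"].copy()
g_first["own"] = g_first["url"].apply(lambda u: is_affiliated(u, GOOGLE_DOMAINS))
print("Google own-content share (rank 1):", g_first["own"].mean())

# Share of queries where Google places its own content first while no rival
# engine places Google-affiliated content in its own first position.
rivals = first[first["engine"] != "google"].copy()
rivals["google_content"] = rivals["url"].apply(lambda u: is_affiliated(u, GOOGLE_DOMAINS))
rival_has = rivals.groupby("query")["google_content"].any()
exclusive = g_first.set_index("query")["own"] & ~rival_has
print("Google-only own-content share (rank 1):", exclusive.mean())
```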
The most striking aspect of these results is the small fraction of queries for which placement of own content is relevant at all. The results are similar when I expand consideration to the entire first page of results; interestingly, however, while the overall levels of own-content bias are comparable across the full first page, Bing is far more likely than Google to reference its own content in its very first results position.
Examining Search Engine “Bias” on Google
Two distinct differences between the results of this larger study and my replication of Edelman & Lockwood emerge: (1) Google and Bing refer to their own content in a significantly smaller percentage of cases here than in the non-random sample; and (2) in general, when Google or Bing does rank its own content highly, rival engines are unlikely to similarly rank that same content.
The following table reports the percentages of queries for which Google’s ranking of its own content and its rivals’ rankings of that same content differ significantly. When Google refers to its own content within its Top 5 results, at least one other engine similarly ranks this content for only about 5% of queries.
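The “agreement” measure behind this table can be sketched the same way. Reusing the toy `results` table and `is_affiliated` helper from the snippet above (again, hypothetical stand-ins rather than the study’s actual data or code), one might compute, among queries where Google places its own content in its Top 5, the share for which at least one rival engine ranks that same URL in its Top 5:

```python
# Minimal sketch, continuing the toy example above: among queries where Google
# places its own content in its Top 5, how often does at least one rival
# engine rank that same URL in its Top 5?
def top_k_urls(results, engine, k=5):
    sub = results[(results["engine"] == engine) & (results["rank"] <= k)]
    return sub.groupby("query")["url"].apply(set)   # query -> set of URLs

google_top5 = top_k_urls(results, "google")
agreement = []
for query, urls in google_top5.items():
    own_urls = {u for u in urls if is_affiliated(u, GOOGLE_DOMAINS)}
    if not own_urls:
        continue  # own-content placement is not relevant for this query
    rival_urls = set()
    for rival in ("bing", "blekko"):
        rival_urls |= top_k_urls(results, rival).get(query, set())
    agreement.append(bool(own_urls & rival_urls))

if agreement:
    print("Share of queries where a rival agrees:", sum(agreement) / len(agreement))
```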
The following table presents the likelihood that Google content will appear in a Google search, relative to searches conducted on rival engines (reported in odds ratios).
The first and third columns report results indicating that Google-affiliated content is more likely to appear in a search executed on Google than on rival engines. Google is approximately 16 times more likely to refer to its own content on its first page than is any other engine. Bing and Blekko are both significantly less likely to refer to Google content in their first result or on their first page than Google is to refer to Google content within these same parameters. In each iteration, Bing is more likely to refer to Google content than is Blekko, and in the case of the first result, Bing is much more likely to do so. Again, to be clear, the fact that Bing is more likely to rank its own content does not suggest that the practice is problematic. Quite the contrary: the demonstration that firms both with and without market power in search (to the extent that is a relevant antitrust market) engage in similar conduct suggests that there must be efficiency explanations for the practice. The standard response, of course, is that the competitive implications of a practice are different when a firm with market power engages in it. That’s not exactly right. It is true that firms with market power can engage in conduct that gives rise to potential antitrust problems when the same conduct from a firm without market power would not; however, when firms without market power engage in the same business practice, it demands that antitrust analysts seriously consider the efficiency implications of that practice. In other words, there is nothing in the mantra that things are “different” when larger firms do them that undercuts potential efficiency explanations.
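For readers curious how odds ratios like these are typically produced: the sketch below runs a logistic regression of a binary “Google content appears on the first page” indicator on engine dummies and exponentiates the coefficients. The panel layout, the baseline category, and the made-up rates are my assumptions for demonstration purposes; they are not the study’s specification or its data.

```python
# Minimal sketch (assumed specification, not the study's code): logistic
# regression of a binary "Google content on the first page" indicator on
# engine dummies, with exponentiated coefficients read as odds ratios.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical query-by-engine panel with illustrative (made-up) rates.
rng = np.random.default_rng(0)
panel = pd.DataFrame({
    "engine": np.repeat(["google", "bing", "blekko"], 1000),
    "google_content": np.concatenate([
        rng.binomial(1, 0.30, 1000),
        rng.binomial(1, 0.05, 1000),
        rng.binomial(1, 0.03, 1000),
    ]),
})

# Bing as the baseline category, so the Google coefficient captures how much
# more likely Google-affiliated content is to appear on Google than on Bing.
model = smf.logit(
    "google_content ~ C(engine, Treatment(reference='bing'))", data=panel
).fit(disp=False)

odds_ratios = np.exp(model.params)
print(odds_ratios)  # values above 1: more likely than the Bing baseline
```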
Examining Search Engine “Bias” on Bing
For queries within the larger sample, Bing refers to Microsoft content within its Top 1 and Top 3 results when no other engine similarly references this content for a slightly smaller percentage of queries than in my Edelman & Lockwood replication. Yet Bing continues to exhibit a strong tendency to rank Microsoft content more prominently than rival engines do. For example, when Bing refers to Microsoft content within its Top 5 results, other engines agree with this ranking for less than 2% of queries; and when Bing references Microsoft content within its Top 3 results, no other engine references that content for 99.2% of queries:
Regression analysis further illustrates Bing’s propensity to reference Microsoft content that rivals do not. The following table reports the likelihood that Microsoft content is referred to in a Bing search as compared to searches on rival engines (again reported in odds ratios).
Bing refers to Microsoft content in its first results position about 56 times more often than rival engines refer to Microsoft content in this same position. Across the entire first page, Microsoft content appears on a Bing search about 25 times more often than it does on any other engine. Both Google and Blekko are accordingly significantly less likely to reference Microsoft content. Notice further that, contrary to the findings in the smaller study, Google is slightly less likely to return Microsoft content than is Blekko, both in its first results position and across its entire first page.
A Closer Look at Google v. Bing
Consistent with the smaller sample, I find again that Bing is more biased than Google by these metrics. In other words, Bing ranks its own content significantly more highly than its rivals do more frequently than Google does, although the discrepancy between the two engines is smaller here than in the study of Edelman & Lockwood’s queries. As noted above, Bing is over twice as likely as Google to refer to its own content in the first results position.
Figures 7 and 8 present the same data reported above, but with Blekko removed, to allow for a direct visual comparison of own-content bias between Google and Bing.
Consistent with my earlier results, Bing places Microsoft content above where Google places that same (Microsoft) content more frequently than Google places its own content above where Bing places that same (Google) content.
This result is particularly interesting given the strength of the accusations condemning Google for behaving in precisely this way. That Bing references Microsoft content just as often as—and frequently even more often than!—Google references its own content strongly suggests that this behavior is a function of procompetitive product differentiation, and not abuse of market power. But I’ll save an in-depth analysis of this issue for my next post, where I’ll also discuss whether any of the results reported in this series of posts support anticompetitive foreclosure theories or otherwise suggest antitrust intervention is warranted.