Investigating Search Bias: Measuring Edelman & Lockwood’s Failure to Measure Bias in Search

Joshua D. Wright, Truth on the Market (November 8, 2011)
https://truthonthemarket.com/2011/11/08/investigating-search-bias-measuring-edelman-lockwoods-failure-to-measure-bias-in-search/

Last week I linked to my new study on “search bias.”  At the time I noted I would have a few blog posts in the coming days discussing the study.  This is the first of those posts.

A lot of the frenzy around Google turns on “search bias,” that is, instances when Google references its own links or its own content (such as Google Maps or YouTube) in its search results pages.  Some search engine critics condemn such references as inherently suspect and almost by their very nature harmful to consumers.  Yet these allegations suffer from several crucial shortcomings.  As I’ve noted (see, e.g., here and here), these naked assertions of discrimination are insufficient to state a cognizable antitrust claim, divorced as they are from consumer welfare analysis.  Indeed, such “discrimination” (some would call it “vertical integration”) has a well-recognized propensity to yield either pro-competitive or competitively neutral outcomes, rather than concrete consumer welfare losses.  Moreover, because search engines exist in an incredibly dynamic environment, marked by constant innovation and fierce competition, we would expect different engines, utilizing different algorithms and appealing to different consumer preferences, to emerge.  So when search engines engage in product differentiation of this sort, there is no reason to be immediately suspicious of these business decisions.

No reason to be immediately suspicious – but there could, conceivably, be a problem.  If there is, we would want to see empirical evidence of it—of both the existence of bias and the consumer harm emanating from it.  But one of the most notable features of this debate is the striking lack of empirical data.  Surprisingly little research has been done in this area, despite frequent assertions that own-content bias is commonly practiced and poses a significant threat to consumers (see, e.g., here).

My paper is an attempt to rectify this.  In the paper, I investigate the available data to determine whether and to what extent own-content bias actually occurs, by analyzing and replicating a study by Ben Edelman and Ben Lockwood (E&L) and conducting my own study of a larger, randomized set of search queries.

In this post I discuss my analysis and critique of E&L; in future posts I’ll present my own replication of their study, as well as the results of my larger study of 1,000 random search queries.  Finally, I’ll analyze whether any of these findings support anticompetitive foreclosure theories or are otherwise sufficient to warrant antitrust intervention.

E&L “investigate . . . [w]hether search engines’ algorithmic results favor their own services, and if so, which search engines do most, to what extent, and in what substantive areas.”  Their approach is to measure the difference in how frequently search engines refer to their own content relative to how often their rivals do so.
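
To make the mechanics concrete, here is a minimal sketch of the kind of tabulation this approach entails.  It is my illustration, not E&L’s code; the engines, domains, and affiliation mapping are invented for the example:

```python
# Hypothetical top-3 results: engine -> list of (query, [result domains]).
results = {
    "google": [("email", ["mail.google.com", "mail.yahoo.com", "aol.com"])],
    "yahoo":  [("email", ["mail.yahoo.com", "mail.google.com", "outlook.com"])],
}

# Hypothetical mapping from affiliated domains to their parent engine.
affiliation = {"mail.google.com": "google", "mail.yahoo.com": "yahoo"}

def own_vs_rival_rates(results, affiliation, owner):
    """Rate at which `owner` links to its own affiliated content, versus
    the rate at which every other engine links to that same content."""
    own_hits = own_total = rival_hits = rival_total = 0
    for engine, queries in results.items():
        for _query, domains in queries:
            for d in domains:
                if engine == owner:
                    own_total += 1
                    own_hits += affiliation.get(d) == owner
                else:
                    rival_total += 1
                    rival_hits += affiliation.get(d) == owner
    return own_hits / own_total, rival_hits / rival_total

print(own_vs_rival_rates(results, affiliation, "google"))  # e.g. (0.33, 0.33)
```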

One note at the outset:  While this approach provides useful descriptive facts about how search engines link to their own content, it does little to inform antitrust analysis.  Edelman and Lockwood begin from the rather odd premise that variation in results across differentiated search engines is itself a puzzle, one that creates an air of suspicion around the practice; indeed, they claim that “it is hard to see why results would vary . . . across search engines.”  This assertion is simply absurd.  Danny Sullivan provides a nice critique of the claim:

It’s not hard to see why search engine results differ at all.  Search engines each use their own “algorithm” to cull through the pages they’ve collected from across the web, to decide which pages to rank first . . . . Google has a different algorithm than Bing.  In short, Google will have a different opinion than Bing.  Opinions in the search world, as with the real world, don’t always agree.

Moreover, this assertion completely discounts both the vigorous competitive product differentiation that occurs in nearly all modern product markets and the obvious selection effects at work in own-content bias (Google users likely prefer Google content).  This combination detaches E&L’s analysis from the consumer welfare perspective, and thus from antitrust policy relevance, despite their claims to the contrary (and the fact that their results actually exhibit very little bias).

Several methodological issues undermine the policy relevance of E&L’s analysis.  First, they hand-select 32 search queries and execute searches on Google, Bing, Yahoo, AOL, and Ask.  This hand-selected, non-random sample of 32 search queries cannot generate reliable inferences regarding the frequency of bias—a critical ingredient in understanding its potential competitive effects.  Indeed, E&L acknowledge their queries were chosen precisely because they are likely to return results including Google content (e.g., email, images, maps, video, etc.).  As the toy simulation below illustrates, a sample selected this way will overstate the frequency of own-content referrals relative to the full population of queries.
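
To see why sampling matters so much, consider a simple simulation (entirely hypothetical numbers, not E&L’s data).  If, say, 10% of all queries return Google content, a random sample recovers roughly that rate, while a sample picked because its queries tend to return Google content dramatically overstates it:

```python
import random

random.seed(0)  # reproducible illustration
POP = 10_000

# Hypothetical population of queries: True if the query returns Google
# content.  Assume the true population rate is 10%.
population = [random.random() < 0.10 for _ in range(POP)]

# A random sample of 1,000 queries estimates the true rate well.
random_sample = random.sample(population, 1000)

# A "hand-picked" sample of 32 queries, drawn mostly from queries known to
# return Google content (mirroring E&L's acknowledged selection criterion).
google_queries = [q for q in population if q]
other_queries = [q for q in population if not q]
hand_picked = random.sample(google_queries, 24) + random.sample(other_queries, 8)

print(sum(random_sample) / len(random_sample))  # ~0.10 -- near the true rate
print(sum(hand_picked) / len(hand_picked))      # 0.75 -- wildly overstated
```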

E&L analyze the top three organic search results for each query on each engine.  They find that 19% of all results across all five search engines refer to content affiliated with one of the five.  Focusing on the first (“top”) position, they report that Google refers to its own content there about twice as often as Yahoo and Bing refer to Google content in that position.  Additionally, they note that Yahoo appears more biased than Google when the entire first page, rather than only the first organic result, is evaluated.

E&L also offer a strained attempt to deal with the possibility of competitive product differentiation among search engines.  They examine differences among search engines’ references to their own content by “compar[ing] the frequency with which a search engine links to its own pages, relative to the frequency with which other search engines link to that search engine’s pages.”  However, their evidence undermines claims that Google’s own-content bias is significant and systematic relative to its rivals’.  In fact, almost zero evidence of statistically significant own-content bias by Google emerges.

E&L find that, in general, Google is no more likely to refer to its own content than other search engines are to refer to that same content; indeed, across the vast majority of their results, Google’s search results are not statistically more likely to refer to Google content than rivals’ results are.

The same data can be examined to test the likelihood that a search engine will refer to content affiliated with a rival search engine.  Rather than exhibiting bias in favor of an engine’s own content, a “biased” search engine might conceivably be less likely to refer to content affiliated with its rivals.  The table below reports the likelihood (in odds ratios) that a search engine’s content appears in a rival engine’s results.

The first two columns of the table demonstrate that both Google and Yahoo content appear in the first search result less frequently in rivals’ results than in their own.  Although E&L’s original analysis contains too little Bing data for a robust look at the first position, the next three columns of Table 1 illustrate that all three engines’ content (Google, Yahoo, and Bing) appears less often on the first page of rivals’ search results than on the engine’s own results pages.  An odds ratio of 1 would mean rivals are exactly as likely as the engine itself to refer to that content; only Yahoo’s ratios differ significantly from 1.  As between Google and Bing, the results are notably similar.
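
For readers interested in the mechanics, here is a sketch of how an odds ratio and its statistical significance would typically be computed from counts like these (I use a standard Wald confidence interval on the log odds ratio; the counts are invented for illustration and are not E&L’s data):

```python
import math

def odds_ratio_ci(own_hits, own_misses, rival_hits, rival_misses, z=1.96):
    """Odds ratio of an engine linking to its own content versus rivals
    doing so, with an approximate 95% Wald confidence interval."""
    or_ = (own_hits * rival_misses) / (own_misses * rival_hits)
    se = math.sqrt(1/own_hits + 1/own_misses + 1/rival_hits + 1/rival_misses)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts: of 96 first-page slots on its own pages, an engine
# links to its own content 30 times; rivals link to that engine's content
# 12 times out of 96 slots.
or_, (lo, hi) = odds_ratio_ci(30, 66, 12, 84)
print(f"OR = {or_:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
# An odds ratio of 1 means no difference; if the confidence interval
# excludes 1, the difference is statistically significant.
```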

E&L also make a limited attempt to consider the possibility that favorable placement of a search engine’s own content is a response to user preferences rather than anticompetitive motives.  Using click-through data, they find, unsurprisingly, that the first search result tends to receive the most clicks (72%, on average).  They then identify one search term for which they believe bias plays an important role in driving user traffic.  For the search query “email,” Google ranks its own Gmail first and Yahoo Mail second; however, E&L also find that Gmail receives only 29% of clicks while Yahoo Mail receives 54%.  E&L claim that this finding strongly indicates that Google is engaging in conduct that harms users and undermines their search experience.
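
The arithmetic behind that comparison is simple; the sketch below uses hypothetical click counts chosen only to reproduce E&L’s reported shares:

```python
# Hypothetical click counts for the query "email," scaled to match the
# shares E&L report (29% Gmail, 54% Yahoo Mail).
clicks = {"Gmail (rank 1)": 290, "Yahoo Mail (rank 2)": 540, "other results": 170}

total = sum(clicks.values())
for name, n in clicks.items():
    print(f"{name}: {n / total:.0%}")

# Despite its second-place ranking, Yahoo Mail captures a majority of
# clicks -- evidence that users can and do reach it.
```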

However, from a competition analysis perspective, that inference is not sound.  Indeed, the fact that the second-listed Yahoo Mail link received the majority of clicks demonstrates precisely that Yahoo was not competitively foreclosed from access to users.  Taken collectively, E&L are not able to muster evidence of potential competitive foreclosure.

While it’s important to have an evidence-based discussion surrounding search engine results and their competitive implications, it’s also critical to recognize that bias alone is not evidence of competitive harm.  Indeed, any identified bias must be evaluated in the appropriate antitrust economic context of competition and consumers, rather than individual competitors and websites.  E&L’s analysis provides a useful starting point for describing how search engines differ in their referrals to their own content.  But, taken at face value, their results actually demonstrate little or no evidence of bias—let alone that the little bias they do find is causing any consumer harm.

As I’ll discuss in coming posts, evidence gathered since E&L conducted their study further suggests their claims that bias is prevalent, inherently harmful, and sufficient to warrant antitrust intervention are overstated and misguided.