Archives For Bing

After more than a year of complaining about Google and being met with responses from me (see also here, here, here, here, and here, among others) and many others that these complaints have yet to offer up a rigorous theory of antitrust injury — let alone any evidence — FairSearch yesterday offered up its preferred remedies aimed at addressing, in its own words, “the fundamental conflict of interest driving Google’s incentive and ability to engage in anti-competitive conduct. . . . [by putting an] end [to] Google’s preferencing of its own products ahead of natural search results.”  Nothing in the post addresses the weakness of the organization’s underlying claims, and its proposed remedies would be damaging to consumers.

FairSearch’s first and core “abuse” is “[d]iscriminatory treatment favoring Google’s own vertical products in a manner that may harm competing vertical products.”  To address this it proposes prohibiting Google from preferencing its own content in search results and suggests as additional, “structural remedies” “[r]equiring Google to license data” and “[r]equiring Google to divest its vertical products that have benefited from Google’s abuses.”

Tom Barnett, former AAG for antitrust, counsel to FairSearch member Expedia, and FairSearch’s de facto spokesman, should be ashamed to be associated with claims and proposals like these.  He knows better than most that harm to competitors is not the issue under US antitrust laws.  Rather, US antitrust law requires a demonstration that consumers — not just rivals — will be harmed by a challenged practice.  He also knows (as economists have long understood) that favoring one’s own content — i.e., “vertically integrating” to produce both inputs and finished products — is generally procompetitive.

In fact, Barnett has said as much before:

Because a Section 2 violation hurts competitors, they are often the focus of section 2 remedial efforts.  But competitor well-being, in itself, is not the purpose of our antitrust laws.

Access remedies also raise efficiency and innovation concerns.  By forcing a firm to share the benefits of its investments and relieving its rivals of the incentive to develop comparable assets of their own, access remedies can reduce the competitive vitality of an industry.

Not only has FairSearch failed to demonstrate that Google has preferenced its own products, the organization has also demonstrated neither harm to consumers arising from such conduct nor even antitrust-cognizable harm to competitors arising from it.

As an empirical study supported by the International Center for Law and Economics (itself, in turn, supported in part by Google, and of which I am the Executive Director) makes clear, search bias almost never occurs.  And when it does occur, it is more often practiced by the non-dominant Bing than by Google.  Moreover, and most important, the evidence marshaled in favor of the search bias claim (largely adduced by Harvard Business School professor Ben Edelman, whose work is supported by Microsoft) demonstrates that consumers do, indeed, have the ability to detect and counter allegedly biased results.

Recall what search bias means in this context.  According to Edelman, looking at the top three search results, Google links to its own content (think Gmail, Google Maps, etc.) in the first search result about twice as often as Yahoo! and Bing link to Google content in this position.  While the ICLE paper refutes even this finding, notice what it means:  “Biased” search results lead to a reshuffling of results among the top few results offered up; there is no evidence that Google simply drops users’ preferred results.  While it is true that the difference in click-through rates between the top and second results can be significant, Edelman’s own findings actually demonstrate that consumers are capable of finding what they want when their preferred (more relevant) result appears in the second or third slot.

Edelman notes that Google ranks Gmail first and Yahoo! Mail second in his study, even though users seem to think Yahoo! Mail is the more relevant result:  Gmail receives only 29% of clicks while Yahoo! Mail receives 54%.  According to Edelman, this is proof that Google’s conduct forecloses access by competitors and harms consumers under the antitrust laws.

But is it?  Note that users click on the second, apparently more-relevant result nearly twice as often as they click on the first.  This demonstrates that Yahoo! is not competitively foreclosed from access to users, and that users are perfectly capable of identifying their preferred results, even when they appear lower in the results page.  This is simply not foreclosure — in fact, if anything, it demonstrates the opposite.
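The arithmetic behind this point is worth making explicit.  A minimal sketch, using only the click shares Edelman reports (the ratio is computed here for illustration):

```python
# Edelman's reported click shares for the query at issue:
# Gmail ranked first, Yahoo! Mail ranked second.
gmail_share = 0.29       # share of clicks on the first (Google) result
yahoo_mail_share = 0.54  # share of clicks on the second (Yahoo!) result

# Users click the second, apparently more-relevant result nearly
# twice as often as the first -- hard to square with "foreclosure."
ratio = yahoo_mail_share / gmail_share
print(f"Second result clicked {ratio:.1f}x as often as the first")
```

If placement alone determined traffic, the first slot would dominate; instead, clicks follow perceived relevance even into the second position.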

Among other things, foreclosure — limiting access by a competitor to a necessary input — under the antitrust laws must be substantial enough to prevent a rival from reaching sufficient scale that it can effectively compete.  It is no more “foreclosure” for Google to “impair” traffic to Kayak’s site by offering its own Flight Search than it is for Safeway to refuse to allow Kroger to sell Safeway’s house brand.  Rather, actionable foreclosure requires that a firm “impair[s] the ability of rivals to grow into effective competitors that erode the firm’s position.”  Such quantifiable claims are noticeably absent from critics’ complaints against Google.

And what about those allegedly harmed competitors?  How are they faring?  As of September 2012, Google ranks 7th in visits among metasearch travel sites, with a paltry 1.4% of such visits.  Residing at number one?  FairSearch founding member, Kayak, with a whopping 61% (up from 52% six months after Google entered the travel search business).  Nextag.com, another vocal Google critic, has complained that Google’s conduct has forced it to shift its strategy from attracting traffic through Google’s organic search results to other sources, including paid ads on Google.com.  And how has it fared?  It has parlayed its experience with new data sources into a successful new business model, Wize Commerce, showing exactly the sort of “incentive to develop comparable assets of their own” Barnett worries will be destroyed by aggressive antitrust enforcement.  And Barnett’s own Expedia.com?  Currently, it’s the largest travel company in the world, and it has only grown in recent years.

Meanwhile consumers’ interests have been absent from critics’ complaints since the beginning.  And not only do they fail to demonstrate any connection between harm to consumers and the claimed harms to competitors arising from Google’s conduct, but they also ignore the harm to consumers that may result from restricting potentially efficient business conduct — like the integration of Google Maps and other products into its search results.  That Google not only produces search results but also owns some of the content that generates those results is not a problem cognizable by modern antitrust.

FairSearch and other Google critics have utterly failed to make a compelling case, and their proposed remedies would serve only to harm, not help, consumers.

In my series of three posts (here, here and here) drawn from my empirical study on search bias I have examined whether search bias exists, and, if so, how frequently it occurs.  This, the final post in the series, assesses the results of the study (as well as the Edelman & Lockwood (E&L) study to which it responds) to determine whether the own-content bias I’ve identified is in fact consistent with anticompetitive foreclosure or is otherwise sufficient to warrant antitrust intervention.

As I’ve repeatedly emphasized, while I refer to differences among search engines’ rankings of their own or affiliated content as “bias,” without more these differences do not imply anticompetitive conduct.  It is wholly unsurprising, and indeed consistent with vigorous competition among engines, that differentiation emerges with respect to algorithms.  However, it is especially important to note that the theories of anticompetitive foreclosure raised by Google’s rivals involve very specific claims about these differences.  Properly articulated vertical foreclosure theories proffer both that bias is (1) sufficient in magnitude to exclude Google’s rivals from achieving efficient scale, and (2) actually directed at Google’s rivals.  Unfortunately for search engine critics, their theories fail on both counts.  The observed own-content bias appears neither extensive enough to prevent rivals from gaining access to distribution nor targeted at Google’s rivals; rather, it seems to be a natural result of intense competition between search engines and of significant benefit to consumers.

Vertical foreclosure arguments are premised upon the notion that rivals are excluded with sufficient frequency and intensity as to render their efforts to compete for distribution uneconomical.  Yet the empirical results simply do not indicate that market conditions are in fact conducive to the types of harmful exclusion contemplated by application of the antitrust laws.  Rather, the evidence indicates that (1) the absolute level of search engine “bias” is extremely low, and (2) “bias” is not a function of market power, but an effective strategy that has arisen as a result of serious competition and innovation between and by search engines.  The first finding undermines competitive foreclosure arguments on their own terms, that is, even if there were no pro-consumer justifications for the integration of Google content with Google search results.  The second finding, even more importantly, reveals that the evolution of consumer preferences for more sophisticated and useful search results has driven rival search engines to satisfy that demand.  Both Bing and Google have shifted toward these results, rendering the complained-of conduct equivalent to satisfying the standard of care in the industry–not restraining competition.

A significant lack of search bias emerges in the representative sample of queries.  This result is entirely unsurprising, given that bias is relatively infrequent even in E&L’s sample of queries specifically designed to identify maximum bias.  In the representative sample, the total percentage of queries for which Google references its own content when rivals do not is even lower—only about 8%—meaning that Google favors its own content far less often than critics have suggested.  This fact is crucial and highly problematic for search engine critics, as their burden in articulating a cognizable antitrust harm includes not only demonstrating that bias exists, but further that it is actually competitively harmful.  As I’ve discussed, bias alone is simply not sufficient to demonstrate any prima facie anticompetitive harm as it is far more often procompetitive or competitively neutral than actively harmful.  Moreover, given that bias occurs in less than 10% of queries run on Google, anticompetitive exclusion arguments appear unsustainable.

Indeed, theories of vertical foreclosure find virtually zero empirical support in the data.  Moreover, it appears that, rather than being a function of monopolistic abuse of power, search bias has emerged as an efficient competitive strategy, allowing search engines to differentiate their products in ways that benefit consumers.  I find that when search engines do reference their own content on their search results pages, it is generally unlikely that another engine will reference this same content.  However, the fact that both this percentage and the absolute level of own-content inclusion is similar across engines indicates that this practice is not a function of market power (or its abuse), but is rather an industry standard.  In fact, despite conducting a much smaller percentage of total consumer searches, Bing is consistently more biased than Google, illustrating that the benefits search engines enjoy from integrating their own content into results are not necessarily a function of search engine size or volume of queries.  These results are consistent with a business practice that is efficient, and they stand in significant tension with arguments that such integration is designed to facilitate competitive foreclosure.

My last two posts on search bias (here and here) have analyzed and critiqued Edelman & Lockwood’s small study on search bias.  This post extends this same methodology and analysis to a random sample of 1,000 Google queries (released by AOL in 2006), to develop a more comprehensive understanding of own-content bias.  As I’ve stressed, these analyses provide useful—but importantly limited—glimpses into the nature of the search engine environment.  While these studies are descriptively helpful, actual harm to consumer welfare must always be demonstrated before cognizable antitrust injuries arise.  And naked identifications of own-content bias simply do not inherently translate to negative effects on consumers (see, e.g., here and here for more comprehensive discussion).

With that settled, let’s jump into the results of the 1,000 random search query study.

How Do Search Engines Rank Their Own Content?

Consistent with our earlier analysis, a natural starting point for measuring differentiation among search engines with respect to placing their own content is to compare how a search engine ranks its own content relative to how other engines rank that same content (e.g., how Google ranks “Google Maps” relative to how Bing or Blekko rank it).  Restricting attention exclusively to the first or “top” position, I find that Google does not refer to its own content in over 90% of queries.  Similarly, Bing does not reference Microsoft content in 85.4% of queries.  Google refers to its own content in the first position when other search engines do not in only 6.7% of queries, while Bing does so over twice as often, referencing Microsoft content that no other engine references in the first position in 14.3% of queries.  The following two charts illustrate the percentage of Google or Bing first-position results, respectively, dedicated to own content across search engines.

The most striking aspect of these results is the small fraction of queries for which placement of own-content is relevant.  The results are similar when I expand consideration to the entire first page of results; interestingly, however, while the levels of own-content bias are similar considering the entire first page of results, Bing is far more likely than Google to reference its own content in its very first results position.
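The first-position metric described above reduces to a simple tally.  The sketch below illustrates the computation; the queries and results are hypothetical stand-ins, not data from the study:

```python
# Hypothetical top-result data: for each query, the property each
# engine's first result points to ("google", "microsoft", or "other").
top_results = {
    "email": {"google": "google", "bing": "microsoft", "blekko": "other"},
    "maps":  {"google": "google", "bing": "other",     "blekko": "google"},
    "pizza": {"google": "other",  "bing": "other",     "blekko": "other"},
    "video": {"google": "other",  "bing": "microsoft", "blekko": "other"},
}

def exclusive_own_content_share(results, engine, own):
    """Fraction of queries where `engine` places its own content first
    and no rival places that same content first."""
    hits = 0
    for per_engine in results.values():
        if per_engine[engine] != own:
            continue  # engine did not rank its own content first
        rivals = [v for k, v in per_engine.items() if k != engine]
        if own not in rivals:
            hits += 1  # no rival agrees with the first-position placement
    return hits / len(results)

print(exclusive_own_content_share(top_results, "google", "google"))   # 0.25
print(exclusive_own_content_share(top_results, "bing", "microsoft"))  # 0.5
```

In the study itself, this share comes out to 6.7% for Google and 14.3% for Bing over the 1,000-query sample.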

Examining Search Engine “Bias” on Google

Two distinct differences between the results of this larger study and my replication of Edelman & Lockwood emerge: (1) Google and Bing refer to their own content in a significantly smaller percentage of cases here than in the non-random sample; and (2) in general, when Google or Bing does rank its own content highly, rival engines are unlikely to similarly rank that same content.

The following table reports the percentages of queries for which Google’s ranking of its own content and its rivals’ rankings of that same content differ significantly. When Google refers to its own content within its Top 5 results, at least one other engine similarly ranks this content for only about 5% of queries.

The following table presents the likelihood that Google content will appear in a Google search, relative to searches conducted on rival engines (reported in odds ratios).

The first and third columns report results indicating that Google-affiliated content is more likely to appear in a search executed on Google than on rival engines.  Google is approximately 16 times more likely to refer to its own content on its first page than is any other engine.  Bing and Blekko are both significantly less likely to refer to Google content in their first result or on their first page than Google is to refer to Google content within these same parameters.  In each iteration, Bing is more likely to refer to Google content than is Blekko, and in the case of the first result, Bing is much more likely to do so.  Again, to be clear, the fact that Bing is more likely to rank its own content is not suggestive that the practice is problematic.  Quite the contrary: the fact that firms both with and without market power in search (to the extent that is a relevant antitrust market) engage in similar conduct suggests that there must be efficiency explanations for the practice.  The standard response, of course, is that the competitive implications of a practice are different when a firm with market power engages in it.  That’s not exactly right.  It is true that firms with market power can engage in conduct that gives rise to potential antitrust problems when the same conduct from a firm without market power would not; however, when firms without market power engage in the same business practice, antitrust analysts must seriously consider the efficiency implications of that practice.  In other words, there is nothing in the mantra that things are “different” when larger firms do them that undercuts potential efficiency explanations.
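For readers unfamiliar with odds ratios, the statistic compares the odds of an event in one group to the odds in another.  A minimal sketch with made-up counts (not the study’s data, which come from regression estimates):

```python
def odds_ratio(hits_a, n_a, hits_b, n_b):
    """Odds of an event in group A divided by the odds in group B."""
    odds_a = hits_a / (n_a - hits_a)
    odds_b = hits_b / (n_b - hits_b)
    return odds_a / odds_b

# Made-up illustration: suppose Google content appears on 80 of 1,000
# Google first pages but on only 5 of 1,000 rival first pages.
# That works out to an odds ratio of roughly 17.
print(odds_ratio(80, 1000, 5, 1000))
```

An odds ratio of 1 would mean the content is equally likely to appear on either engine; values well above 1, like the 16x figure reported here, indicate the engine favors its own content relative to rivals’ treatment of that content.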

Examining Search Engine “Bias” on Bing

For queries within the larger sample, Bing refers to Microsoft content within its Top 1 and Top 3 results when no other engine similarly references this content for a slightly smaller percentage of queries than in my Edelman & Lockwood replication.  Yet Bing continues to exhibit a strong tendency to rank Microsoft content more prominently than rival engines do.  For example, when Bing refers to Microsoft content within its Top 5 results, other engines agree with this ranking for less than 2% of queries; and when Bing refers to Microsoft content within its Top 3 results, no other engine references that content in 99.2% of those queries:

Regression analysis further illustrates Bing’s propensity to reference Microsoft content that rivals do not.  The following table reports the likelihood that Microsoft content is referred to in a Bing search as compared to searches on rival engines (again reported in odds ratios).

Bing refers to Microsoft content in its first results position about 56 times more often than rival engines refer to Microsoft content in this same position.  Across the entire first page, Microsoft content appears on a Bing search about 25 times more often than it does on any other engine.  Both Google and Blekko are accordingly significantly less likely to reference Microsoft content.  Notice further that, contrary to the findings in the smaller study, Google is slightly less likely to return Microsoft content than is Blekko, both in its first results position and across its entire first page.

A Closer Look at Google v. Bing

Consistent with the smaller sample, I find again that Bing is more biased than Google using these metrics.  In other words, Bing ranks its own content significantly more highly than its rivals do more frequently than Google does, although the discrepancy between the two engines is smaller here than in the study of Edelman & Lockwood’s queries.  As noted above, Bing is over twice as likely as Google to refer to its own content in the first results position.

Figures 7 and 8 present the same data reported above, but with Blekko removed, to allow for a direct visual comparison of own-content bias between Google and Bing.

Consistent with my earlier results, Bing favors Microsoft content (relative to how Google ranks that same content) more frequently than Google favors its own content (relative to how Bing ranks that same content).

This result is particularly interesting given the strength of the accusations condemning Google for behaving in precisely this way.  That Bing references Microsoft content just as often as—and frequently even more often than!—Google references its own content strongly suggests that this behavior is a function of procompetitive product differentiation, and not abuse of market power.  But I’ll save an in-depth analysis of this issue for my next post, where I’ll also discuss whether any of the results reported in this series of posts support anticompetitive foreclosure theories or otherwise suggest antitrust intervention is warranted.

In my last post, I discussed Edelman & Lockwood’s (E&L’s) attempt to catch search engines in the act of biasing their results—as well as their failure to actually do so.  In this post, I present my own results from replicating their study.  Unlike E&L, I find that Bing is consistently more biased than Google, for reasons discussed further below, although neither engine references its own content as frequently as E&L suggest.

I ran searches for E&L’s original 32 non-random queries using three different search engines—Google, Bing, and Blekko—between June 23 and July 5 of this year.  This replication is useful, as search technology has changed dramatically since E&L recorded their results in August 2010.  Bing now powers Yahoo, and Blekko has had more time to mature and enhance its results.  Blekko serves as a helpful “control” engine in my study, as it is totally independent of Google and Microsoft, and so has no incentive to refer to Google or Microsoft content unless it is actually relevant to users.  In addition, because Blekko’s model is significantly different from Google’s and Microsoft’s, if results on all three engines agree that specific content is highly relevant to the user query, it lends significant credibility to the notion that the content places well on the merits rather than being attributable to bias or other factors.
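The mechanics of such a replication can be sketched as a small collection harness.  Everything here is illustrative: the `search` function is a hypothetical stand-in for however first-page results are actually gathered, and the queries shown are placeholders, not E&L’s actual 32:

```python
# Placeholder queries standing in for E&L's 32 hand-selected queries.
QUERIES = ["email", "maps", "driving directions"]
ENGINES = ["google", "bing", "blekko"]

def search(engine, query):
    """Hypothetical stand-in: return the ordered list of first-page
    result URLs for `query` on `engine`."""
    raise NotImplementedError("replace with an actual collection step")

def collect(queries=QUERIES, engines=ENGINES, fetch=search):
    """Record each engine's first-page results for every query,
    keyed by query and then by engine."""
    return {q: {e: fetch(e, q) for e in engines} for q in queries}
```

With the results recorded this way, the bias metrics discussed in these posts (first-position agreement, first-page agreement, own-content shares) are straightforward comparisons across the per-engine lists for each query.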

How Do Search Engines Rank Their Own Content?

Focusing solely upon the first position, Google refers to its own products or services when no other search engine does in 21.9% of queries; in another 21.9% of queries, both Google and at least one other search engine rival (i.e. Bing or Blekko) refer to the same Google content with their first links.

But restricting focus to the first position is too narrow.  It would be a mistake to treat every instance in which Google or Bing ranks its own content first and rivals do not as bias; such a restrictive definition would count cases in which all three search engines rank the same content prominently (agreeing that it is highly relevant), just not all in the first position.

The entire first page of results provides a more informative comparison.  I find that Google and at least one other engine return Google content on the first page of results in 7% of the queries.  Google refers to its own content on the first page of results without agreement from either rival search engine in only 7.9% of the queries.  Meanwhile, Bing and at least one other engine refer to Microsoft content in 3.2% of the queries.  Bing references Microsoft content without agreement from either Google or Blekko in 13.2% of the queries:

This evidence indicates that Google’s ranking of its own content differs significantly from its rivals in only 7.9% of queries, and that when Google ranks its own content prominently it is generally perceived as relevant.  Further, these results suggest that Bing’s organic search results are significantly more biased in favor of Microsoft content than Google’s search results are in favor of Google’s content.

Examining Search Engine “Bias” on Google

The following table presents the percentages of queries for which Google’s ranking of its own content differs significantly from its rivals’ ranking of that same content.

Note that percentages below 50 in this table indicate that rival search engines generally see the referenced Google content as relevant and independently believe that it should be ranked similarly.

So when Google ranks its own content highly, at least one rival engine typically agrees with this ranking; for example, when Google places its own content in its Top 3 results, at least one rival agrees with this ranking in over 70% of queries.  Bing especially agrees with Google’s rankings of Google content within its Top 3 and 5 results, failing to include Google content that Google ranks similarly in only a little more than a third of queries.

Examining Search Engine “Bias” on Bing

Bing refers to Microsoft content in its search results far more frequently than its rivals reference the same Microsoft content.  For example, Bing’s top result references Microsoft content for 5 queries, while neither Google nor Blekko ever ranks Microsoft content in the first position:

This table illustrates the significant discrepancies between Bing’s treatment of its own Microsoft content relative to Google and Blekko.  Neither rival engine refers to Microsoft content Bing ranks within its Top 3 results; Google and Blekko do not include any Microsoft content Bing refers to on the first page of results in nearly 80% of queries.

Moreover, Bing frequently ranks Microsoft content highly even when rival engines do not refer to the same content at all in the first page of results.  For example, of the 5 queries for which Bing ranks Microsoft content in its top result, Google refers to only one of these 5 within its first page of results, while Blekko refers to none.  Even when comparing results across each engine’s full page of results, Google and Blekko only agree with Bing’s referral of Microsoft content in 20.4% of queries.

Although there are not enough Bing data to test results in the first position in E&L’s sample, Microsoft content appears as results on the first page of a Bing search about 7 times more often than Microsoft content appears on the first page of rival engines.  Also, Google is much more likely to refer to Microsoft content than Blekko, though both refer to significantly less Microsoft content than Bing.

A Closer Look at Google v. Bing

On E&L’s own terms, Bing results are more biased than Google results; rivals are more likely to agree with Google’s algorithmic assessment (than with Bing’s) that its own content is relevant to user queries.  Bing refers to Microsoft content other engines do not rank at all more often than Google refers to its own content without any agreement from rivals.  Figures 1 and 2 display the same data presented above in order to facilitate direct comparisons between Google and Bing.

As Figures 1 and 2 illustrate, Bing search results for these 32 queries are more frequently “biased” in favor of its own content than are Google’s.  The bias is greatest for the Top 1 and Top 3 search results.

My study finds that Bing exhibits far more “bias” than E&L identify in their earlier analysis.  For example, in E&L’s study, Bing does not refer to Microsoft content at all in its Top 1 or Top 3 results; moreover, Bing refers to Microsoft content within its entire first page 11 times, while Google and Yahoo refer to Microsoft content 8 and 9 times, respectively.  Most likely, the significant increase in Bing’s “bias” differential is largely a function of Bing’s introduction of localized and personalized search results, and it represents serious competitive efforts on Bing’s part.

Again, it’s important to stress E&L’s limited and non-random sample, and to emphasize the danger of making strong inferences about the general nature or magnitude of search bias based upon these data alone.  However, the data indicate that Google’s own-content bias is relatively small even in a sample collected precisely to focus upon the queries most likely to generate it.  In fact—as I’ll discuss in my next post—own-content bias occurs even less often in a more representative sample of queries, strongly suggesting that such bias does not raise the competitive concerns attributed to it.

Last week I linked to my new study on “search bias.”  At the time I noted I would have a few blog posts in the coming days discussing the study.  This is the first of those posts.

A lot of the frenzy around Google turns on “search bias,” that is, instances when Google references its own links or its own content (such as Google Maps or YouTube) in its search results pages.  Some search engine critics condemn such references as inherently suspect and almost by their very nature harmful to consumers.  Yet these allegations suffer from several crucial shortcomings.  As I’ve noted (see, e.g., here and here), these naked assertions of discrimination are insufficient to state a cognizable antitrust claim, divorced as they are from consumer welfare analysis.  Indeed, such “discrimination” (some would call it “vertical integration”) has a well-recognized propensity to yield either pro-competitive or competitively neutral outcomes, rather than concrete consumer welfare losses.  Moreover, because search engines exist in an incredibly dynamic environment, marked by constant innovation and fierce competition, we would expect different engines, utilizing different algorithms and appealing to different consumer preferences, to emerge.  So when search engines engage in product differentiation of this sort, there is no reason to be immediately suspicious of these business decisions.

No reason to be immediately suspicious – but there could, conceivably, be a problem.  If there is, we would want to see empirical evidence of it—of both the existence of bias, as well as the consumer harm emanating from it.  But one of the most notable features of this debate is the striking lack of empirical data.  Surprisingly little research has been done in this area, despite frequent assertions that own-content bias is commonly practiced and poses a significant threat to consumers (see, e.g., here).

My paper is an attempt to rectify this.  In the paper, I investigate the available data to determine whether and to what extent own-content bias actually occurs, by analyzing and replicating a study by Ben Edelman and Ben Lockwood (E&L) and conducting my own study of a larger, randomized set of search queries.

In this post I discuss my analysis and critique of E&L; in future posts I’ll present my own replication of their study, as well as the results of my larger study of 1,000 random search queries.  Finally, I’ll analyze whether any of these findings support anticompetitive foreclosure theories or are otherwise sufficient to warrant antitrust intervention.

E&L “investigate . . . [w]hether search engines’ algorithmic results favor their own services, and if so, which search engines do most, to what extent, and in what substantive areas.”  Their approach is to measure the difference in how frequently search engines refer to their own content relative to how often their rivals do so.

One note at the outset:  While this approach provides useful descriptive facts about the differences between how search engines link to their own content, it does little to inform antitrust analysis, because Edelman and Lockwood begin from the rather odd premise that variation among differentiated search engines is itself a puzzle that casts suspicion on the practice—in fact, they claim that “it is hard to see why results would vary . . . across search engines.”  This assertion, of course, is simply absurd.  Indeed, Danny Sullivan provides a nice critique of this claim:

It’s not hard to see why search engine results differ at all.  Search engines each use their own “algorithm” to cull through the pages they’ve collected from across the web, to decide which pages to rank first . . . . Google has a different algorithm than Bing.  In short, Google will have a different opinion than Bing.  Opinions in the search world, as with the real world, don’t always agree.

Moreover, this assertion completely discounts both the vigorous competitive product differentiation that occurs in nearly all modern product markets as well as the obvious selection effects at work in own-content bias (Google users likely prefer Google content).  This combination detaches E&L’s analysis from the consumer welfare perspective, and thus antitrust policy relevance, despite their claims to the contrary (and the fact that their results actually exhibit very little bias).

Several methodological issues undermine the policy relevance of E&L’s analysis.  First, they hand-select 32 search queries and execute searches on Google, Bing, Yahoo, AOL, and Ask.  This hand-selected, non-random sample of 32 queries cannot generate reliable inferences regarding the frequency of bias—a critical ingredient to understanding its potential competitive effects.  Indeed, E&L acknowledge their queries are chosen precisely because they are likely to return results including Google content (e.g., email, images, maps, video, etc.).
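To see why a hand-picked sample cannot support frequency estimates, consider a minimal simulation.  The numbers here are invented purely for illustration (they are not E&L’s data): suppose own-content results actually appear for only 5% of queries.

```python
import random

random.seed(0)

# Hypothetical population of 10,000 queries, of which 5% would surface
# a search engine's own-content result. These counts are invented to
# illustrate the sampling problem, not drawn from E&L's study.
population = [True] * 500 + [False] * 9500

# A random sample of 32 queries estimates the true 5% frequency (noisily).
random_sample = random.sample(population, 32)
print(sum(random_sample) / 32)

# A sample hand-picked from queries known to trigger own-content results
# (as E&L concede theirs was) overstates the frequency by construction.
hand_selected = [q for q in population if q][:32]
print(sum(hand_selected) / 32)  # 1.0 by construction
```

The point is not the particular numbers but the selection effect: conditioning the sample on the very outcome being measured makes the measured frequency uninformative about the population.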

E&L analyze the top three organic search results for each query on each engine.  They find that 19% of all results across all five search engines refer to content affiliated with one of them.  They focus upon the first three organic results and report that Google refers to its own content in the first (“top”) position about twice as often as Yahoo and Bing refer to Google content in this position.  Additionally, they note that Yahoo is more biased than Google when evaluating the first page rather than only the first organic search result.

E&L also offer a strained attempt to deal with the possibility of competitive product differentiation among search engines.  They examine differences among search engines’ references to their own content by “compar[ing] the frequency with which a search engine links to its own pages, relative to the frequency with which other search engines link to that search engine’s pages.”  However, their evidence undermines claims that Google’s own-content bias is significant and systematic relative to its rivals’.  In fact, almost zero evidence of statistically significant own-content bias by Google emerges.

E&L find that, in general, Google is no more likely to refer to its own content than other search engines are to refer to that same content; indeed, across the vast majority of their results, Google’s search results are not statistically more likely to refer to Google content than are rivals’ results.

The same data can be examined to test the likelihood that a search engine will refer to content affiliated with a rival search engine.  Rather than exhibiting bias in favor of an engine’s own content, a “biased” search engine might conceivably be less likely to refer to content affiliated with its rivals.  The table below reports the likelihood (in odds ratios) that a search engine’s content appears in a rival engine’s results.

The first two columns of the table demonstrate that both Google and Yahoo content are referred to in the first search result less frequently in rivals’ search results than in their own.  Although Bing does not have enough data for robust analysis of results in the first position in E&L’s original analysis, the next three columns in Table 1 illustrate that all three engines’ (Google, Yahoo, and Bing) content appears less often on the first page of rivals’ search results than on their own search engine.  However, only Yahoo’s results differ significantly from 1.  As between Google and Bing, the results are notably similar.
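For concreteness, an odds ratio of the sort reported in the table can be computed directly from raw counts.  The counts below are invented for illustration (not E&L’s data); a value below 1 means rivals refer to an engine’s content less often than the engine itself does, and a value statistically indistinguishable from 1 means no detectable difference.

```python
def odds_ratio(hits_a, misses_a, hits_b, misses_b):
    """Odds of an event in group A relative to group B.

    Below 1: the event (e.g., a first result pointing to Google
    content) occurs less often in group A (rivals' result pages)
    than in group B (Google's own result pages)."""
    return (hits_a / misses_a) / (hits_b / misses_b)

# Invented counts, purely illustrative: across 32 queries, a rival's
# first result pointed to Google content 4 times (28 misses), while
# Google's own first result did so 10 times (22 misses).
print(round(odds_ratio(4, 28, 10, 22), 3))  # 0.314: rivals link to it less
```

Whether such a ratio differs *significantly* from 1 is a separate statistical question, which is why the text notes that only Yahoo’s results clear that bar.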

E&L also make a limited attempt to consider the possibility that favorable placement of a search engine’s own content is a response to user preferences rather than anticompetitive motives.  Using click-through data, they find, unsurprisingly, that the first search result tends to receive the most clicks (72%, on average).  They then identify one search term for which they believe bias plays an important role in driving user traffic.  For the search query “email,” Google ranks its own Gmail first and Yahoo Mail second; however, E&L also find that Gmail receives only 29% of clicks while Yahoo Mail receives 54%.  E&L claim that this finding strongly indicates that Google is engaging in conduct that harms users and undermines their search experience.

However, from a competition analysis perspective, that inference is not sound.  Indeed, the fact that the second-listed Yahoo Mail link received the majority of clicks demonstrates precisely that Yahoo was not competitively foreclosed from access to users.  Taken collectively, E&L are not able to muster evidence of potential competitive foreclosure.

While it’s important to have an evidence-based discussion surrounding search engine results and their competitive implications, it’s also critical to recognize that bias alone is not evidence of competitive harm.  Indeed, any identified bias must be evaluated in the appropriate antitrust economic context of competition and consumers, rather than individual competitors and websites.  E&L’s analysis provides a useful starting point for describing how search engines differ in their referrals to their own content.  But, taken at face value, their results actually demonstrate little or no evidence of bias—let alone that the little bias they do find is causing any consumer harm.

As I’ll discuss in coming posts, evidence gathered since E&L conducted their study further suggests their claims that bias is prevalent, inherently harmful, and sufficient to warrant antitrust intervention are overstated and misguided.

Former TOTM blog symposium participant Joshua Gans (visiting Microsoft Research) has a post at TAP on l’affaire hiybbprqag, about which I blogged previously here.

Gans notes, as I did, that Microsoft is not engaged in wholesale copying of Google’s search results, even though doing so would be technologically feasible.  But Gans goes on to draw a normative conclusion:

Let’s start with “imitation,” “copying” and its stronger variants of “plagiarism” and “cheating.” Had Bing wanted to do this and directly map Google’s search results onto its own, it could have done it. It could have set up programs to enter terms in Google and skimmed off the results and then used them directly. And I think we can all agree that that is wrong. Why? Two reasons. First, if Google has invested to produce those results, if others can just hang off them and copy it, Google may not earn the return on its efforts it should do. Second, if Bing were doing this and representing itself as a different kind of search, then that misrepresentation would be misleading. Thus, imitation reduces Google’s reward for innovation while adding no value in terms of diversity.

His first reason why this would be wrong is . . . silly.  I mean, I don’t want to get into a moral debate, but since when is it wrong to engage in activity that “may” hamper another firm’s ability to earn the return on its effort that it “should” (whatever “should” means here)?  I always thought that was called “competition” and we encouraged it.  As I noted the other day, competition via imitation is an important part of Schumpeterian capitalism.  To claim that reducing another company’s profits via imitation is wrong, but doing so via innovation is good and noble, is to hang one’s hat on a distinction that does not really exist.

The second argument, that wholesale copying would amount to misrepresentation, is possible.  But if Microsoft were actually just copying Google’s results, its representations would presumably look quite different than they do now, and the problem would likely not exist.  The claim is speculative, at best.

Now, regardless, I doubt it would be profitable for Microsoft to copy Google wholesale, and this is basically just a red herring (as Gans understands–he goes on to discuss the more “innocuous” imitation at issue).  While I think Gans’ claims that it would be “wrong” are just hand waving, I am confident it would be “wrong” from the point of view of Microsoft’s bottom line–or else they would already be doing it.  In this context, that would seem to be the only standard that matters, unless there were a legal basis for the claim.

On this score, Gans points us to Shane Greenstein (Kellogg).  Greenstein writes:

Let’s start with a weak standard, the law. Legally speaking, imitation is allowed so long as a firm does not violate laws governing patents, copyright, or trade secrets. Patents obviously do not apply to this situation, and neither does copyright, because Google does not get a copyright on a search result. It also does not appear as if Google’s trade secrets were violated. So, generally speaking, it does not appear as if any law has been broken.

This is all well and good, but Greenstein goes on to engage in his own casual moralizing, and his comments are worth reproducing (imitating?) at some length:

The norms of rivalry

There is nothing wrong with one retailer walking through a rival’s shop and getting ideas for what to do. There is really nothing wrong with a designer of a piece of electronic equipment buying a rival’s product and studying it in order to get new ideas for a  better design. 

In the modern Internet, however, there is no longer any privacy for users. Providers want to know as much as they can, and generally the rich suppliers can learn quite a lot about user conduct and preferences.

That means that rivals can learn a great deal about how users conduct their business, even when they are at a rival’s site. It is as if one retailer had a camera in a rival’s store, or one designer could learn the names of the buyers of their rival’s products, and interview them right away.

In the offline world, such intimate familiarity with a rival’s users and their transactions would be uncomfortable. It would seem like an intrusion on the transaction between user and supplier. Why is it permissible in the online world? Why is there any confusion about this being an intrusion in the online world? Why isn’t Microsoft’s behavior seen — cut and dry — as an intrusion?

In other words, the transaction between supplier and user is between supplier and user, and nobody else should be able to observe it without permission of both supplier and user. The user alone does not have the right or ability to invite another party to observe all aspects of the transaction.

That is what bothers me about Bing’s behavior. There is nothing wrong with them observing users, but they are doing more than just that. They are observing their rival’s transaction with users. And learning from it. In other contexts that would not be allowed without explicit permission of both parties — both user and supplier.

Moreover, one party does not like it in this case, as they claim the transaction with users as something they have a right to govern and keep to themselves. There is some merit in that claim.

In most contexts it seems like the supplier’s wishes should be respected. Why not online? (emphasis mine)

Where on Earth do these moral standards come from?  In what way is it not “allowed” (whatever that means here) for a firm to observe and learn from a rival’s transactions with users?  I can see why the rival would prefer it to be otherwise, of course, but so what?  They would also prefer to eradicate their meddlesome rival entirely, if possible (hence Microsoft’s considerable engagement with antitrust authorities concerning Google’s business), but we hardly elevate such desires to the realm of the moral.

What I find most troublesome is the controlling, regulatory mindset implicit in these analyses.  Here’s Gans again:

Outright imitation of this type should be prohibited but what do we call some more innocuous types? Just look at how the look and feel of the iPhone has been adopted by some mobile software developers just as the consumer success of graphic based interfaces did in an earlier time. This certainly reduces Apple’s reward for its innovations but the hit on diversity is murkier because while some features are common, competitors have tried to differentiate themselves. So this is not imitation but it is something more common, leveraging without compensation and how you feel about it depends on just how much reward you think pioneers should receive.

It is usually politicians and not economists (other than politico-economists like Krugman) who think they have a handle on–and an obligation to do something about–things like “how much reward . . . pioneers should receive.”  I would have thought the obvious answer to the question would be either “the optimal amount, but good luck knowing what that is or expecting to find it in the real world,” or else, for the Second Best, “whatever the market gives them.”  The implication that there is some moral standard appreciable by human mortals, or even human economists, is a recipe for disaster.

One of my favorite stories in the ongoing saga over the regulation (and thus the future) of Internet search emerged earlier this week with claims by Google that Microsoft has been copying its answers–using Google search results to bolster the relevance of its own results for certain search terms.  The full story from Internet search journalist extraordinaire, Danny Sullivan, is here, with a follow up discussing Microsoft’s response here.  The New York Times is also on the case with some interesting comments from a former Googler that feed nicely into the Schumpeterian competition angle (discussed below).  And Microsoft consultant (“though on matters unrelated to issues discussed here”)  and Harvard Business prof Ben Edelman coincidentally echoes precisely Microsoft’s response in a blog post here.

What I find so great about this story is how it seems to resolve one of the most significant strands of the ongoing debate–although it does so, from Microsoft’s point of view, unintentionally, to be sure.

Here’s what I mean.  Back when Microsoft first started being publicly identified as a significant instigator of regulatory and antitrust attention paid to Google, the company, via its chief competition counsel, Dave Heiner, defended its stance in large part on the following ground:

All of this is quite important because search is so central to how people navigate the Internet, and because advertising is the main monetization mechanism for a wide range of Web sites and Web services. Both search and online advertising are increasingly controlled by a single firm, Google. That can be a problem because Google’s business is helped along by significant network effects (just like the PC operating system business). Search engine algorithms “learn” by observing how users interact with search results. Google’s algorithms learn less common search terms better than others because many more people are conducting searches on these terms on Google.

These and other network effects make it hard for competing search engines to catch up. Microsoft’s well-received Bing search engine is addressing this challenge by offering innovations in areas that are less dependent on volume. But Bing needs to gain volume too, in order to increase the relevance of search results for less common search terms. That is why Microsoft and Yahoo! are combining their search volumes. And that is why we are concerned about Google business practices that tend to lock in publishers and advertisers and make it harder for Microsoft to gain search volume. (emphasis added).

Claims of “network effects,” “increasing returns to scale,” and the absence of “minimum viable scale” for competitors run rampant (and unsupported) in the various cases against Google.  The TradeComet complaint, for example, claims that

[t]he primary barrier to entry facing vertical search websites is the inability to draw enough search traffic to reach the critical mass necessary to become independently sustainable.

But now we discover (what we should have known all along) that “learning by doing” is not the only way to obtain the data necessary to generate relevant search results: “Learning by copying” works, as well.  And there’s nothing wrong with it–in fact, the very process of Schumpeterian creative destruction assumes imitation.

As Armen Alchian notes in describing his evolutionary process of competition,

Neither perfect knowledge of the past nor complete awareness of the current state of the arts gives sufficient foresight to indicate profitable action . . . [and] the pervasive effects of uncertainty prevent the ascertainment of actions which are supposed to be optimal in achieving profits.  Now the consequence of this is that modes of behavior replace optimum equilibrium conditions as guiding rules of action. First, wherever successful enterprises are observed, the elements common to these observable successes will be associated with success and copied by others in their pursuit of profits or success. “Nothing succeeds like success.”

So on the one hand, I find the hand wringing about Microsoft’s “copying” Google’s results to be completely misplaced–just as the pejorative connotations of “embrace and extend” deployed against Microsoft itself when it was the target of this sort of scrutiny were bogus.  But, at the same time, I see this dynamic essentially decimating Microsoft’s (and others’) claims that Google has an unassailable position because no competitor can ever hope to match its size, and thus its access to information essential to the quality of search results, particularly when it comes to so-called “long-tail” search terms.

Long-tail search terms are queries that are extremely rare and, thus, for which there is little user history (information about which results searchers found relevant and clicked on) to guide future search results.  As Ben Edelman writes in his blog post (linked above) on this issue (trotting out, even while implicitly undercutting, the “minimum viable scale” canard):

Of course the reality is that Google’s high market share means Google gets far more searches than any other search engine. And Google’s popularity gives it a real advantage: For an obscure search term that gets 100 searches per month at Google, Bing might get just five or 10. Also, for more popular terms, Google can slice its data into smaller groups — which results are most useful to people from Boston versus New York, which results are best during the day versus at night, and so forth. So Google is far better equipped to figure out what results users favor and to tailor its listings accordingly. Meanwhile, Microsoft needs additional data, such as Toolbar and Related Sites data, to attempt to improve its results in a similar way.

But of course the “additional data” that Microsoft has access to here is, to a large extent, the same data that Google has.  Danny Sullivan’s follow-up story (also linked above) suggests that Bing doesn’t do all it could to make use of Google’s data: Bing does not, it seems, copy Google search results wholesale, nor does it use user behavior as extensively as it could (for example, by observing searches on Google and then logging the next page visited, which would give Bing a pretty good idea which sites in Google’s results users found most relevant).  But none of that changes the fundamental fact that Microsoft and other search engines can overcome a significant amount of the so-called barrier to entry afforded by Google’s impressive scale simply by imitating much of what Google does (and, one hopes, also innovating enough to offer something better).

Perhaps Google is “better equipped to figure out what users favor.”  But it seems to me that only a trivial amount of this advantage is plausibly attributable to Google’s scale instead of its engineering and innovation.  The fact that Microsoft can (because of its own impressive scale in various markets) and does take advantage of accessible data to benefit indirectly from Google’s own prowess in search is a testament to the irrelevance of these unfortunately pervasive scale and network effect arguments.