Archives For Danny Sullivan

Six months may not seem a great deal of time in the general business world, but in the Internet space it’s a lifetime as new websites, tools and features are introduced every day that change where and how users get and share information. The rise of Facebook is a great example: the social networking platform that didn’t exist in early 2004 filed paperwork last month to launch what is expected to be one of the largest IPOs in history. To put it in perspective, Ford Motor went public nearly forty years after it was founded.

This incredible pace of innovation is seen throughout the Internet, and since Google’s public disclosure of its Federal Trade Commission antitrust investigation just this past June, the landscape of the Internet search market has changed dramatically. And as the needs and expectations of consumers continue to evolve, Internet search must adapt – and quickly – to shifting demand.

One noteworthy development was Apple’s release of Siri, introduced to the world in late 2011 on the most recent iPhone. Today, many consider it the best voice recognition application in history, but its potential really lies in its ability to revolutionize the way we search the Internet, answer questions and consume information. As Eric Jackson of Forbes noted, in the future it may even be a “Google killer.”

Of this we can be certain: Siri is the latest (though certainly not the last) game changer in Internet search, and it has already begun to change people’s expectations about both the process and the results of search. The search box, once needed to connect us with information on the web, is dead or dying. In its place is an application that feels intuitive and personal. Siri has become a near-indispensable entry point, and search engines are merely the back-end. And while Siri is still a new feature, its expansion is inevitable. In fact, it is rumored that Apple is diligently working on Siri-enabled televisions – an entirely new market for the company.

The past six months have also brought the convergence of social media and search engines, as first Bing and more recently Google have incorporated information from a social network into their search results. Again we see technology adapting and responding to the once-unimagined way individuals find, analyze and accept information. Instead of relying on traditional, mechanical search results and the opinions of strangers, this new convergence allows users to find data and receive input directly from people in their social world, offering results curated by friends and associates.

As social networks become more integrated with the Internet at large, reviews from trusted contacts will continue to change the way that users search for information. As David Worlock put it in a post titled “Decline and Fall of the Google Empire,” “Facebook and its successors become the consumer research environment. Search by asking someone you know, or at least have a connection with, and get recommendations and references which take you right to the place where you buy.” The addition of social data to search results lends a layer of novel, trusted data to users’ results. Search Engine Land’s Danny Sullivan agreed, writing, “The new system will perhaps make life much easier for some people, allowing them to find both privately shared content from friends and family plus material from across the web through a single search, rather than having to search twice using two different systems.” It only makes sense, from a competition perspective, that Google followed suit and recently merged its social and search data in an effort to make search more relevant and personal.

Inevitably, a host of Google’s critics and competitors has cried foul. In fact, as Google has adapted and evolved from its original template to offer users not only links to URLs but also maps, flight information, product pages, videos and now social media inputs, it has met with a curious resistance at every turn. And, indeed, judged against a world in which Internet search is limited to “ten blue links,” with actual content – answers to questions – residing outside of Google’s purview, it has significantly expanded its reach and brought itself (and its large user base) into direct competition with a host of new entities.

But the worldview that judges these adaptations as unwarranted extensions of Google’s platform from its initial baseline, itself merely a function of the relatively limited technology and nascent consumer demand present at the firm’s inception, is dangerously crabbed. By challenging Google’s evolution as “leveraging its dominance” into new and distinct markets, rather than celebrating its efforts (and those of Apple, Bing and Facebook, for that matter) to offer richer, more-responsive and varied forms of information, this view denies the essential reality of technological evolution and exalts outdated technology and outmoded business practices.

And while Google’s forays into the protected realms of others’ business models grab the headlines, it is also feverishly working to adapt its core technology, most recently (and ambitiously) with its “Google Knowledge Graph” project, aimed squarely at transforming the algorithmic guts of its core search function into something more intelligent and refined than its current word-based index permits. In concept, this is no different from its efforts to bootstrap social network data into its current structure: Both are efforts to improve on the mechanical process built on Google’s PageRank technology to offer more relevant search results informed by a better understanding of the mercurial way people actually think.

Expanding consumer welfare requires that Google, like its ever-shifting roster of competitors, must be able to keep up with the pace and the unanticipated twists and turns of innovation. As The Economist recently said, “Kodak was the Google of its day,” and the analogy is decidedly apt. Without the drive or ability to evolve and reinvent itself, its products and its business model, Kodak has fallen to its competitors in the marketplace. Once revered as a powerhouse of technological innovation for most of its history, Kodak now faces bankruptcy because it failed to adapt to its own success. Having invented the digital camera, Kodak radically altered the very definition of its market. But by hewing to its own metaphorical ten blue links – traditional film – instead of understanding that consumer photography had come to mean something dramatically different, Kodak consigned itself to failure.

Like Kodak and every other technology company before it, Google must be willing and able to adapt and evolve; just as for Lewis Carroll’s Red Queen, “here it takes all the running you can do, to keep in the same place.” Neither consumers nor firms are well served by regulatory policy informed by nostalgia. Even more so than Kodak, Google confronts a near-constantly evolving marketplace and fierce competition from unanticipated quarters. If regulators force it to stop running, the market will simply pass it by.

[Cross posted at Forbes]

By Berin Szoka, Geoffrey Manne & Ryan Radia

As has become customary with just about every new product announcement by Google these days, the company’s introduction on Tuesday of its new “Search, plus Your World” (SPYW) program, which aims to incorporate a user’s Google+ content into her organic search results, has met with cries of antitrust foul play. All the usual blustering and speculation in the latest Google antitrust debate has obscured what should, however, be the two key prior questions: (1) Did Google violate the antitrust laws by not including data from Facebook, Twitter and other social networks in its new SPYW program alongside Google+ content; and (2) How might antitrust restrain Google in conditioning participation in this program in the future?

The answer to the first is a clear no. The second is more complicated—but also purely speculative at this point, especially because it’s not even clear Facebook and Twitter really want to be included or what their price and conditions for doing so would be. So in short, it’s hard to see what there is to argue about yet.

Let’s consider both questions in turn.

Should Google Have Included Other Services Prior to SPYW’s Launch?

Google says it’s happy to add non-Google content to SPYW but, as Google Fellow Amit Singhal told Danny Sullivan, a leading search engine journalist:

Facebook and Twitter and other services, basically, their terms of service don’t allow us to crawl them deeply and store things. Google+ is the only [network] that provides such a persistent service. … Of course, going forward, if others were willing to change, we’d look at designing things to see how it would work.

In a follow-up story, Sullivan quotes his interview with Google executive chairman Eric Schmidt about how this would work:

“To start with, we would have a conversation with them,” Schmidt said, about settling any differences.

I replied that with the Google+ suggestions now hitting Google, there was no need to have any discussions or formal deals. Google’s regular crawling, allowed by both Twitter and Facebook, was a form of “automated conversation” giving Google material it could use.

“Anything we do with companies like that, it’s always better to have a conversation,” Schmidt said.

MG Siegler calls this “doublespeak” and seems to think Google violated the antitrust laws by not making SPYW more inclusive right out of the gate. He insists Google didn’t need permission to include public data in SPYW:

Both Twitter and Facebook have data that is available to the public. It’s data that Google crawls. It’s data that Google even has some social context for thanks to older Google Profile features, as Sullivan points out.

It’s not all the data inside the walls of Twitter and Facebook — hence the need for firehose deals. But the data Google can get is more than enough for many of the high level features of Search+ — like the “People and Places” box, for example.

It’s certainly true that if you search Google for “site:twitter.com” or “site:facebook.com,” you’ll get billions of search results from publicly available Facebook and Twitter pages, and that Google already has some friend connection data via social accounts you might have linked to your Google profile (check out this dashboard), as Sullivan notes. But the public data isn’t available in real time, and the private, social connection data is limited and available only for users who link their accounts. For Google to access real-time results and full social connection data would require… you guessed it… permission from Twitter (or Facebook)! As it happens, Twitter and Google had a deal for a “data firehose” so that Google could display tweets in real time under the “personalized search” program for public social information that SPYW builds on top of. But Twitter ended the deal last May for reasons neither company has explained.

At best, therefore, Google might have included public, relatively stale social information from Twitter and Facebook in SPYW—content that is, in any case, already included in basic search results and remains available there. The real question, however, isn’t whether Google could have included this data in SPYW, but whether it needed to. If Google’s engineers and executives decided that the incorporation of this limited data would present an inconsistent user experience or otherwise diminish its uniquely new social search experience, it’s hard to fault the company for deciding to exclude it. Moreover, as an antitrust matter, both the economics and the law of anticompetitive product design are uncertain. In general, as with issues surrounding the vertical integration claims against Google, product design that hurts rivals can (it should be self-evident) be quite beneficial for consumers. Here, it’s difficult to see how the exclusion of non-Google+ social media from SPYW could raise the costs of Google’s rivals, result in anticompetitive foreclosure, retard rivals’ incentives for innovation, or otherwise result in anticompetitive effects (as required to establish an antitrust claim).

Further, it’s easy to see why Google’s lawyers would prefer express permission from competitors before using their content in this way. After all, Google was denounced last year for “scraping” a different type of social content, user reviews, most notably by Yelp’s CEO at the contentious Senate antitrust hearing in September. Perhaps one could distinguish that situation from this one, but it’s not obvious where to draw the line between content Google has a duty to include without “making excuses” about needing permission and content Google has a duty not to include without express permission. Indeed, this seems like a case of “damned if you do, damned if you don’t.” It seems only natural for Google to be gun-shy about “scraping” other services’ public content for use in its latest search innovation without at least first conducting, as Eric Schmidt puts it, a “conversation.”

And as we noted, integrating non-public content would require not just permission but active coordination about implementation. SPYW displays Google+ content only to users who are logged into their Google+ account. Similarly, to display content shared with a user’s friends (but not the world) on Facebook, or protected tweets, Google would need a feed of that private data and a way of logging the user into his or her account on those sites.

Now, if Twitter truly wants Google to feature tweets in Google’s personalized search results, why did Twitter end its agreement with Google last year? Google responded to Twitter’s criticism of its SPYW launch last night with a short Google+ statement:

We are a bit surprised by Twitter’s comments about Search plus Your World, because they chose not to renew their agreement with us last summer, and since then we have observed their rel=nofollow instructions [by removing Twitter content results from “personalized search” results].
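The rel=nofollow instruction Google mentions is a standard HTML link annotation by which a publisher asks crawlers not to follow (or credit) a link. As a minimal sketch of what honoring it means mechanically (hypothetical code, not Google’s actual crawler):

```python
from html.parser import HTMLParser

class NofollowAwareLinkExtractor(HTMLParser):
    """Collects outbound links, skipping any marked rel="nofollow"."""

    def __init__(self):
        super().__init__()
        self.followable = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        rel = (attrs.get("rel") or "").lower().split()
        if "nofollow" in rel:
            return  # the publisher has asked crawlers not to follow this link
        if attrs.get("href"):
            self.followable.append(attrs["href"])

# Toy page: one ordinary link, one link the publisher marked nofollow.
parser = NofollowAwareLinkExtractor()
parser.feed('<a href="https://example.com/a">ok</a>'
            '<a rel="nofollow" href="https://example.com/b">skip</a>')
print(parser.followable)  # ['https://example.com/a']
```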

Perhaps Twitter simply got a better deal: Microsoft may have paid Twitter $30 million last year for a similar deal allowing Bing users to receive Twitter results. If Twitter really is playing hardball, Google is not guilty of discriminating against Facebook and Twitter in favor of its own social platform. Rather, it’s simply unwilling to pony up the cash that Facebook and Twitter are demanding—and there’s nothing illegal about that.

Indeed, the issue may go beyond a simple pricing dispute. If you were CEO of Twitter or Facebook, would you really think it was a net-win if your users could use Google search as an interface for your site? After all, these social networking sites are in an intense war for eyeballs: the more time users spend on Google, the more ads Google can sell, to the detriment of Facebook or Twitter. Facebook probably sees itself increasingly in direct competition with Google as a tool for finding information. Its social network has vastly more users than Google+ (800 million vs. 62 million, with an even larger lead in active users), and, in most respects, more social functionality. The one area where Facebook lags is search functionality. Would Facebook really want to let Google become the tool for searching social networks—one social search engine “to rule them all”? Or would Facebook prefer to continue developing “social search” in partnership with Bing? On Bing, it can control how its content appears—and Facebook sees Microsoft as a partner, not a rival (at least until it can build its own search functionality inside the web’s hottest property).

Adding to this dynamic, and perhaps ultimately fueling some of the fire against SPYW, is the fact that many Google+ users seem to be multi-homing, using both Facebook and Google+ (and other social networks) at the same time, and even using various aggregators and syncing tools (Start Google+, for example) to unify social media streams and share content among them. Before SPYW, this might have seemed like a boon to Facebook, staunching any potential flow of defectors from its network onto Google+ by keeping them engaged with both, with a kind of “Facebook primacy” ensuring continued eyeball time on its site. But Facebook might see SPYW as a threat to this primacy—in effect, reversing users’ primary “home” as they effectively import their Facebook data into SPYW via their Google+ accounts (such as through Start Google+). If SPYW can effectively facilitate indirect Google searching of private Facebook content, the fears we suggest above may be realized, and more users may forgo visiting Facebook.com (and seeing its advertisers), accessing much of their Facebook content elsewhere—where Facebook cannot monetize their attention.

Amidst all the antitrust hand-wringing over SPYW and Google’s decision to “go it alone” for now, it’s worth noting that Facebook has remained silent. Even Twitter has said little more than a tweet’s worth about the issue. It’s simply not clear that Google’s rivals would even want to participate in SPYW. This could still be bad for consumers, but in that case, the source of the harm, if any, wouldn’t be Google. If this all sounds speculative, it is—and that’s precisely the point. No one really knows. So, again, what’s to argue about on Day 3 of the new social search paradigm?

The Debate to Come: Conditioning Access to SPYW

While Twitter and Facebook may well prefer that Google not index their content on SPYW—at least, not unless Google is willing to pay up—suppose the social networking firms took Google up on its offer to have a “conversation” about greater cooperation. Google hasn’t made clear on what terms it would include content from other social media platforms. So it’s at least conceivable that, when pressed to make good on its lofty-but-vague offer to include other platforms, Google might insist on unacceptable terms. In principle, there are essentially three possibilities here:

  1. Antitrust law requires nothing because there are pro-consumer benefits for Google to make SPYW exclusive and no clear harm to competition (as distinct from harm to competitors) for doing so, as our colleague Josh Wright argues.
  2. Antitrust law requires Google to grant competitors access to SPYW on commercially reasonable terms.
  3. Antitrust law requires Google to grant such access on terms dictated by its competitors, even if unreasonable to Google.

Door #3 is a legal non-starter. In Aspen Skiing v. Aspen Highlands (1985), the Supreme Court came the closest it has ever come to endorsing the “essential facilities” doctrine, under which a firm controlling an essential facility can have a duty to offer it to competitors. But in Verizon Communications v. Trinko (2004), the Court made clear that even Aspen Skiing is “at or near the outer boundary of § 2 liability.” Part of the basis for the decision in Aspen Skiing was the existence of a prior, profitable relationship between the “essential facility” in question and the competitor seeking access. Although the assumption is neither warranted nor sufficient (circumstances change, of course, and merely “profitable” is not the same thing as “best available use of a resource”), the Court in Aspen Skiing seems to have been swayed by the view that the access in question was otherwise profitable for the company that was denying it. Trinko limited the reach of the doctrine to the extraordinary circumstances of Aspen Skiing, and thus, as the Court affirmed in Pacific Bell v. LinkLine (2009), it seems there is no antitrust duty for a firm to offer access to a competitor on commercially unreasonable terms (as Geoff Manne discusses at greater length in his chapter on search bias in TechFreedom’s free ebook, The Next Digital Decade).

So Google either has no duty to deal at all, or a duty to deal only on reasonable terms. But what would a competitor have to show to establish such a duty? And how would “reasonableness” be defined?

First, this issue parallels claims made more generally about Google’s supposed “search bias.” As Josh Wright has said about those claims, “[p]roperly articulated vertical foreclosure theories proffer both that bias is (1) sufficient in magnitude to exclude Google’s rivals from achieving efficient scale, and (2) actually directed at Google’s rivals.” Supposing (for the moment) that the second point could be established, it’s hard to see how Facebook or Twitter could really show that being excluded from SPYW—while still having their available content show up as it always has in Google’s “organic” search results—would actually “render their efforts to compete for distribution uneconomical,” which, as Josh explains, antitrust law would require them to show. Google+ is a tiny service compared to Google or Facebook. And even Google itself, for all the awe and loathing it inspires, lags in the critical metric of user engagement, keeping the average user on site for only a quarter as much time as Facebook.

Moreover, by these same measures, it’s clear that Facebook and Twitter don’t need access to Google search results at all, much less its relatively trivial SPYW results, in order to find, and be found by, users; it’s difficult to know from what even vaguely relevant market they could possibly be foreclosed by their absence from SPYW results. Does SPYW potentially help Google+, to Facebook’s detriment? Yes. Just as Facebook’s deal with Microsoft hurts Google. But this is called competition. The world would be a desolate place if antitrust laws effectively prohibited firms from making decisions that helped themselves at their competitors’ expense.

After all, no one seems to be suggesting that Microsoft should be forced to include Google+ results in Bing—and rightly so. Microsoft’s exclusive partnership with Facebook is an important example of how a market leader in one area (Facebook in social) can help a market laggard in another (Microsoft in search) compete more effectively with a common rival (Google). In other words, banning exclusive deals can actually make it more difficult to unseat an incumbent (like Google), especially where the technologies involved are constantly evolving, as here.

Antitrust meddling in such arrangements, particularly in high-risk, dynamic markets where large up-front investments are frequently required (and lost), risks deterring innovation and reducing the very dynamism from which consumers reap such incredible rewards. “Reasonable” is a dangerously slippery concept in such markets, and a recipe for costly errors by the courts asked to define the concept. We suspect that disputes arising out of these sorts of deals will largely boil down to skirmishes over pricing, financing and marketing—the essential dilemma of new media services whose business models are as much the object of innovation as their technologies. Turning these, by little more than innuendo, into nefarious anticompetitive schemes is extremely—and unnecessarily—risky.

Last week I linked to my new study on “search bias.”  At the time I noted I would have a few blog posts in the coming days discussing the study.  This is the first of those posts.

A lot of the frenzy around Google turns on “search bias,” that is, instances when Google references its own links or its own content (such as Google Maps or YouTube) in its search results pages.  Some search engine critics condemn such references as inherently suspect and almost by their very nature harmful to consumers.  Yet these allegations suffer from several crucial shortcomings.  As I’ve noted (see, e.g., here and here), these naked assertions of discrimination are insufficient to state a cognizable antitrust claim, divorced as they are from consumer welfare analysis.  Indeed, such “discrimination” (some would call it “vertical integration”) has a well-recognized propensity to yield either pro-competitive or competitively neutral outcomes, rather than concrete consumer welfare losses.  Moreover, because search engines exist in an incredibly dynamic environment, marked by constant innovation and fierce competition, we would expect different engines, utilizing different algorithms and appealing to different consumer preferences, to emerge.  So when search engines engage in product differentiation of this sort, there is no reason to be immediately suspicious of these business decisions.

No reason to be immediately suspicious – but there could, conceivably, be a problem.  If there is, we would want to see empirical evidence of it—of both the existence of bias, as well as the consumer harm emanating from it.  But one of the most notable features of this debate is the striking lack of empirical data.  Surprisingly little research has been done in this area, despite frequent assertions that own-content bias is commonly practiced and poses a significant threat to consumers (see, e.g., here).

My paper is an attempt to rectify this.  In the paper, I investigate the available data to determine whether and to what extent own-content bias actually occurs, by analyzing and replicating a study by Ben Edelman and Ben Lockwood (E&L) and conducting my own study of a larger, randomized set of search queries.

In this post I discuss my analysis and critique of E&L; in future posts I’ll present my own replication of their study, as well as the results of my larger study of 1,000 random search queries.  Finally, I’ll analyze whether any of these findings support anticompetitive foreclosure theories or are otherwise sufficient to warrant antitrust intervention.

E&L “investigate . . . [w]hether search engines’ algorithmic results favor their own services, and if so, which search engines do most, to what extent, and in what substantive areas.”  Their approach is to measure the difference in how frequently search engines refer to their own content relative to how often their rivals do so.
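Reduced to its essentials, that measurement is a frequency comparison across engines. A schematic sketch of the idea (with made-up domains, affiliations and queries purely for illustration; this is not E&L’s actual code or data):

```python
# Hypothetical top-3 organic results per (engine, query); all domains illustrative.
results = {
    ("google", "email"): ["mail.google.com", "mail.yahoo.com", "en.wikipedia.org"],
    ("bing",   "email"): ["mail.yahoo.com", "mail.google.com", "outlook.com"],
    ("yahoo",  "email"): ["mail.yahoo.com", "mail.google.com", "aol.com"],
    ("google", "maps"):  ["maps.google.com", "mapquest.com", "bing.com/maps"],
    ("bing",   "maps"):  ["bing.com/maps", "maps.google.com", "mapquest.com"],
    ("yahoo",  "maps"):  ["maps.google.com", "mapquest.com", "bing.com/maps"],
}

# Which content domains are affiliated with which engine (simplified).
affiliation = {
    "google": {"mail.google.com", "maps.google.com", "youtube.com"},
    "bing":   {"outlook.com", "bing.com/maps"},
    "yahoo":  {"mail.yahoo.com", "news.yahoo.com"},
}

def referral_rate(engine, content_owner, position=0):
    """Share of queries for which `engine` lists `content_owner`-affiliated
    content at `position` (0 = the 'top' organic slot E&L emphasize)."""
    queries = [q for (e, q) in results if e == engine]
    hits = sum(results[(engine, q)][position] in affiliation[content_owner]
               for q in queries)
    return hits / len(queries)

# E&L-style comparison: how often Google puts Google content on top,
# versus how often rival engines put Google content on top.
print(referral_rate("google", "google"))  # own-content rate: 1.0 here
print(referral_rate("bing", "google"))    # rival's rate:     0.0 here
print(referral_rate("yahoo", "google"))   # rival's rate:     0.5 here
```

The antitrust-relevant question, as discussed below, is not whether such rates differ across engines but whether any difference actually harms consumers.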

One note at the outset: While this approach provides useful descriptive facts about the differences between how search engines link to their own content, it does little to inform antitrust analysis, because Edelman and Lockwood begin from the rather odd premise that variation among differentiated search engines is itself suspicious—in fact, they claim that “it is hard to see why results would vary . . . across search engines.” This assertion, of course, is simply absurd. Indeed, Danny Sullivan provides a nice critique of this claim:

It’s not hard to see why search engine results differ at all. Search engines each use their own “algorithm” to cull through the pages they’ve collected from across the web, to decide which pages to rank first . . . . Google has a different algorithm than Bing. In short, Google will have a different opinion than Bing. Opinions in the search world, as with the real world, don’t always agree.

Moreover, this assertion completely discounts both the vigorous competitive product differentiation that occurs in nearly all modern product markets as well as the obvious selection effects at work in own-content bias (Google users likely prefer Google content).  This combination detaches E&L’s analysis from the consumer welfare perspective, and thus antitrust policy relevance, despite their claims to the contrary (and the fact that their results actually exhibit very little bias).

Several methodological issues undermine the policy relevance of E&L’s analysis. First, they hand-select 32 search queries and execute searches on Google, Bing, Yahoo, AOL and Ask. This non-random sample of 32 queries cannot generate reliable inferences regarding the frequency of bias—a critical ingredient to understanding its potential competitive effects. Indeed, E&L acknowledge their queries are chosen precisely because they are likely to return results including Google content (e.g., email, images, maps, video, etc.).

E&L analyze the top three organic search results for each query on each engine.  They find that 19% of all results across all five search engines refer to content affiliated with one of them.  They focus upon the first three organic results and report that Google refers to its own content in the first (“top”) position about twice as often as Yahoo and Bing refer to Google content in this position.  Additionally, they note that Yahoo is more biased than Google when evaluating the first page rather than only the first organic search result.

E&L also offer a strained attempt to deal with the possibility of competitive product differentiation among search engines.  They examine differences among search engines’ references to their own content by “compar[ing] the frequency with which a search engine links to its own pages, relative to the frequency with which other search engines link to that search engine’s pages.”  However, their evidence undermines claims that Google’s own-content bias is significant and systematic relative to its rivals’.  In fact, almost zero evidence of statistically significant own-content bias by Google emerges.

E&L find, in general, Google is no more likely to refer to its own content than other search engines are to refer to that same content, and across the vast majority of their results, E&L find Google search results are not statistically more likely to refer to Google content than rivals’ search results.

The same data can be examined to test the likelihood that a search engine will refer to content affiliated with a rival search engine.  Rather than exhibiting bias in favor of an engine’s own content, a “biased” search engine might conceivably be less likely to refer to content affiliated with its rivals.  The table below reports the likelihood (in odds ratios) that a search engine’s content appears in a rival engine’s results.
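For reference, an odds ratio here can be read roughly as follows (a generic formulation for illustration; E&L’s precise estimation procedure may differ):

```latex
\mathrm{OR}_i \;=\; \frac{p_{\mathrm{rival}} \,/\, (1 - p_{\mathrm{rival}})}
                         {p_{\mathrm{own}} \,/\, (1 - p_{\mathrm{own}})}
```

where p_rival is the frequency with which engine i’s content appears in rivals’ results and p_own is the corresponding frequency in engine i’s own results. A ratio below 1 means the content appears less often on rivals’ pages than at home, and the relevant statistical test is whether the ratio differs significantly from 1.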

The first two columns of the table demonstrate that both Google and Yahoo content are referred to in the first search result less frequently in rivals’ search results than in their own.  Although Bing does not have enough data for robust analysis of results in the first position in E&L’s original analysis, the next three columns in Table 1 illustrate that all three engines’ (Google, Yahoo, and Bing) content appears less often on the first page of rivals’ search results than on their own search engine.  However, only Yahoo’s results differ significantly from 1.  As between Google and Bing, the results are notably similar.

E&L also make a limited attempt to consider the possibility that favorable placement of a search engine’s own content is a response to user preferences rather than anticompetitive motives.  Using click-through data, they find, unsurprisingly, that the first search result tends to receive the most clicks (72%, on average).  They then identify one search term for which they believe bias plays an important role in driving user traffic.  For the search query “email,” Google ranks its own Gmail first and Yahoo Mail second; however, E&L also find that Gmail receives only 29% of clicks while Yahoo Mail receives 54%.  E&L claim that this finding strongly indicates that Google is engaging in conduct that harms users and undermines their search experience.

However, from a competition analysis perspective, that inference is not sound.  Indeed, the fact that the second-listed Yahoo Mail link received the majority of clicks demonstrates precisely that Yahoo was not competitively foreclosed from access to users.  Taken collectively, E&L are not able to muster evidence of potential competitive foreclosure.

While it’s important to have an evidence-based discussion surrounding search engine results and their competitive implications, it’s also critical to recognize that bias alone is not evidence of competitive harm.  Indeed, any identified bias must be evaluated in the appropriate antitrust economic context of competition and consumers, rather than individual competitors and websites.  E&L’s analysis provides a useful starting point for describing how search engines differ in their referrals to their own content.  But, taken at face value, their results actually demonstrate little or no evidence of bias—let alone that the little bias they do find is causing any consumer harm.

As I’ll discuss in coming posts, evidence gathered since E&L conducted their study further suggests their claims that bias is prevalent, inherently harmful, and sufficient to warrant antitrust intervention are overstated and misguided.

I did not intend for this to become a series (Part I), but I underestimated the supply of analysis simultaneously invoking “search bias” as an antitrust concept while waving it about untethered from antitrust’s institutional commitment to protecting consumer welfare.  Harvard Business School Professor Ben Edelman offers the latest iteration in this genre.  We’ve criticized his claims regarding search bias and antitrust on precisely these grounds.

For those who have not been following the Google antitrust saga, Google’s critics allege Google’s algorithmic search results “favor” its own services and products over those of rivals in some indefinite, often unspecified, improper manner. In particular, Professor Edelman and others — including Google’s business rivals — have argued that Google’s “bias” discriminates most harshly against vertical search engine rivals, i.e., rivals offering specialized search services. In framing the theory that “search bias” can be a form of anticompetitive exclusion, Edelman writes:

Search bias is a mechanism whereby Google can leverage its dominance in search, in order to achieve dominance in other sectors.  So for example, if Google wants to be dominant in restaurant reviews, Google can adjust search results, so whenever you search for restaurants, you get a Google reviews page, instead of a Chowhound or Yelp page. That’s good for Google, but it might not be in users’ best interests, particularly if the other services have better information, since they’ve specialized in exactly this area and have been doing it for years.

I’ve wondered what model of antitrust-relevant conduct Professor Edelman, an economist, has in mind. It is certainly well known in both the theoretical and empirical antitrust economics literature that “bias” is neither necessary nor sufficient for a theory of consumer harm; further, it is fairly obvious as a matter of economics that vertical integration can be, and typically is, both efficient and pro-consumer. Still further, the bulk of economic theory and evidence on these contracts suggests that they are generally efficient and a normal part of the competitive process, generating consumer benefits. Vertically integrated firms may “bias” their own content in ways that increase output; the relevant point is that self-promoting incentives in a vertical relationship can be either efficient or anticompetitive depending on the circumstances of the situation. The empirical literature suggests that such relationships are mostly pro-competitive and that restrictions upon firms’ ability to enter them generally reduce consumer welfare. Edelman is an economist, with a Ph.D. from Harvard no less, and so I find it a bit odd that he has framed the “bias” debate outside of this framework, without regard to consumer welfare, and without reference to any of this literature, or perhaps even an awareness of it. Edelman’s approach appears to be a declaration that a search engine’s placement of its own content, algorithmically or otherwise, constitutes an antitrust harm because it may harm rivals — regardless of the consequences for consumers. Antitrust observers might parallel this view to the antiquated “harm to competitors is harm to competition” approach of antitrust dating back to the 1960s and prior. These parallels would be accurate. Edelman’s view is flatly inconsistent with conventional theories of anticompetitive exclusion presently enforced in modern competition agencies and antitrust courts.

But does Edelman present anything more than just a pre-New Learning-era bias against vertical integration?  I’m beginning to have my doubts.  In an interview in Politico (login required), Professor Edelman offers two quotes that illuminate the search-bias antitrust theory — unfavorably.  Professor Edelman begins with what he describes as a “simple” solution to the search bias problem:

I don’t think it’s out of the question given the complexity of what Google has built and its persistence in entering adjacent, ancillary markets. A much simpler approach, if you like things that are simple, would be to disallow Google from entering these adjacent markets. OK, you want to be dominant in search? Stay out of the vertical business, stay out of content.

The problems here should be obvious. Yes, a per se prohibition on vertical integration by Google into other economic activities would be quite simple; simple and thoroughly destructive. The mildly more interesting inquiry is what Edelman proposes Google ought to provide. May Google, under Edelman’s view of a proper regulatory regime, answer address queries by providing a map? May Google answer product queries with shopping results? Is the answer to those questions “yes” if and only if Google serves up someone else’s shopping results or map? What if consumers prefer Google’s shopping result or map because it is more responsive to the query? Note once again that Edelman’s answers do not turn on consumer welfare. His answers are a function of the anticipated impact of Google’s choices to engage in those activities upon rival vertical search engines. Consumer welfare is not the center of Edelman’s analysis; indeed, it is unclear what role consumer welfare plays in Edelman’s analysis at all. Edelman simply applies his prior presumption that Google’s conduct, even if it produces real gains for consumers, is or should be actionable as an antitrust claim upon a demonstration that Google’s own services are ranked highly on its own search engine — even if Google-affiliated content is ranked highly by other search engines! (See Danny Sullivan making that point nicely in this post.) Edelman’s proscription ignores the efficiencies of vertical integration and the benefits to consumers entirely. It may be possible to articulate a coherent anticompetitive theory involving so-called search bias that could then be tested against real-world evidence. Edelman has not.

Professor Edelman’s other quotation from the profile of the “academic wunderkind” that drew my attention was the following answer in response to the question “which search engine do you use?”  After explaining that he probably uses Google and Bing in proportion to their market shares, Professor Edelman is quoted as saying:

If your house is on fire and you forgot the number for the fire department, I’d encourage you to use Google. When it counts, if Google is one percent better for one percent of searches and both options are free, you’d be crazy not to use it. But if everyone makes that decision, we head towards a monopoly and all the problems experience reveals when a company controls too much.

By my lights, there is no clearer example of the sacrifice of consumer welfare in Edelman’s approach to analyzing whether and how search engines and their results should be regulated.  Note the core of Professor Edelman’s position: if Google offers a superior product favored by all consumers, and if Google gains substantial market share because of this success as determined by consumers, we are collectively headed for serious problems redressable by regulation.  In these circumstances, given the (1) lack of consumer lock-in for search engine use, (2) the overwhelming evidence that vertical integration is generally pro-competitive, and (3) the fact that consumers are generally enjoying the use of free services — one might think that any consumer-minded regulatory approach would carefully attempt to identify and distinguish potentially anticompetitive conduct so as to minimize the burden to consumers from inevitable false positives.  With credit to antitrust and its hard-earned economic discipline, this is the approach suggested by modern antitrust doctrine.  U.S. antitrust law requires a demonstration that consumers will be harmed by a challenged practice — not merely rivals.  It is odd and troubling when an economist abandons the consumer welfare approach; it is yet more peculiar that an economist not only abandons the consumer welfare lodestar but also argues for (or at least presents an unequivocal willingness to accept) an ex ante prohibition on vertical integration altogether in this space.

I’ve no doubt that there are more sophisticated theories of which creative antitrust economists can conceive that come closer to satisfying the requirements of modern antitrust economics by focusing upon consumer welfare. Certainly, the economists who identify those theories will have their shot at convincing the FTC. Indeed, Section 5 might even open the door to theories ever so slightly more creative and open-ended than those that would be taken seriously in a Sherman Act inquiry. However, antitrust economists can and should remain intensely focused upon the impact of the conduct at issue — in this case, prominent algorithmic placement of Google’s own affiliated content in its rankings — on consumer welfare. Professor Edelman’s views hark back to the infamous days of antitrust that cast a pall over any business practice unpleasant for rivals — even if the practice delivered what consumers wanted. Edelman’s theory is an offer to jeopardize consumers and protect rivals, and to brush the dust off antiquated antitrust theories and standards and apply them to today’s innovative online markets. Modern antitrust has come a long way in its thinking over the past 50 years — too far to accept these competitor-centric theories of harm.

Search Bias and Antitrust

Josh Wright —  24 March 2011

There is an antitrust debate brewing concerning Google and “search bias,” a term used to describe search engine results that preference the content of the search provider.  For example, Google might list Google Maps prominently if one searches “maps” or Microsoft’s Bing might prominently place Microsoft affiliated content or products.

Apparently both antitrust investigations and Congressional hearings are in the works; regulators and commentators appear poised to attempt to impose “search neutrality” through antitrust or other regulatory means to limit or prohibit the ability of search engines (or perhaps just Google) to favor their own content.  At least one proposal goes so far as to advocate a new government agency to regulate search.  Of course, when I read proposals like this, I wonder where Google’s share of the “search market” will be by the time the new agency is built.

As with the net neutrality debate, I understand some of the push for search neutrality involves an intense effort to discard the traditional, economically grounded antitrust framework. The logic for this effort is simple. The economic literature on vertical restraints and vertical integration provides no support for ex ante regulation arising out of the concern that a vertically integrating firm will harm competition through favoring its own content and discriminating against rivals. Economic theory suggests that such arrangements may be anticompetitive in some instances, but also provides a plethora of pro-competitive explanations. Lafontaine & Slade explain the state of the evidence in their recent survey paper in the Journal of Economic Literature:

We are therefore somewhat surprised at what the weight of the evidence is telling us. It says that, under most circumstances, profit-maximizing vertical-integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view. Although there are isolated studies that contradict this claim, the vast majority support it. Moreover, even in industries that are highly concentrated so that horizontal considerations assume substantial importance, the net effect of vertical integration appears to be positive in many instances. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked. Furthermore, we have found clear evidence that restrictions on vertical integration that are imposed, often by local authorities, on owners of retail networks are usually detrimental to consumers. Given the weight of the evidence, it behooves government agencies to reconsider the validity of such restrictions.

Of course, this does not bless all instances of vertical contracts or integration as pro-competitive.  The antitrust approach appropriately eschews ex ante regulation in favor of a fact-specific rule of reason analysis that requires plaintiffs to demonstrate competitive harm in a particular instance. Again, given the strength of the empirical evidence, it is no surprise that advocates of search neutrality, as net neutrality before it, either do not rely on consumer welfare arguments or are willing to sacrifice consumer welfare for other objectives.

I wish to focus on the antitrust arguments for a moment.  In an interview with the San Francisco Gate, Harvard’s Ben Edelman sketches out an antitrust claim against Google based upon search bias; and to his credit, Edelman provides some evidence in support of his claim.

I’m not convinced.  Edelman’s interpretation of evidence of search bias is detached from antitrust economics.  The evidence is all about identifying whether or not there is bias.  That, however, is not the relevant antitrust inquiry; instead, the question is whether such vertical arrangements, including preferential treatment of one’s own downstream products, are generally procompetitive or anticompetitive.  Examples from other contexts illustrate this point.


One of my favorite stories in the ongoing saga over the regulation (and thus the future) of Internet search emerged earlier this week with claims by Google that Microsoft has been copying its answers–using Google search results to bolster the relevance of its own results for certain search terms.  The full story from Internet search journalist extraordinaire, Danny Sullivan, is here, with a follow up discussing Microsoft’s response here.  The New York Times is also on the case with some interesting comments from a former Googler that feed nicely into the Schumpeterian competition angle (discussed below).  And Microsoft consultant (“though on matters unrelated to issues discussed here”)  and Harvard Business prof Ben Edelman coincidentally echoes precisely Microsoft’s response in a blog post here.

What I find so great about this story is how it seems to resolve one of the most significant strands of the ongoing debate–although it does so, from Microsoft’s point of view, unintentionally, to be sure.

Here’s what I mean.  Back when Microsoft first started being publicly identified as a significant instigator of regulatory and antitrust attention paid to Google, the company, via its chief competition counsel, Dave Heiner, defended its stance in large part on the following ground:

All of this is quite important because search is so central to how people navigate the Internet, and because advertising is the main monetization mechanism for a wide range of Web sites and Web services. Both search and online advertising are increasingly controlled by a single firm, Google. That can be a problem because Google’s business is helped along by significant network effects (just like the PC operating system business). Search engine algorithms “learn” by observing how users interact with search results. Google’s algorithms learn less common search terms better than others because many more people are conducting searches on these terms on Google.

These and other network effects make it hard for competing search engines to catch up. Microsoft’s well-received Bing search engine is addressing this challenge by offering innovations in areas that are less dependent on volume. But Bing needs to gain volume too, in order to increase the relevance of search results for less common search terms. That is why Microsoft and Yahoo! are combining their search volumes. And that is why we are concerned about Google business practices that tend to lock in publishers and advertisers and make it harder for Microsoft to gain search volume. (emphasis added).

Claims of “network effects,” “increasing returns to scale,” and the absence of “minimum viable scale” for competitors run rampant (and unsupported) in the various cases against Google. The TradeComet complaint, for example, claims that

[t]he primary barrier to entry facing vertical search websites is the inability to draw enough search traffic to reach the critical mass necessary to become independently sustainable.

But now we discover (what we should have known all along) that “learning by doing” is not the only way to obtain the data necessary to generate relevant search results: “Learning by copying” works, as well.  And there’s nothing wrong with it–in fact, the very process of Schumpeterian creative destruction assumes imitation.

As Armen Alchian notes in describing his evolutionary process of competition,

Neither perfect knowledge of the past nor complete awareness of the current state of the arts gives sufficient foresight to indicate profitable action . . . [and] the pervasive effects of uncertainty prevent the ascertainment of actions which are supposed to be optimal in achieving profits.  Now the consequence of this is that modes of behavior replace optimum equilibrium conditions as guiding rules of action. First, wherever successful enterprises are observed, the elements common to these observable successes will be associated with success and copied by others in their pursuit of profits or success. “Nothing succeeds like success.”

So on the one hand, I find the hand-wringing about Microsoft’s “copying” of Google’s results to be completely misplaced–just as the pejorative connotations of “embrace and extend” deployed against Microsoft itself, when it was the target of this sort of scrutiny, were bogus. But, at the same time, I see this dynamic essentially decimating Microsoft’s (and others’) claims that Google has an unassailable position because no competitor can ever hope to match its size, and thus its access to the information essential to the quality of search results, particularly when it comes to so-called “long-tail” search terms.

Long-tail search terms are queries that are extremely rare and, thus, for which there is little user history (information about which results searchers found relevant and clicked on) to guide future search results.  As Ben Edelman writes in his blog post (linked above) on this issue (trotting out, even while implicitly undercutting, the “minimum viable scale” canard):

Of course the reality is that Google’s high market share means Google gets far more searches than any other search engine. And Google’s popularity gives it a real advantage: For an obscure search term that gets 100 searches per month at Google, Bing might get just five or 10. Also, for more popular terms, Google can slice its data into smaller groups — which results are most useful to people from Boston versus New York, which results are best during the day versus at night, and so forth. So Google is far better equipped to figure out what results users favor and to tailor its listings accordingly. Meanwhile, Microsoft needs additional data, such as Toolbar and Related Sites data, to attempt to improve its results in a similar way.

But of course the “additional data” that Microsoft has access to here is, to a large extent, the same data that Google has. Danny Sullivan’s follow-up story (also linked above) suggests that Bing doesn’t do all it could to make use of Google’s data: it seems Bing neither copies Google search results wholesale nor uses user behavior as extensively as it could (for example, by seeing searches on Google and then logging the next page visited, which would give Bing a pretty good idea which sites in Google’s results users found most relevant). But this doesn’t change the fundamental fact that Microsoft and other search engines can overcome a significant amount of the so-called barrier to entry afforded by Google’s impressive scale by simply imitating much of what Google does (and, one hopes, also innovating enough to offer something better).
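To make that parenthetical concrete, here is a schematic sketch of how toolbar-style clickstream data could be aggregated into relevance signals (hypothetical event data and field names; not a description of Bing’s actual pipeline):

```python
from collections import Counter, defaultdict

# Hypothetical toolbar events: (Google results-page URL seen, next page visited).
clickstream = [
    ("https://google.com/search?q=flights", "kayak.com"),
    ("https://google.com/search?q=flights", "expedia.com"),
    ("https://google.com/search?q=flights", "kayak.com"),
    ("https://google.com/search?q=email",   "mail.yahoo.com"),
]

def infer_relevance(events):
    """Tally which destination users visit right after a Google results page,
    a rough proxy for which listed result they found most relevant."""
    by_query = defaultdict(Counter)
    for serp_url, next_page in events:
        query = serp_url.split("q=", 1)[1]  # crude query extraction
        by_query[query][next_page] += 1
    return by_query

signals = infer_relevance(clickstream)
print(signals["flights"].most_common(1))  # [('kayak.com', 2)]
```

The point is only that the raw material for “learning by copying” is ordinary observational data, not anything uniquely available to the incumbent.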

Perhaps Google is “better equipped to figure out what users favor.” But it seems to me that only a trivial amount of this advantage is plausibly attributable to Google’s scale rather than to its engineering and innovation. The fact that Microsoft can (because of its own impressive scale in various markets) and does take advantage of accessible data to benefit indirectly from Google’s own prowess in search is a testament to the irrelevance of these unfortunately pervasive scale and network-effect arguments.