Archives For Internet search

The suit against Google was to be this century’s first major antitrust case and a model for high-technology industries in the future. Now that the investigative hangover has passed, the mood has turned reflective, and antitrust experts are looking to place the case in its proper context. Had it been brought, would the case have been on sure legal footing? Was the outcome a prudent one for consumers? Was the FTC’s disposition of the case appropriate?

Join me this Friday, January 11, 2013 at 12:00 pm – 1:45 pm ET for an ABA Antitrust Section webinar to explore these questions, among others. I will be sharing the panel with an impressive group:

Hill B. Welford will moderate. Registration is open to everyone here, and there is no charge. Remember — these events are never technically free, because you have to give up some of your time, but I would be delighted if you spent yours with us.

The Federal Trade Commission yesterday closed its investigation of Google’s search business (see my comment here) without taking action. The FTC did, however, enter into a settlement with Google over the licensing of Motorola Mobility’s standards-essential patents (SEPs). The FTC intends that agreement to impose some limits on an area of great complexity and vigorous debate among industry, patent experts and global standards bodies: The allowable process for enforcing FRAND (fair, reasonable and non-discriminatory) licensing of SEPs, particularly the use of injunctions by patent holders to do so. According to Chairman Leibowitz, “[t]oday’s landmark enforcement action will set a template for resolution of SEP licensing disputes across many industries.” That effort may or may not be successful. It also may be misguided.

In general, a FRAND commitment incentivizes innovation by allowing a SEP owner to recoup its investments and the value of its technology through licensing, while, at the same time, promoting competition and avoiding patent holdup by ensuring that licensing agreements are reasonable. When the process works, and patent holders negotiate licensing rights in good faith, patents are licensed, industries advance and consumers benefit.

FRAND terms are inherently indeterminate and flexible—indeed, they often apply precisely in situations where licensors and licensees need flexibility because each licensing circumstance is nuanced and a one-size-fits-all approach isn’t workable. Superimposing process restraints from above isn’t necessarily the best way to handle what amounts to a contract dispute. But few can doubt the benefits of greater clarity in this process; the question is whether the FTC’s particular approach to the problem sacrifices too much in exchange for such clarity.

The crux of the issue in the Google consent decree—and the most controversial aspect of SEP licensing negotiations—is the role of injunctions. The consent decree requires that, before Google sues to enjoin a manufacturer from using its SEPs without a license, the company must follow a prescribed path in licensing negotiations. In particular:

Under this Order, before seeking an injunction on FRAND-encumbered SEPs, Google must: (1) provide a potential licensee with a written offer containing all of the material license terms necessary to license its SEPs, and (2) provide a potential licensee with an offer of binding arbitration to determine the terms of a license that are not agreed upon. Furthermore, if a potential licensee seeks judicial relief for a FRAND determination, Google must not seek an injunction during the pendency of the proceeding, including appeals.

There are a few exceptions, summarized by Commissioner Ohlhausen:

These limitations include when the potential licensee (a) is outside the jurisdiction of the United States; (b) has stated in writing or sworn testimony that it will not license the SEP on any terms [in other words, is not a “willing licensee”]; (c) refuses to enter a license agreement on terms set in a final ruling of a court – which includes any appeals – or binding arbitration; or (d) fails to provide written confirmation to a SEP owner after receipt of a terms letter in the form specified by the Commission. They also include certain instances when a potential licensee has brought its own action seeking injunctive relief on its FRAND-encumbered SEPs.

To the extent that the settlement reinforces what Google (and other licensors) would do anyway, and even to the extent that it imposes nothing more than an obligation to inject a neutral third party into FRAND negotiations to assist the parties in resolving rate disputes, there is little to complain about. Indeed, this is the core of the agreement, and, importantly, it seems to preserve Google’s right to seek injunctions to enforce its patents, subject to the agreement’s process requirements.

Industry participants and standard-setting organizations have supported injunctions, and the seeking and obtaining of injunctions against infringers is not in conflict with SEP patentees’ obligations. Even the FTC, in its public comments, has stated that patent owners should be able to obtain injunctions on SEPs when an infringer has rejected a reasonable license offer. Thus, the long-anticipated announcement by the FTC in the Google case may help to provide some clarity to the future negotiation of SEP licenses, the possible use of binding arbitration, and the conditions under which seeking injunctive relief will be permissible (as an antitrust matter).

Nevertheless, U.S. regulators, including the FTC, have sometimes opined that seeking injunctions on products that infringe SEPs is not in the spirit of FRAND. Everyone seems to agree that more certainty is preferable; the real issue is whether and when injunctions further that aim or not (and whether and when they are anticompetitive).

In October, Renata Hesse, then Acting Assistant Attorney General for the Department of Justice’s Antitrust Division, remarked during a patent roundtable that

[I]t would seem appropriate to limit a patent holder’s right to seek an injunction to situations where the standards implementer is unwilling to have a neutral third-party determine the appropriate F/RAND terms or is unwilling to accept the F/RAND terms approved by such a third-party.

In its own 2011 Report on the “IP Marketplace,” the FTC acknowledged the fluidity and ambiguity surrounding the meaning of “reasonable” licensing terms and the problems of patent enforcement. While noting that injunctions may confer a costly “hold-up” power on licensors that wield them, the FTC nevertheless acknowledged the important role of injunctions in preserving the value of patents and in encouraging efficient private negotiation:

Three characteristics of injunctions that affect innovation support generally granting an injunction. The first and most fundamental is an injunction’s ability to preserve the exclusivity that provides the foundation of the patent system’s incentives to innovate. Second, the credible threat of an injunction deters infringement in the first place. This results from the serious consequences of an injunction for an infringer, including the loss of sunk investment. Third, a predictable injunction threat will promote licensing by the parties. Private contracting is generally preferable to a compulsory licensing regime because the parties will have better information about the appropriate terms of a license than would a court, and more flexibility in fashioning efficient agreements.

* * *

But denying an injunction every time an infringer’s switching costs exceed the economic value of the invention would dramatically undermine the ability of a patent to deter infringement and encourage innovation. For this reason, courts should grant injunctions in the majority of cases.…

Consistent with this view, the European Commission’s Deputy Director-General for Antitrust, Cecilio Madero Villarejo, recently expressed concern that some technology companies that complain of being denied a license on FRAND terms never truly intend to acquire licenses, but rather “want[] to create conditions for a competition case to be brought.”

But with the Google case, the Commission appears to back away from its seeming support for injunctions, claiming that:

Seeking and threatening injunctions against willing licensees of FRAND-encumbered SEPs undermines the integrity and efficiency of the standard-setting process and decreases the incentives to participate in the process and implement published standards. Such conduct reduces the value of standard setting, as firms will be less likely to rely on the standard-setting process.

Reconciling the FTC’s seemingly disparate views turns on the question of what a “willing licensee” is. And while the Google settlement itself may not magnify the problems surrounding the definition of that term, it doesn’t provide any additional clarity, either.

The problem is that, even in its 2011 Report, in which the FTC noted the importance of injunctions, the agency defined a willing licensee as one who would license at a hypothetical, ex ante rate, absent the threat of an injunction and with a different risk profile than an after-the-fact infringer. In other words, the FTC’s definition of a willing licensee assumes a willingness to license only at a rate determined when an injunction is not available, and under the unrealistic assumption that the true value of a SEP can be known ex ante. Not surprisingly, then, the Commission finds it easy to condemn a patentee’s resort to an injunction when the patentee demands a (higher) royalty rate in an actual negotiation, with actual knowledge of a patent’s value and under threat of an injunction.
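To see the economic intuition at work, here is a stylized bit of holdup arithmetic — a minimal sketch in which every figure is invented for illustration (neither the 2011 Report nor the settlement specifies numbers like these):

```python
# Stylized SEP holdup arithmetic. All figures are invented for
# illustration; they come from no report, settlement, or case.

ex_ante_value = 0.50   # $/unit: the patent's value over the next-best
                       # alternative, judged before the standard is set
switching_cost = 5.00  # $/unit: cost of redesigning around the standard
                       # after products implementing it have shipped

# Ex ante, competition from alternatives caps the royalty at the
# patent's incremental value.
ex_ante_royalty_cap = ex_ante_value

# Ex post, under a credible injunction threat, a locked-in implementer
# will rationally pay up to the incremental value plus its switching cost.
ex_post_royalty_cap = ex_ante_value + switching_cost

print(ex_ante_royalty_cap)  # 0.5
print(ex_post_royalty_cap)  # 5.5 -- the gap is the "holdup" premium
```

The FTC’s “willing licensee” benchmark anchors the reasonable rate to the first number. The critique quoted below is that the true ex ante value is unknowable and that the benchmark strips patentees not only of the holdup premium but of any legitimate ex post appreciation in the patent’s value.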

As Richard Epstein, Scott Kieff and Dan Spulber discuss in critiquing the FTC’s 2011 Report:

In short, there is no economic basis to equate a manufacturer that is willing to commit to license terms before the adoption and launch of a standard, with one that instead expropriates patent rights at a later time through infringement. The two bear different risks and the late infringer should not pay the same low royalty as a party that sat down at the bargaining table and may actually have contributed to the value of the patent through its early activities. There is no economically meaningful sense in which any royalty set higher than that which a “willing licensee would have paid” at the pre-standardization moment somehow “overcompensates patentees by awarding more than the economic value of the patent.”

* * *

Even with a RAND commitment, the patent owner retains the valuable right to exclude (not merely receive later compensation from) manufacturers who are unwilling to accept reasonable license terms. Indeed, the right to exclude influences how those terms should be calculated, because it is quite likely that prior licensees in at least some areas will pay less if larger numbers of parties are allowed to use the same technology. Those interactive effects are ignored in the FTC calculations.

With this circular logic, all efforts by patentees to negotiate royalty rates after infringement has occurred can be effectively rendered anticompetitive if the patentee uses an injunction or the threat of an injunction against the infringer to secure its reasonable royalty.

The idea behind FRAND is rather simple (reward inventors; protect competition), but the practice of SEP licensing is much more complicated. Circumstances differ from case to case, and, more importantly, so do the parties’ views on what may constitute an appropriate licensing rate under FRAND. As I have written elsewhere, a single company may have very different views on the meaning of FRAND depending on whether it is the licensor or licensee in a given negotiation—and depending on whether it has already implemented a standard or not. As one court looking at the very SEPs at issue in the Google case has pointed out:

[T]he court is mindful that at the time of an initial offer, it is difficult for the offeror to know what would in fact constitute RAND terms for the offeree. Thus, what may appear to be RAND terms from the offeror’s perspective may be rejected out-of-hand as non-RAND terms by the offeree. Indeed, it would appear that at any point in the negotiation process, the parties may have a genuine disagreement as to what terms and conditions of a license constitute RAND under the parties’ unique circumstances.

The fact that many firms engaged in SEP negotiations are simultaneously and repeatedly both licensors and licensees of patents governed by multiple SSOs further complicates the process—but also helps to ensure that it will reach a conclusion that promotes innovation and ensures that consumers reap the rewards.

In fact, an important issue in assessing the propriety of injunctions is the recognition that, in most cases, firms would rather license their patents and receive royalties than exclude access to their IP and receive no compensation (and incur the costs of protracted litigation, to boot). Importantly, for firms that both license out their own patents and license in those held by other firms (the majority of IT firms and certainly the norm for firms participating in SSOs), continued interactions on both sides of such deals help to ensure that licensing—not withholding—is the norm.

Companies are waging the smartphone patent wars with very different track records on SSO participation. Apple, for example, is relatively new to the mobile communications space and has relatively few SEPs, while other firms, like Samsung, are long-time players in the space with histories of extensive licensing (in both directions). But, current posturing aside, both firms have an incentive to license their patents, as Mark Summerfield notes:

Apple’s best course of action will most likely be to enter into licensing agreements with its competitors, which will not only result in significant revenues, but also push up the prices (or reduce the margins) on competitive products.

While some commentators make it sound as if injunctions threaten to cripple smartphone makers by preventing them from licensing essential technology on viable terms, companies in this space have been perfectly capable of orchestrating large-scale patent licensing campaigns. That these may increase costs to competitors is a feature—not a bug—of the system, representing the return on innovation that patents are intended to secure. Microsoft has wielded its sizeable patent portfolio to drive up the licensing fees paid by Android device manufacturers, and some commentators have even speculated that Microsoft makes more revenue from Android than Google does. But while Microsoft might prefer to kill Android with its patents, that outcome is unlikely, and, as MG Siegler notes,

[T]he next best option is to catch a free ride on the Android train. Patent licensing deals already in place with HTC, General Dynamics, and others could mean revenues of over $1 billion by next year, as Forbes reports. And if they’re able to convince Samsung to sign one as well (which could effectively force every Android partner to sign one), we could be talking multiple billions of dollars of revenue each year.

Hand-wringing about patents is the norm, but so is licensing, and your smartphone exists, despite the thousands of patents that read on it, because the firms that hold those patents—some SEPs and some not—have, in fact, agreed to license them.

An inability to seek an injunction against an infringer, however, would instead ensure that patentees operate with reduced incentives to invest in technology and to enter into standards, because they would be precluded from benefiting from any subsequent increase in the value of their patents once they do so. As Epstein, Kieff and Spulber write:

The simple reality is that before a standard is set, it just is not clear whether a patent might become more or less valuable. Some upward pressure on value may be created later to the extent that the patent is important to a standard that is important to the market. In addition, some downward pressure may be caused by a later RAND commitment or some other factor, such as repeat play. The FTC seems to want to give manufacturers all of the benefits of both of these dynamic effects by in effect giving the manufacturer the free option of picking different focal points for elements of the damages calculations. The patentee is forced to surrender all of the benefit of the upward pressure while the manufacturer is allowed to get all of the benefit of the downward pressure.

Thus the problem with even the limited constraints imposed by the Google settlement: To the extent that the FTC’s settlement amounts to a prohibition on Google seeking injunctions against infringers unless the company accepts the infringer’s definition of “reasonable,” the settlement will harm the industry. It will reinforce a precedent that will likely reduce the incentives for companies and individuals to innovate, to participate in SSOs, and to negotiate in good faith.

Contrary to most assumptions about the patent system, it needs stronger, not weaker, property rules. With a no-injunction rule (whether explicit or de facto, as the Google settlement’s definition of “willing licensee” unfolds), a potential licensee has little incentive to negotiate with a patent holder and can instead refuse to license, infringe, try its hand in court, avoid royalties entirely until litigation is finished (and sometimes even longer), and, in the end, never be forced to pay a higher royalty than it would have paid had it negotiated before the true value of the patents was known.

Flooding the courts and discouraging innovation and peaceful negotiations hardly seem like benefits to the patent system or the market. Unfortunately, the FTC’s approach to SEP licensing exemplified by the Google settlement may do just that.

I have been a critic of the Federal Trade Commission’s investigation into Google since it was a gleam in its competitors’ eyes—skeptical that there was any basis for a case, and concerned about the effect on consumers, innovation and investment if a case were brought.

While it took the Commission more than a year and a half to finally come to the same conclusion, ultimately the FTC had no choice but to close the case that was a “square peg, round hole” problem from the start.

Now that the FTC’s investigation has concluded, an examination of the nature of the markets in which Google operates illustrates why this crusade was ill-conceived from the start. In short, the “realities on the ground” strongly challenged the logic and relevance of many of the claims put forth by Google’s critics. Nevertheless, the politics are such that their nonsensical claims continue, in different forums, with competitors continuing to hope that they can wrangle a regulatory solution to their competitive problem.

The case against Google rested on certain assumptions about the functioning of the markets in which Google operates. Because these are tech markets, constantly evolving and complex, most assumptions about the scope of these markets and competitive effects within them are imperfect at best. But there are some attributes of Google’s markets—conveniently left out of the critics’ complaints—that, properly understood, painted a picture for the FTC that undermined the basic, essential elements of an antitrust case against the company.

That case was seriously undermined by the nature and extent of competition in the markets the FTC was investigating. Most importantly, casual references to a “search market” and “search advertising market” aside, Google actually competes in the market for targeted eyeballs: a market for delivering targeted ads to interested users. Search offers a valuable opportunity for targeting an advertiser’s message, but it is by no means alone: there are myriad (and growing) other mechanisms to access consumers online.

Consumers use Google because they are looking for information — but there are lots of ways to do that. There are plenty of apps that circumvent Google, and consumers are increasingly going to specialized sites to find what they are looking for. The search market, if a distinct one ever existed, has evolved into an online information market that includes far more players than those who just operate traditional search engines.

We live in a world where what prevails today won’t prevail tomorrow. The tech industry is constantly changing, and it is the height of folly (and a serious threat to innovation and consumer welfare) to constrain the activities of firms competing in such an environment by pigeonholing the market. In other words, in a proper market, Google looks significantly less dominant. More important, perhaps, as search itself evolves, and as Facebook, Amazon and others get into the search advertising game, Google’s strong position even in the overly narrow “search market” is far from unassailable.

This is progress — creative destruction — not regress, and such changes should not be penalized.

Another common refrain from Google’s critics was that Google’s access to immense amounts of data used to increase the quality of its targeting presented a barrier to competition that no one else could match, thus protecting Google’s unassailable monopoly. But scale comes in lots of ways.

Even if scale doesn’t come cheaply, the fact that challenging firms might have to spend as much as (or, in this case, almost certainly less than) Google did in order to replicate its success is not a “barrier to entry” that requires an antitrust remedy. Data about consumer interests is widely available (despite efforts to reduce the availability of such data in the name of protecting “privacy”—which might actually create barriers to entry). It’s never been the case that a firm has to generate its own inputs for every product it produces — and there’s no reason to suggest search or advertising is any different.

Additionally, to sustain a claim of monopolization, a plaintiff generally must show that the alleged monopolist enjoys protection from competition through barriers to entry. In Google’s case, the barriers alleged were illusory. Bing and other recent entrants in the general search business have enjoyed success precisely because they were able to obtain the inputs (in this case, data) necessary to develop competitive offerings.

Meanwhile, unanticipated competitors like Facebook, Amazon, Twitter and others continue to knock at Google’s metaphorical door, all of them entering into competition with Google using data gathered in creative ways, and all of them potentially besting Google in the process. Consider, for example, Amazon’s recent move into the targeted advertising market, competing with Google to place ads on websites across the Internet, but with the considerable advantage of being able to target ads based on searches or purchases a user has made on Amazon—the world’s largest product search engine.

Now that the investigation has concluded, we come away with two major findings. First, the online information market is dynamic, and it is a fool’s errand to identify the power or significance of any player in these markets based on data available today — data that is already out of date by the time it is analyzed.

Second, each development in the market – whether offered by Google or its competitors and whether facilitated by technological change or shifting consumer preferences – has presented different, novel and shifting opportunities and challenges for companies interested in attracting eyeballs, selling ad space and data, earning revenue and obtaining market share. To say that Google dominates “search” or “online advertising” missed the mark precisely because there was simply nothing especially antitrust-relevant about either search or online advertising. Because of their own unique products, innovations, data sources, business models, entrepreneurship and organizations, all of these companies have challenged and will continue to challenge the dominant company — and the dominant paradigm — in a shifting and evolving range of markets.

It would be churlish not to give credit where credit is due—and credit is due the FTC. I continue to think the investigation should have ended before it began, of course, but the FTC is to be commended for reaching this result amidst an overwhelming barrage of pressure to “do something.”

But there are others in this sadly politicized mess for whom neither the facts nor the FTC’s extensive investigation process (nor the finer points of antitrust law) are enough. Like my four-year-old daughter, they just “want what they want,” and they will stamp their feet until they get it.

While competitors will be competitors—using the regulatory system to accomplish what they can’t in the market—they do a great disservice to the very customers they purport to be protecting in doing so. As Milton Friedman famously said, in decrying “The Business Community’s Suicidal Impulse”:

As a believer in the pursuit of self-interest in a competitive capitalist system, I can’t blame a businessman who goes to Washington and tries to get special privileges for his company.… Blame the rest of us for being so foolish as to let him get away with it.

I do blame businessmen when, in their political activities, individual businessmen and their organizations take positions that are not in their own self-interest and that have the effect of undermining support for free private enterprise. In that respect, businessmen tend to be schizophrenic. When it comes to their own businesses, they look a long time ahead, thinking of what the business is going to be like 5 to 10 years from now. But when they get into the public sphere and start going into the problems of politics, they tend to be very shortsighted.

Ironically, Friedman was writing about the antitrust persecution of Microsoft by its rivals back in 1999:

Is it really in the self-interest of Silicon Valley to set the government on Microsoft? Your industry, the computer industry, moves so much more rapidly than the legal process, that by the time this suit is over, who knows what the shape of the industry will be.… [Y]ou will rue the day when you called in the government.

Among Microsoft’s chief tormentors was Gary Reback. He’s spent the last few years beating the drum against Google—but singing from the same song book. Reback recently told the Washington Post, “if a settlement were to be proposed that didn’t include search, the institutional integrity of the FTC would be at issue.” Actually, no it wouldn’t. As a matter of fact, the opposite is true. It’s hard to imagine an agency under more pressure, from more quarters (including the Hill), to bring a case around search. Bringing one would at least raise the possibility that the agency was acting because of pressure and not the merits of the case. But not doing so in the face of such pressure? That can almost only be a function of institutional integrity.

As another of Google’s most-outspoken critics, Tom Barnett, noted:

[The FTC has] really put [itself] in the position where they are better positioned now than any other agency in the U.S. is likely to be in the immediate future to address these issues. I would encourage them to take the issues as seriously as they can. To the extent that they concur that Google has violated the law, there are very good reasons to try to address the concerns as quickly as possible.

As Barnett acknowledges, there is no question that the FTC investigated these issues more fully than anyone. The agency’s institutional culture and its committed personnel, together with political pressure, media publicity and endless competitor entreaties, virtually ensured that the FTC took the issues “as seriously as they [could]” – in fact, as seriously as anyone else in the world. There is simply no reasonable way to criticize the FTC for being insufficiently thorough in its investigation and conclusions.

Nor is there a basis for claiming that the FTC is “standing in the way” of the courts’ ability to review the issue, as Scott Cleland contends in an op-ed in the Hill. Frankly, this is absurd. Google’s competitors have spent millions pressuring the FTC to bring a case. But the FTC isn’t remotely the only path to the courts. As Commissioner Rosch admonished,

They can darn well bring [a case] as a private antitrust action if they think their ox is being gored instead of free-riding on the government to achieve the same result.

Competitors have already beaten a path to the DOJ’s door, and investigations are still pending in the EU, Argentina, several US states, and elsewhere. That the agency that has leveled the fullest and best-informed investigation has concluded that there is no “there” there should give these authorities pause, but, sadly for consumers who would benefit from an end to competitors’ rent seeking, nothing the FTC has done actually prevents courts or other regulators from having a crack at Google.

The case against Google has received more attention from the FTC than the merits of the case ever warranted. It is time for Google’s critics and competitors to move on.

[Crossposted at Forbes.com]

Pretty interesting interview with Google’s Senior VP Amit Singhal on where search technology is headed.  In the article, Singhal describes the shift from a content-based, keyword index  to incorporating links and other signals to improve query results.  The most interesting part of the interview is about what is next.

Google now wants to transform words that appear on a page into entities that mean something and have related attributes. It’s what the human brain does naturally, but for computers, it’s known as Artificial Intelligence.

It’s a challenging task, but the work has already begun. Google is “building a huge, in-house understanding of what an entity is and a repository of what entities are in the world and what should you know about those entities,” said Singhal.

In 2010, Google purchased Metaweb, the company behind Freebase, a community-built knowledge base packed with some 12 million canonical entities. Twelve million is a good start, but Google has, according to Singhal, invested dramatically to “build a huge knowledge graph of interconnected entities and their attributes.”

The transition from a word-based index to this knowledge graph is a fundamental shift that will radically increase power and complexity. Singhal explained that the word index is essentially like the index you find at the back of a book: “A knowledge base is huge compared to the word index and far more refined or advanced.”

Right now Google is, Singhal told me, building the infrastructure for the more algorithmically complex search of tomorrow, and that task, of course, does include more computers. All those computers are helping the search giant build out the knowledge graph, which now has “north of 200 million entities.” What can you do with that kind of knowledge graph (or base)?

Initially, you just take baby steps. Although evidence of this AI-like intelligence is beginning to show up in Google Search results, most people probably haven’t even noticed it.

For example:

Type “Monet” into Google Search, for instance, and, along with the standard results, you’ll find a small area at the bottom: “Artwork Searches for Claude Monet.” In it are thumbnail results of the top five or six works by the master. Singhal says this is an indication that Google search is beginning to understand that Monet is a painter and that the most important thing about an artist is his greatest works.

When I note that this does not seem wildly different from or more exceptional than the traditional results above, Singhal cautioned me that judging the knowledge graph’s power on this would be like judging an artist on work he did as a 12- or 24-month-old.
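For a concrete picture of the word-index-versus-entity-graph distinction Singhal draws, here is a minimal sketch in Python; the entities and attributes are invented stand-ins, not Google’s actual schema:

```python
# A minimal sketch contrasting the two structures Singhal describes.
# All entity names and attributes are illustrative stand-ins, not
# Google's actual schema.

# 1) A word (inverted) index: like a book's back-of-book index, it maps
#    each term to the documents that contain it.
word_index = {
    "monet": ["doc14", "doc92"],
    "painter": ["doc14", "doc77"],
}

# 2) A knowledge graph: nodes are entities with typed attributes, and
#    attribute values can be edges pointing at other entities.
entities = {
    "Claude_Monet": {
        "type": "Painter",
        "notable_works": ["Water_Lilies", "Impression_Sunrise"],
    },
    "Water_Lilies": {"type": "Artwork", "creator": "Claude_Monet"},
    "Impression_Sunrise": {"type": "Artwork", "creator": "Claude_Monet"},
}

def related(entity_id, attribute):
    """Follow one edge in the graph, e.g. from a painter to his works."""
    value = entities.get(entity_id, {}).get(attribute, [])
    return value if isinstance(value, list) else [value]

# The word index can only say which documents mention "monet"; the graph
# can answer questions *about* Monet, e.g. his most notable works.
print(word_index["monet"])                       # ['doc14', 'doc92']
print(related("Claude_Monet", "notable_works"))  # ['Water_Lilies', 'Impression_Sunrise']
```

That, in miniature, is the shift the interview describes: from retrieving pages that contain a string to answering questions about the thing the string names.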

Check out the whole article.  Counterfactuals are always difficult — but it’s difficult to imagine a basis for arguments that the evolution of search technology would have been — or will be — better for consumers with government regulation.

By Berin Szoka, Geoffrey Manne & Ryan Radia

As has become customary with just about every new product announcement by Google these days, the company’s introduction on Tuesday of its new “Search, plus Your World” (SPYW) program, which aims to incorporate a user’s Google+ content into her organic search results, has met with cries of antitrust foul play. All the usual blustering and speculation in the latest Google antitrust debate has obscured what should, however, be the two key prior questions: (1) Did Google violate the antitrust laws by not including data from Facebook, Twitter and other social networks in its new SPYW program alongside Google+ content; and (2) How might antitrust restrain Google in conditioning participation in this program in the future?

The answer to the first is a clear no. The second is more complicated—but also purely speculative at this point, especially because it’s not even clear Facebook and Twitter really want to be included or what their price and conditions for doing so would be. So in short, it’s hard to see what there is to argue about yet.

Let’s consider both questions in turn.

Should Google Have Included Other Services Prior to SPYW’s Launch?

Google says it’s happy to add non-Google content to SPYW but, as Google fellow Amit Singhal told Danny Sullivan, a leading search engine journalist:

Facebook and Twitter and other services, basically, their terms of service don’t allow us to crawl them deeply and store things. Google+ is the only [network] that provides such a persistent service,… Of course, going forward, if others were willing to change, we’d look at designing things to see how it would work.

In a follow-up story, Sullivan quotes his interview with Google executive chairman Eric Schmidt about how this would work:

“To start with, we would have a conversation with them,” Schmidt said, about settling any differences.

I replied that with the Google+ suggestions now hitting Google, there was no need to have any discussions or formal deals. Google’s regular crawling, allowed by both Twitter and Facebook, was a form of “automated conversation” giving Google material it could use.

“Anything we do with companies like that, it’s always better to have a conversation,” Schmidt said.

MG Siegler calls this “doublespeak” and seems to think Google violated the antitrust laws by not making SPYW more inclusive right out of the gate. He insists Google didn’t need permission to include public data in SPYW:

Both Twitter and Facebook have data that is available to the public. It’s data that Google crawls. It’s data that Google even has some social context for thanks to older Google Profile features, as Sullivan points out.

It’s not all the data inside the walls of Twitter and Facebook — hence the need for firehose deals. But the data Google can get is more than enough for many of the high level features of Search+ — like the “People and Places” box, for example.

It’s certainly true that if you search Google for “site:twitter.com” or “site:facebook.com,” you’ll get billions of search results from publicly-available Facebook and Twitter pages, and that Google already has some friend connection data via social accounts you might have linked to your Google profile (check out this dashboard), as Sullivan notes. But the public data isn’t available in real-time, and the private, social connection data is limited and available only for users who link their accounts. For Google to access real-time results and full social connection data would require… you guessed it… permission from Twitter (or Facebook)! As it happens, Twitter and Google had a deal for a “data firehose” so that Google could display tweets in real-time under the “personalized search” program for public social information that SPYW builds on top of. But Twitter ended the deal last May for reasons neither company has explained.

At best, therefore, Google might have included public, relatively stale social information from Twitter and Facebook in SPYW—content that is, in any case, already included in basic search results and remains available there. The real question, however, isn’t whether Google could have included this data in SPYW, but whether it needed to. If Google’s engineers and executives decided that the incorporation of this limited data would present an inconsistent user experience or otherwise diminish its uniquely new social search experience, it’s hard to fault the company for deciding to exclude it. Moreover, as an antitrust matter, both the economics and the law of anticompetitive product design are uncertain. In general, as with issues surrounding the vertical integration claims against Google, product design that hurts rivals can (it should be self-evident) be quite beneficial for consumers. Here, it’s difficult to see how the exclusion of non-Google+ social media from SPYW could raise the costs of Google’s rivals, result in anticompetitive foreclosure, retard rivals’ incentives for innovation, or otherwise result in anticompetitive effects (as required to establish an antitrust claim).

Further, it’s easy to see why Google’s lawyers would prefer express permission from competitors before using their content in this way. After all, Google was denounced last year for “scraping” a different type of social content, user reviews, most notably by Yelp’s CEO at the contentious Senate antitrust hearing in September. Perhaps one could distinguish that situation from this one, but it’s not obvious where to draw the line between content Google has a duty to include without “making excuses” about needing permission and content Google has a duty not to include without express permission. Indeed, this seems like a case of “damned if you do, damned if you don’t.” It seems only natural for Google to be gun-shy about “scraping” other services’ public content for use in its latest search innovation without at least first conducting, as Eric Schmidt puts it, a “conversation.”

And as we noted, integrating non-public content would require not just permission but active coordination about implementation. SPYW displays Google+ content only to users who are logged into their Google+ account. Similarly, to display content shared with a user’s friends (but not the world) on Facebook, or protected tweets, Google would need a feed of that private data and a way of logging the user into his or her account on those sites.

Now, if Twitter truly wants Google to feature tweets in Google’s personalized search results, why did Twitter end its agreement with Google last year? Google responded to Twitter’s criticism of its SPYW launch last night with a short Google+ statement:

We are a bit surprised by Twitter’s comments about Search plus Your World, because they chose not to renew their agreement with us last summer, and since then we have observed their rel=nofollow instructions [by removing Twitter content results from “personalized search” results].
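A quick technical aside on the rel=nofollow mechanism Google mentions: it is a standard HTML link annotation telling crawlers not to follow or credit a link. The toy sketch below illustrates the general mechanism only — it is an assumption about how any nofollow-respecting crawler behaves, not Google’s actual pipeline:

```python
# Toy illustration of honoring rel="nofollow" -- an assumption about the
# general mechanism, not Google's actual crawler. Links marked nofollow
# are excluded from the set a crawler would follow and credit.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.followable = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        rels = (attrs.get("rel") or "").split()
        if "href" in attrs and "nofollow" not in rels:
            self.followable.append(attrs["href"])

page = '<a href="/public">ok</a> <a rel="nofollow" href="/tweet">skipped</a>'
collector = LinkCollector()
collector.feed(page)
print(collector.followable)  # ['/public']
```

In other words, once Twitter marked its content this way and the firehose deal lapsed, Google’s options for surfacing tweets narrowed considerably.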

Perhaps Twitter simply got a better deal: Microsoft may have paid Twitter $30 million last year for a similar deal allowing Bing users to receive Twitter results. If Twitter really is playing hardball, Google is not guilty of discriminating against Facebook and Twitter in favor of its own social platform. Rather, it’s simply unwilling to pony up the cash that Facebook and Twitter are demanding—and there’s nothing illegal about that.

Indeed, the issue may go beyond a simple pricing dispute. If you were CEO of Twitter or Facebook, would you really think it was a net-win if your users could use Google search as an interface for your site? After all, these social networking sites are in an intense war for eyeballs: the more time users spend on Google, the more ads Google can sell, to the detriment of Facebook or Twitter. Facebook probably sees itself increasingly in direct competition with Google as a tool for finding information. Its social network has vastly more users than Google+ (800 million vs. 62 million, with an even larger lead in active users) and, in most respects, more social functionality. The one area where Facebook lags is search functionality. Would Facebook really want to let Google become the tool for searching social networks—one social search engine “to rule them all”? Or would Facebook prefer to continue developing “social search” in partnership with Bing? On Bing, it can control how its content appears—and Facebook sees Microsoft as a partner, not a rival (at least until it can build its own search functionality inside the web’s hottest property).

Adding to this dynamic, and perhaps ultimately fueling some of the fire against SPYW, is the fact that many Google+ users seem to be multi-homing, using both Facebook and Google+ (and other social networks) at the same time, and even using various aggregators and syncing tools (Start Google+, for example) to unify social media streams and share content among them. Before SPYW, this might have seemed like a boon to Facebook, staunching any potential defectors from its network onto Google+ by keeping them engaged with both, with a kind of “Facebook primacy” ensuring continued eyeball time on its site. But Facebook might see SPYW as a threat to this primacy—in effect, reversing users’ primary “home” as they effectively import their Facebook data into SPYW via their Google+ accounts (such as through Start Google+). If SPYW can effectively facilitate indirect Google searching of private Facebook content, the fears we suggest above may be realized, and more users may forgo visiting Facebook.com (and seeing its advertisers), accessing much of their Facebook content elsewhere—where Facebook cannot monetize their attention.

Amidst all the antitrust hand-wringing over SPYW and Google’s decision to “go it alone” for now, it’s worth noting that Facebook has remained silent. Even Twitter has said little more than a tweet’s worth about the issue. It’s simply not clear that Google’s rivals would even want to participate in SPYW. This could still be bad for consumers, but in that case, the source of the harm, if any, wouldn’t be Google. If this all sounds speculative, it is—and that’s precisely the point. No one really knows. So, again, what’s to argue about on Day 3 of the new social search paradigm?

The Debate to Come: Conditioning Access to SPYW

While Twitter and Facebook may well prefer that Google not index their content on SPYW—at least, not unless Google is willing to pay up—suppose the social networking firms took Google up on its offer to have a “conversation” about greater cooperation. Google hasn’t made clear on what terms it would include content from other social media platforms. So it’s at least conceivable that, when pressed to make good on its lofty-but-vague offer to include other platforms, Google might insist on unacceptable terms. In principle, there are essentially three possibilities here:

  1. Antitrust law requires nothing because there are pro-consumer benefits for Google to make SPYW exclusive and no clear harm to competition (as distinct from harm to competitors) for doing so, as our colleague Josh Wright argues.
  2. Antitrust law requires Google to grant competitors access to SPYW on commercially reasonable terms.
  3. Antitrust law requires Google to grant such access on terms dictated by its competitors, even if unreasonable to Google.

Door #3 is a legal non-starter. In Aspen Skiing v. Aspen Highlands (1985), the Supreme Court came the closest it has ever come to endorsing the “essential facilities” doctrine by which a competitor has a duty to offer its facilities to competitors. But in Verizon Communications v. Trinko (2004), the Court made clear that even Aspen Skiing is “at or near the outer boundary of § 2 liability.” Part of the basis for the decision in Aspen Skiing was the existence of a prior, profitable relationship between the “essential facility” in question and the competitor seeking access. Although the assumption is neither warranted nor sufficient (circumstances change, of course, and merely “profitable” is not the same thing as “best available use of a resource”), the Court in Aspen Skiing seems to have been swayed by the view that the access in question was otherwise profitable for the company that was denying it. Trinko limited the reach of the doctrine to the extraordinary circumstances of Aspen Skiing, and thus, as the Court affirmed in Pacific Bell v. LinkLine (2009), it seems there is no antitrust duty for a firm to offer access to a competitor on commercially unreasonable terms (as Geoff Manne discusses at greater length in his chapter on search bias in TechFreedom’s free ebook, The Next Digital Decade).

So Google either has no duty to deal at all, or a duty to deal only on reasonable terms. But what would a competitor have to show to establish such a duty? And how would “reasonableness” be defined?

First, this issue parallels claims made more generally about Google’s supposed “search bias.” As Josh Wright has said about those claims, “[p]roperly articulated vertical foreclosure theories proffer both that bias is (1) sufficient in magnitude to exclude Google’s rivals from achieving efficient scale, and (2) actually directed at Google’s rivals.” Supposing (for the moment) that the second point could be established, it’s hard to see how Facebook or Twitter could really show that being excluded from SPYW—while still having their available content show up as it always has in Google’s “organic” search results—would actually “render their efforts to compete for distribution uneconomical,” which, as Josh explains, antitrust law would require them to show. Google+ is a tiny service compared to Google or Facebook. And even Google itself, for all the awe and loathing it inspires, lags in the critical metric of user engagement, keeping the average user on site for only a quarter as much time as Facebook.

Moreover, by these same measures, it’s clear that Facebook and Twitter don’t need access to Google search results at all, much less its relatively trivial SPYW results, in order to find, and be found by, users; it’s difficult to know from what even vaguely relevant market they could possibly be foreclosed by their absence from SPYW results. Does SPYW potentially help Google+, to Facebook’s detriment? Yes. Just as Facebook’s deal with Microsoft hurts Google. But this is called competition. The world would be a desolate place if antitrust laws effectively prohibited firms from making decisions that helped themselves at their competitors’ expense.

After all, no one seems to be suggesting that Microsoft should be forced to include Google+ results in Bing—and rightly so. Microsoft’s exclusive partnership with Facebook is an important example of how a market leader in one area (Facebook in social) can help a market laggard in another (Microsoft in search) compete more effectively with a common rival (Google). In other words, banning exclusive deals can actually make it more difficult to unseat an incumbent (like Google), especially where the technologies involved are constantly evolving, as here.

Antitrust meddling in such arrangements, particularly in high-risk, dynamic markets where large up-front investments are frequently required (and lost), risks deterring innovation and reducing the very dynamism from which consumers reap such incredible rewards. “Reasonable” is a dangerously slippery concept in such markets, and a recipe for costly errors by the courts asked to define the concept. We suspect that disputes arising out of these sorts of deals will largely boil down to skirmishes over pricing, financing and marketing—the essential dilemma of new media services whose business models are as much the object of innovation as their technologies. Turning these, by little more than innuendo, into nefarious anticompetitive schemes is extremely—and unnecessarily—risky.

The web is all abuzz about possible antitrust implications concerning Google’s new personalized search (see, e.g., here and here), integrating search with Google Plus.  Here is Google’s description of “Search, plus Your World”:

We’re transforming Google into a search engine that understands not only content, but also people and relationships. We began this transformation with Social Search, and today we’re taking another big step in this direction by introducing three new features:

  1. Personal Results, which enable you to find information just for you, such as Google+ photos and posts—both your own and those shared specifically with you, that only you will be able to see on your results page;
  2. Profiles in Search, both in autocomplete and results, which enable you to immediately find people you’re close to or might be interested in following; and,
  3. People and Pages, which help you find people profiles and Google+ pages related to a specific topic or area of interest, and enable you to follow them with just a few clicks. Because behind most every query is a community.

The linked articles raising antitrust concerns largely talk about things like leveraging monopoly power in search into social networks and so forth.  The usual arguments.  For example:

By making Google+ such a large part of search — as well as Picasa — Google certainly is toeing the line of a company using monopoly to extend its reach into adjacent markets. Consider Microsoft’s moves with Internet Explorer, which was bundled with Windows starting in 1998. Microsoft used its monopoly on PC operating systems to nudge into the browser market, where Netscape had overwhelming market share lead. How is what Google is doing different?

Let’s start with the obvious differences: (1) the DOJ had to prove anticompetitive effects in Microsoft; (2) Microsoft was unable to muster up an efficiency justification.  Discussions of antitrust implications of any business practice that don’t focus on competitive effects and efficiency justifications are non-starters.

So let’s start with the most obvious thing that should come to mind when watching the integration of general search with Google Plus.  Integration!  Personalizing search results makes (at least some!) users better off.  Users that prefer non-personalized results can have them too.  But the trend toward providing deeper, better, and different forms of answers to questions posed in search queries is not a Google-specific thing.  It’s an industry thing driven by consumer preferences on the web.  When Google or Facebook or Twitter is able to integrate functions of search and social networking to create something different and demanded by consumers, that consumers enjoy and derive surplus from, this is a competitive benefit.  Competitive benefits count in antitrust because they make consumers better off.  This is very basic. But worth repeating.

The antitrust question is whether, despite these obvious efficiencies, there is plausible evidence of anticompetitive harm — that is, harm to competition rather than individual rivals like Bing, Twitter, or Facebook.  My personal view — which I’ve written about at great length here, here, and here — is that there is no such evidence.  But for now, the critical point is that antitrust analysis counts the integration of these functions in a manner satisfying consumer preferences — and it seems obvious that this integration produces results that consumers want — as an important consumer benefit.  This is a feature and not a bug of antitrust law.   Antitrust law that ignores or is biased against the efficiencies of vertical integration, or the introduction of new products integrating previously separate functions (like personalized search, or improved search results with maps), is at significant tension with economic theory and is simply not compatible with a consumer-welfare based competition regime.

By Geoffrey Manne and Berin Szoka

[Cross posted at TechFreedom.org]

Back in September, the Senate Judiciary Committee’s Antitrust Subcommittee held a hearing on “The Power of Google: Serving Consumers or Threatening Competition?” Given the harsh questioning from the Subcommittee’s Chairman Herb Kohl (D-WI) and Ranking Member Mike Lee (R-UT), no one should have been surprised by the letter they sent yesterday to the Federal Trade Commission asking for a “thorough investigation” of the company. At least this time the danger is somewhat limited: by calling for the FTC to investigate Google, the senators are thus urging the agency to do . . . exactly what it’s already doing.

So one must wonder about the real aim of the letter. Unfortunately, the goal does not appear to be to offer an objective appraisal of the complex issues intended to be addressed at the hearing. That’s disappointing (though hardly surprising) and underscores what we noted at the time of the hearing: There’s something backward about seeing a company hauled before a hostile congressional panel and asked to defend itself, rather than its self-appointed prosecutors being asked to defend their case.

Senators Kohl and Lee insist that they take no position on the legality of Google’s actions, but their lopsided characterization of the issues in the letter—and the fact that the FTC is already doing what they purport to desire as the sole outcome of the letter!—leaves little room for doubt about their aim: to put political pressure on the FTC not merely to investigate, but to reach a particular conclusion and bring a case in court (or simply to ratchet up public pressure from its bully pulpit).

The five-page letter concludes with, literally, three sentences presenting Google’s case, one of which reads, in its entirety, “Google strongly denies the arguments of its critics.” The derision is palpable—as if only a craven monopolist would deign to actually deny the iron-clad arguments of Google’s competitors so painstakingly reproduced by Senators Kohl and Lee in the preceding four pages. This is neither rigorous analysis nor objective reporting on the contents of the Senate’s hearing.

While we worry about particularly successful companies being singled out for punishment, we hold no brief for Google in this debate. Instead, in all our writings, we’ve tried to present a consistently skeptical view about a worrisome trend in antitrust enforcement in high tech markets: error-prone and costly intervention in markets that are ill-understood and fast-moving, to the great detriment of consumers and progress generally. Although our institutions have received financial support from Google among a range of other companies, organizations and individuals, our work is focused on this broad mission; we have no obligation or intention to support any company simply because it finds value in supporting our mission.

We’ve defended (and one of us has even worked for) Microsoft in the past, and just yesterday, we lamented the fact that the Obama Justice Department and the FCC have effectively blocked Google’s arch-rival, AT&T, from buying T-Mobile. Rather than defend any particular company, our goal, to paraphrase Hayek, is to “demonstrate to [regulators] how little they really know about what they imagine they can design”—lest they undermine how competition actually works in the name of defending outdated models of how they think it should work. Unfortunately, the letter from Senators Kohl and Lee does nothing to assuage our concern and suggests instead that crass politics, rather than sensible economics, could determine the outcome of cases like this one—if not in a court of law, then in the court of public opinion and extra-legal intimidation.

To begin with, the letter asserts that “Google faces competition from only one general search engine, Bing,” suggesting that only Bing (and it, only ineffectively) could keep Google in check. In essence, the Senators are prejudging an essential question on which any case against Google would turn: market definition. But why would the market not include other tools for information retrieval? Is it not at least worth mentioning that more and more Internet users are finding information and spending time on social networks like Facebook and Twitter, while more and more advertisers are spending their money on these Google competitors? Isn’t it clear that search itself is evolving from “ten blue links” into something more social, multi-faceted and interactive?

In a remarkable leap, the senators then identify the specific alleged abuse that Google’s alleged market power leads to: search bias. That’s remarkable because, other than the breathless claims of disgruntled competitors (given plenty of air time at the September hearing), there is actually no evidence that search bias is, in fact, harmful to consumers—which is what antitrust is concerned with. (Read both sides of this debate in TechFreedom’s free ebook, The Next Digital Decade: Essays on the Future of the Internet.)

As our colleague, Josh Wright, has thoroughly demonstrated, this “own-content” bias is actually an infrequent phenomenon and is simply not consistent with an actionable claim of anticompetitive foreclosure. Moreover, among search engines, Google references its own content far less frequently than does Bing, which favors Microsoft content in the first search result, when no other search engine does so, more than twice as often as Google favors its own content.

Of course, none of this is even hinted at in the Senators’ letter, which seems intended to condemn Google for “preferencing” its own content (under the pretense of withholding judgment). It’s a little like condemning Target for deigning to use its trucks to supply inventory only to its own stores instead of Wal-Mart’s, or, say, condemning a congressman for targeting earmarks for his own state or district. Earmark bias!

In my series of three posts (here, here and here) drawn from my empirical study on search bias I have examined whether search bias exists, and, if so, how frequently it occurs.  This, the final post in the series, assesses the results of the study (as well as the Edelman & Lockwood (E&L) study to which it responds) to determine whether the own-content bias I’ve identified is in fact consistent with anticompetitive foreclosure or is otherwise sufficient to warrant antitrust intervention.

As I’ve repeatedly emphasized, while I refer to differences among search engines’ rankings of their own or affiliated content as “bias,” without more these differences do not imply anticompetitive conduct.  It is wholly unsurprising and indeed consistent with vigorous competition among engines that differentiation emerges with respect to algorithms.  However, it is especially important to note that the theories of anticompetitive foreclosure raised by Google’s rivals involve very specific claims about these differences.  Properly articulated vertical foreclosure theories proffer both that bias is (1) sufficient in magnitude to exclude Google’s rivals from achieving efficient scale, and (2) actually directed at Google’s rivals.  Unfortunately for search engine critics, their theories fail on both counts.  The observed own-content bias appears neither to be extensive enough to prevent rivals from gaining access to distribution nor does it appear to target Google’s rivals; rather, it seems to be a natural result of intense competition between search engines and of significant benefit to consumers.

Vertical foreclosure arguments are premised upon the notion that rivals are excluded with sufficient frequency and intensity as to render their efforts to compete for distribution uneconomical.  Yet the empirical results simply do not indicate that market conditions are in fact conducive to the types of harmful exclusion contemplated by application of the antitrust laws.  Rather, the evidence indicates that (1) the absolute level of search engine “bias” is extremely low, and (2) “bias” is not a function of market power, but an effective strategy that has arisen as a result of serious competition and innovation between and by search engines.  The first finding undermines competitive foreclosure arguments on their own terms, that is, even if there were no pro-consumer justifications for the integration of Google content with Google search results.  The second finding, even more importantly, reveals that the evolution of consumer preferences for more sophisticated and useful search results has driven rival search engines to satisfy that demand.  Both Bing and Google have shifted toward these results, rendering the complained-of conduct equivalent to satisfying the standard of care in the industry–not restraining competition.

A significant lack of search bias emerges in the representative sample of queries.  This result is entirely unsurprising, given that bias is relatively infrequent even in E&L’s sample of queries specifically designed to identify maximum bias.  In the representative sample, the total percentage of queries for which Google references its own content when rivals do not is even lower—only about 8%—meaning that Google favors its own content far less often than critics have suggested.  This fact is crucial and highly problematic for search engine critics, as their burden in articulating a cognizable antitrust harm includes not only demonstrating that bias exists, but further that it is actually competitively harmful.  As I’ve discussed, bias alone is simply not sufficient to demonstrate any prima facie anticompetitive harm as it is far more often procompetitive or competitively neutral than actively harmful.  Moreover, given that bias occurs in less than 10% of queries run on Google, anticompetitive exclusion arguments appear unsustainable.

Indeed, theories of vertical foreclosure find virtually zero empirical support in the data.  Moreover, it appears that, rather than being a function of monopolistic abuse of power, search bias has emerged as an efficient competitive strategy, allowing search engines to differentiate their products in ways that benefit consumers.  I find that when search engines do reference their own content on their search results pages, it is generally unlikely that another engine will reference this same content.  However, the fact that both this percentage and the absolute level of own-content inclusion are similar across engines indicates that this practice is not a function of market power (or its abuse), but is rather an industry standard.  In fact, despite conducting a much smaller percentage of total consumer searches, Bing is consistently more biased than Google, illustrating that the benefits search engines enjoy from integrating their own content into results are not necessarily a function of search engine size or volume of queries.  These results are consistent with a business practice that is efficient, and they are in significant tension with arguments that such integration is designed to facilitate competitive foreclosure.

My last two posts on search bias (here and here) have analyzed and critiqued Edelman & Lockwood’s small study on search bias.  This post extends this same methodology and analysis to a random sample of 1,000 Google queries (released by AOL in 2006), to develop a more comprehensive understanding of own-content bias.  As I’ve stressed, these analyses provide useful—but importantly limited—glimpses into the nature of the search engine environment.  While these studies are descriptively helpful, actual harm to consumer welfare must always be demonstrated before cognizable antitrust injuries arise.  And naked identifications of own-content bias simply do not inherently translate to negative effects on consumers (see, e.g., here and here for more comprehensive discussion).

Now that that’s settled, let’s jump into the results of the 1,000 random search query study.

How Do Search Engines Rank Their Own Content?

Consistent with our earlier analysis, a starting point for thinking about measuring differentiation among search engines with respect to placing their own content is to compare how a search engine ranks its own content relative to how other engines place that same content (e.g., to compare how Google ranks “Google Maps” relative to how Bing or Blekko rank it).  Restricting attention exclusively to the first or “top” position, I find that Google simply does not refer to its own content in over 90% of queries.  Similarly, Bing does not reference Microsoft content in 85.4% of queries.  Google refers to its own content in the first position when other search engines do not in only 6.7% of queries, while Bing does so over twice as often, referencing Microsoft content that no other engine references in the first position in 14.3% of queries.  The following two charts illustrate the percentage of Google or Bing first position results, respectively, dedicated to own content across search engines.

The most striking aspect of these results is the small fraction of queries for which placement of own-content is relevant.  The results are similar when I expand consideration to the entire first page of results; interestingly, however, while the levels of own-content bias are similar considering the entire first page of results, Bing is far more likely than Google to reference its own content in its very first results position.
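For readers who want to see the mechanics, a minimal sketch of the first-position tally described above might look like the following. The record layout and example data are hypothetical placeholders, not the study’s actual dataset; the point is only to show what counts as an “exclusive” own-content reference.

```python
# Hypothetical sketch of the first-position tally described above.
# Assumes each record gives, for one query, whether each engine's
# top result points to Google-affiliated content.
# Field names and data are illustrative, not the study's dataset.

records = [
    # query, google_top_is_google, bing_top_is_google, blekko_top_is_google
    ("maps of boston",  True,  False, False),
    ("email login",     True,  True,  False),
    ("local weather",   False, False, False),
]

total = len(records)

# Queries where Google's first result is Google content but no rival's is.
exclusive = sum(1 for _, g, b, k in records if g and not (b or k))

# Queries where Google's first result is Google content at all.
any_own = sum(1 for _, g, _b, _k in records if g)

print(f"Google top result is own content:         {any_own / total:.1%}")
print(f"...and no rival ranks Google content first: {exclusive / total:.1%}")
```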

Examining Search Engine “Bias” on Google

Two distinct differences between the results of this larger study and my replication of Edelman & Lockwood emerge: (1) Google and Bing refer to their own content in a significantly smaller percentage of cases here than in the non-random sample; and (2) in general, when Google or Bing does rank its own content highly, rival engines are unlikely to similarly rank that same content.

The following table reports the percentages of queries for which Google’s ranking of its own content and its rivals’ rankings of that same content differ significantly. When Google refers to its own content within its Top 5 results, at least one other engine similarly ranks this content for only about 5% of queries.

The following table presents the likelihood that Google content will appear in a Google search, relative to searches conducted on rival engines (reported in odds ratios).

The first and third columns report results indicating that Google affiliated content is more likely to appear in a search executed on Google than on rival engines.  Google is approximately 16 times more likely to refer to its own content on its first page than is any other engine.  Bing and Blekko are both significantly less likely to refer to Google content in their first result or on their first page than Google is to refer to Google content within these same parameters.  In each iteration, Bing is more likely to refer to Google content than is Blekko, and in the case of the first result, Bing is much more likely to do so.  Again, to be clear, the fact that Bing is more likely to rank its own content is not suggestive that the practice is problematic.  Quite the contrary: when firms both with and without market power in search (to the extent that is a relevant antitrust market) engage in similar conduct, the correct inference is that there must be efficiency explanations for the practice.  The standard response, of course, is that the competitive implications of a practice are different when a firm with market power does it.  That’s not exactly right.  It is true that firms with market power can engage in conduct that gives rise to potential antitrust problems when the same conduct from a firm without market power would not; however, when firms without market power engage in the same business practice, antitrust analysts must seriously consider the efficiency implications of that practice.  In other words, there is nothing in the mantra that things are “different” when larger firms do them that undercuts potential efficiency explanations.
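For the curious, an odds ratio of this kind can be recovered from a simple 2×2 table of counts. The sketch below uses invented first-page counts purely to illustrate how a ratio of roughly 16 arises; the study’s actual estimates come from regressions on the full query sample, not from numbers like these.

```python
# Hypothetical sketch of an odds-ratio calculation like those reported
# above. All counts are invented for illustration only.

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
         a = Google content appears, search run on Google
         b = Google content absent,  search run on Google
         c = Google content appears, search run on a rival
         d = Google content absent,  search run on a rival
    """
    return (a / b) / (c / d)

# Invented first-page counts for 1,000 queries per engine.
on_google_present, on_google_absent = 90, 910
on_rival_present, on_rival_absent = 6, 994

ratio = odds_ratio(on_google_present, on_google_absent,
                   on_rival_present, on_rival_absent)
print(f"Odds ratio: {ratio:.1f}")  # ~16.4 with these invented counts
```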

Examining Search Engine “Bias” on Bing

For queries within the larger sample, Bing refers to Microsoft content within its Top 1 and Top 3 results when no other engine similarly references this content for a slightly smaller percentage of queries than in my Edelman & Lockwood replication.  Yet Bing continues to exhibit a strong tendency to rank Microsoft content more prominently than rival engines.  For example, when Bing refers to Microsoft content within its Top 5 results, other engines agree with this ranking for less than 2% of queries; and when Bing refers to Microsoft content within its Top 3 results, no other engine references that same content for 99.2% of those queries:

Regression analysis further illustrates Bing’s propensity to reference Microsoft content that rivals do not.  The following table reports the likelihood that Microsoft content is referred to in a Bing search as compared to searches on rival engines (again reported in odds ratios).

Bing refers to Microsoft content in its first results position about 56 times more often than rival engines refer to Microsoft content in this same position.  Across the entire first page, Microsoft content appears on a Bing search about 25 times more often than it does on any other engine.  Both Google and Blekko are accordingly significantly less likely to reference Microsoft content.  Notice further that, contrary to the findings in the smaller study, Google is slightly less likely to return Microsoft content than is Blekko, both in its first results position and across its entire first page.

A Closer Look at Google v. Bing

Consistent with the smaller sample, I find again that Bing is more biased than Google using these metrics.  In other words, Bing ranks its own content significantly more highly than its rivals do more frequently than Google does, although the discrepancy between the two engines is smaller here than in the study of Edelman & Lockwood’s queries.  As noted above, Bing is over twice as likely as Google to refer to its own content in the first results position.

Figures 7 and 8 present the same data reported above, but with Blekko removed, to allow for a direct visual comparison of own-content bias between Google and Bing.

Consistent with my earlier results, Bing ranks Microsoft content above where Google ranks that same (Microsoft) content more frequently than Google ranks its own content above where Bing ranks that same (Google) content.

This result is particularly interesting given the strength of the accusations condemning Google for behaving in precisely this way.  That Bing references Microsoft content just as often as—and frequently even more often than!—Google references its own content strongly suggests that this behavior is a function of procompetitive product differentiation, and not abuse of market power.  But I’ll save an in-depth analysis of this issue for my next post, where I’ll also discuss whether any of the results reported in this series of posts support anticompetitive foreclosure theories or otherwise suggest antitrust intervention is warranted.

In my last post, I discussed Edelman & Lockwood’s (E&L’s) attempt to catch search engines in the act of biasing their results—as well as their failure to actually do so.  In this post, I present my own results from replicating their study.  Unlike E&L, I find that Bing is consistently more biased than Google, for reasons discussed further below, although neither engine references its own content as frequently as E&L suggest.

I ran searches for E&L’s original 32 non-random queries using three different search engines—Google, Bing, and Blekko—between June 23 and July 5 of this year.  This replication is useful, as search technology has changed dramatically since E&L recorded their results in August 2010.  Bing now powers Yahoo, and Blekko has had more time to mature and enhance its results.  Blekko serves as a helpful “control” engine in my study, as it is totally independent of Google and Microsoft, and so has no incentive to refer to Google or Microsoft content unless it is actually relevant to users.  In addition, because Blekko’s model is significantly different from Google’s and Microsoft’s, if results on all three engines agree that specific content is highly relevant to the user query, it lends significant credibility to the notion that the content places well on the merits rather than being attributable to bias or other factors.

How Do Search Engines Rank Their Own Content?

Focusing solely upon the first position, Google refers to its own products or services when no other search engine does in 21.9% of queries; in another 21.9% of queries, both Google and at least one other search engine rival (i.e. Bing or Blekko) refer to the same Google content with their first links.

But restricting focus upon the first position is too narrow.  Assuming that all instances in which Google or Bing rank their own content first and rivals do not amount to bias would be a mistake; such a restrictive definition would include cases in which all three search engines rank the same content prominently—agreeing that it is highly relevant—although not all in the first position.

The entire first page of results provides a more informative comparison.  I find that Google and at least one other engine return Google content on the first page of results in 7% of the queries.  Google refers to its own content on the first page of results without agreement from either rival search engine in only 7.9% of the queries.  Meanwhile, Bing and at least one other engine refer to Microsoft content in 3.2% of the queries.  Bing references Microsoft content without agreement from either Google or Blekko in 13.2% of the queries:

This evidence indicates that Google’s ranking of its own content differs significantly from its rivals in only 7.9% of queries, and that when Google ranks its own content prominently it is generally perceived as relevant.  Further, these results suggest that Bing’s organic search results are significantly more biased in favor of Microsoft content than Google’s search results are in favor of Google’s content.
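A minimal sketch of this first-page comparison, assuming each engine’s first-page URLs have been collected per query, might look like the following. The queries, URLs, and the own-content test are hypothetical placeholders; a real classification of “Google-affiliated content” would of course be more careful.

```python
# Hypothetical sketch of the first-page comparison described above:
# for each query, classify Google's own-content references by whether
# at least one rival's first page contains the same content.

first_pages = {
    "directions downtown": {
        "google": {"maps.google.com", "example.com/a"},
        "bing":   {"bing.com/maps", "maps.google.com"},
        "blekko": {"example.com/a"},
    },
    # ... one entry per query in the sample
}

def is_google_content(url: str) -> bool:
    # Simplistic placeholder test for Google-affiliated content.
    return "google.com" in url

agree = disagree = 0
for pages in first_pages.values():
    own = {u for u in pages["google"] if is_google_content(u)}
    if not own:
        continue  # Google returned no own content for this query
    rivals = pages["bing"] | pages["blekko"]
    if own & rivals:
        agree += 1     # at least one rival also lists the content
    else:
        disagree += 1  # Google alone references its own content

print(f"Own content with rival agreement:    {agree}")
print(f"Own content without rival agreement: {disagree}")
```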

Examining Search Engine “Bias” on Google

The following table presents the percentages of queries for which Google’s ranking of its own content differs significantly from its rivals’ ranking of that same content.

Note that percentages below 50 in this table indicate that rival search engines generally see the referenced Google content as relevant and independently believe that it should be ranked similarly.

So when Google ranks its own content highly, at least one rival engine typically agrees with this ranking; for example, when Google places its own content in its Top 3 results, at least one rival agrees with this ranking in over 70% of queries.  Bing especially agrees with Google’s rankings of Google content within its Top 3 and 5 results, failing to include Google content that Google ranks similarly in only a little more than a third of queries.

Examining Search Engine “Bias” on Bing

Bing refers to Microsoft content in its search results far more frequently than its rivals reference the same Microsoft content.  For example, Bing’s top result references Microsoft content for 5 queries, while neither Google nor Blekko ever rank Microsoft content in the first position:

This table illustrates the significant discrepancies between Bing’s treatment of its own Microsoft content relative to Google and Blekko.  Neither rival engine refers to Microsoft content Bing ranks within its Top 3 results; Google and Blekko do not include any Microsoft content Bing refers to on the first page of results in nearly 80% of queries.

Moreover, Bing frequently ranks Microsoft content highly even when rival engines do not refer to the same content at all in the first page of results.  For example, of the 5 queries for which Bing ranks Microsoft content in its top result, Google refers to only one of these 5 within its first page of results, while Blekko refers to none.  Even when comparing results across each engine’s full page of results, Google and Blekko only agree with Bing’s referral of Microsoft content in 20.4% of queries.

Although there are not enough Bing data to test results in the first position in E&L’s sample, Microsoft content appears as results on the first page of a Bing search about 7 times more often than Microsoft content appears on the first page of rival engines.  Also, Google is much more likely to refer to Microsoft content than Blekko, though both refer to significantly less Microsoft content than Bing.

A Closer Look at Google v. Bing

On E&L’s own terms, Bing results are more biased than Google results; rivals are more likely to agree with Google’s algorithmic assessment (than with Bing’s) that its own content is relevant to user queries.  Bing refers to Microsoft content that other engines do not rank at all more often than Google refers to its own content without any agreement from rivals.  Figures 1 and 2 display the same data presented above in order to facilitate direct comparisons between Google and Bing.

As Figures 1 and 2 illustrate, Bing search results for these 32 queries are more frequently “biased” in favor of its own content than are Google’s.  The bias is greatest for the Top 1 and Top 3 search results.

My study finds that Bing exhibits far more “bias” than E&L identify in their earlier analysis.  For example, in E&L’s study, Bing does not refer to Microsoft content at all in its Top 1 or Top 3 results; moreover, Bing refers to Microsoft content within its entire first page 11 times, while Google and Yahoo refer to Microsoft content 8 and 9 times, respectively.  Most likely, the significant increase in Bing’s “bias” differential is largely a function of Bing’s introduction of localized and personalized search results, and it represents serious competitive efforts on Bing’s part.

Again, it’s important to stress E&L’s limited and non-random sample, and to emphasize the danger of making strong inferences about the general nature or magnitude of search bias based upon these data alone.  However, the data indicate that Google’s own-content bias is relatively small even in a sample collected precisely to focus upon the queries most likely to generate it.  In fact—as I’ll discuss in my next post—own-content bias occurs even less often in a more representative sample of queries, strongly suggesting that such bias does not raise the competitive concerns attributed to it.