
I have been a critic of the Federal Trade Commission’s investigation into Google since it was a gleam in its competitors’ eyes—skeptical that there was any basis for a case, and concerned about the effect on consumers, innovation and investment if a case were brought.

While it took the Commission more than a year and a half to come to the same conclusion, ultimately the FTC had no choice but to close a case that was a “square peg, round hole” problem from the start.

Now that the FTC’s investigation has concluded, an examination of the nature of the markets in which Google operates illustrates why this crusade was ill-conceived from the start. In short, the “realities on the ground” strongly challenged the logic and relevance of many of the claims put forth by Google’s critics. Nevertheless, the politics are such that their nonsensical claims continue, in different forums, with competitors continuing to hope that they can wrangle a regulatory solution to their competitive problem.

The case against Google rested on certain assumptions about the functioning of the markets in which Google operates. Because these are tech markets, constantly evolving and complex, most assumptions about the scope of these markets and competitive effects within them are imperfect at best. But there are some attributes of Google’s markets—conveniently left out of the critics’ complaints—that, properly understood, painted a picture for the FTC that undermined the basic, essential elements of an antitrust case against the company.

That case was seriously undermined by the nature and extent of competition in the markets the FTC was investigating. Most importantly, casual references to a “search market” and “search advertising market” aside, Google actually competes in the market for targeted eyeballs: a market for delivering targeted ads to interested users. Search offers a valuable opportunity for targeting an advertiser’s message, but it is by no means alone: there are myriad (and growing) other mechanisms to access consumers online.

Consumers use Google because they are looking for information — but there are lots of ways to do that. There are plenty of apps that circumvent Google, and consumers are increasingly going to specialized sites to find what they are looking for. The search market, if a distinct one ever existed, has evolved into an online information market that includes far more players than those who just operate traditional search engines.

We live in a world where what prevails today won’t prevail tomorrow. The tech industry is constantly changing, and it is the height of folly (and a serious threat to innovation and consumer welfare) to constrain the activities of firms competing in such an environment by pigeonholing the market. In other words, in a proper market, Google looks significantly less dominant. More important, perhaps, as search itself evolves, and as Facebook, Amazon and others get into the search advertising game, Google’s strong position even in the overly narrow “search market” is far from unassailable.

This is progress — creative destruction — not regress, and such changes should not be penalized.

Another common refrain from Google’s critics was that Google’s access to immense amounts of data used to increase the quality of its targeting presented a barrier to competition that no one else could match, thus protecting Google’s unassailable monopoly. But scale comes in lots of ways.

Even if scale doesn’t come cheaply, the fact that challenging firms might have to spend the same as (or, in this case, almost certainly less than) Google did in order to replicate its success is not a “barrier to entry” that requires an antitrust remedy. Data about consumer interests is widely available (despite efforts to reduce the availability of such data in the name of protecting “privacy”—which might actually create barriers to entry). It’s never been the case that a firm has to generate its own inputs for every product it produces — and there’s no reason to suggest search or advertising is any different.

Additionally, to defend a claim of monopolization, it is generally required to show that the alleged monopolist enjoys protection from competition through barriers to entry. In Google’s case, the barriers alleged were illusory. Bing and other recent entrants in the general search business have enjoyed success precisely because they were able to obtain the inputs (in this case, data) necessary to develop competitive offerings.

Meanwhile, unanticipated competitors like Facebook, Amazon, Twitter and others continue to knock at Google’s metaphorical door, all of them entering into competition with Google using creatively sourced data, and all of them potentially besting Google in the process. Consider, for example, Amazon’s recent move into the targeted advertising market, competing with Google to place ads on websites across the Internet, but with the considerable advantage of being able to target ads based on searches or purchases a user has made on Amazon—the world’s largest product search engine.

Now that the investigation has concluded, we come away with two major findings. First, the online information market is dynamic, and it is a fool’s errand to identify the power or significance of any player in these markets based on data available today — data that is already out of date between the time it is collected and the time it is analyzed.

Second, each development in the market – whether offered by Google or its competitors and whether facilitated by technological change or shifting consumer preferences – has presented different, novel and shifting opportunities and challenges for companies interested in attracting eyeballs, selling ad space and data, earning revenue and obtaining market share. To say that Google dominates “search” or “online advertising” missed the mark precisely because there was simply nothing especially antitrust-relevant about either search or online advertising. Because of their own unique products, innovations, data sources, business models, entrepreneurship and organizations, all of these companies have challenged and will continue to challenge the dominant company — and the dominant paradigm — in a shifting and evolving range of markets.

It would be churlish not to give credit where credit is due—and credit is due the FTC. I continue to think the investigation should have ended before it began, of course, but the FTC is to be commended for reaching this result amidst an overwhelming barrage of pressure to “do something.”

But there are others in this sadly politicized mess for whom neither the facts nor the FTC’s extensive investigation process (nor the finer points of antitrust law) are enough. Like my four-year-old daughter, they just “want what they want,” and they will stamp their feet until they get it.

While competitors will be competitors—using the regulatory system to accomplish what they can’t in the market—they do a great disservice to the very customers they purport to be protecting in doing so. As Milton Friedman famously said, in decrying “The Business Community’s Suicidal Impulse”:

As a believer in the pursuit of self-interest in a competitive capitalist system, I can’t blame a businessman who goes to Washington and tries to get special privileges for his company.… Blame the rest of us for being so foolish as to let him get away with it.

I do blame businessmen when, in their political activities, individual businessmen and their organizations take positions that are not in their own self-interest and that have the effect of undermining support for free private enterprise. In that respect, businessmen tend to be schizophrenic. When it comes to their own businesses, they look a long time ahead, thinking of what the business is going to be like 5 to 10 years from now. But when they get into the public sphere and start going into the problems of politics, they tend to be very shortsighted.

Ironically, Friedman was writing about the antitrust persecution of Microsoft by its rivals back in 1999:

Is it really in the self-interest of Silicon Valley to set the government on Microsoft? Your industry, the computer industry, moves so much more rapidly than the legal process, that by the time this suit is over, who knows what the shape of the industry will be.… [Y]ou will rue the day when you called in the government.

Among Microsoft’s chief tormentors was Gary Reback. He’s spent the last few years beating the drum against Google—but singing from the same song book. Reback recently told the Washington Post, “if a settlement were to be proposed that didn’t include search, the institutional integrity of the FTC would be at issue.” Actually, no it wouldn’t. As a matter of fact, the opposite is true. It’s hard to imagine an agency under more pressure, from more quarters (including the Hill), to bring a case around search. Doing so would at least raise the possibility that it was acting because of pressure and not the merits of the case. But not doing so in the face of such pressure? That can almost only be a function of institutional integrity.

As another of Google’s most-outspoken critics, Tom Barnett, noted:

[The FTC has] really put [itself] in the position where they are better positioned now than any other agency in the U.S. is likely to be in the immediate future to address these issues. I would encourage them to take the issues as seriously as they can. To the extent that they concur that Google has violated the law, there are very good reasons to try to address the concerns as quickly as possible.

As Barnett acknowledges, there is no question that the FTC investigated these issues more fully than anyone. The agency’s institutional culture and its committed personnel, together with political pressure, media publicity and endless competitor entreaties, virtually ensured that the FTC took the issues “as seriously as they [could]” – in fact, as seriously as anyone else in the world. There is simply no reasonable way to criticize the FTC for being insufficiently thorough in its investigation and conclusions.

Nor is there a basis for claiming that the FTC is “standing in the way” of the courts’ ability to review the issue, as Scott Cleland contends in an op-ed in The Hill. Frankly, this is absurd. Google’s competitors have spent millions pressuring the FTC to bring a case. But the FTC isn’t remotely the only path to the courts. As Commissioner Rosch admonished,

They can darn well bring [a case] as a private antitrust action if they think their ox is being gored instead of free-riding on the government to achieve the same result.

Competitors have already beaten a path to the DOJ’s door, and investigations are still pending in the EU, Argentina, several US states, and elsewhere. That the agency that has leveled the fullest and best-informed investigation has concluded that there is no “there” there should give these authorities pause, but, sadly for consumers who would benefit from an end to competitors’ rent seeking, nothing the FTC has done actually prevents courts or other regulators from having a crack at Google.

The case against Google has received more attention from the FTC than the merits of the case ever warranted. It is time for Google’s critics and competitors to move on.

[Crossposted at Forbes.com]

As the Google antitrust discussion heats up on its way toward some culmination at the FTC, I thought it would be helpful to address some of the major issues raised in the case by taking a look at what’s going on in the market(s) in which Google operates. To this end, I have penned a lengthy document — The Market Realities that Undermine the Antitrust Case Against Google — highlighting some of the most salient aspects of current market conditions and explaining how they fit into the putative antitrust case against Google.

While not dispositive, these “realities on the ground” do strongly challenge the logic and thus the relevance of many of the claims put forth by Google’s critics. The case against Google rests on certain assumptions about how the markets in which it operates function. But these are tech markets, constantly evolving and complex; most assumptions (and even “conclusions” based on data) are imperfect at best. In this case, the conventional wisdom with respect to Google’s alleged exclusionary conduct, the market in which it operates (and allegedly monopolizes), and the claimed market characteristics that operate to protect its position (among other things) should be questioned.

The reality is far more complex, and, properly understood, paints a picture that undermines the basic, essential elements of an antitrust case against the company.

The document first assesses the implications for Market Definition and Monopoly Power of these competitive realities. Of note:

  • Users turn to Google because they are looking for information — but there are lots of ways to do that, and “search” is not so distinct that a “search market” instead of, say, an “online information market” (or something similar) makes sense.
  • Google competes in the market for targeted eyeballs: a market aimed to offer up targeted ads to interested users. Search is important in this, but it is by no means alone, and there are myriad (and growing) other mechanisms to access consumers online.
  • To define the relevant market in terms of the particular mechanism that prevails to accomplish the matching of consumers and advertisers does not reflect the substitutability of other mechanisms that do the same thing but simply aren’t called “search.”
  • In a world where what prevails today won’t — not “might not,” but won’t — prevail tomorrow, it is the height of folly (and a serious threat to innovation and consumer welfare) to constrain the activities of firms competing in such an environment by pigeonholing the market.
  • In other words, in a proper market, Google looks significantly less dominant. More important, perhaps, as search itself evolves, and as Facebook, Amazon and others get into the search advertising game, Google’s strong position even in the overly narrow “search” market looks far from unassailable.

Next I address Anticompetitive Harm — how the legal standard for antitrust harm is undermined by a proper understanding of market conditions:

  • Antitrust law doesn’t require that Google or any other large firm make life easier for competitors or others seeking to access resources owned by these firms.
  • Advertisers are increasingly targeting not paid search but rather social media to reach their target audiences.
  • But even for those firms that get much or most of their traffic from “organic” search, this fact isn’t an inevitable relic of a natural condition over which only the alleged monopolist has control; it’s a business decision, and neither sensible policy nor antitrust law is set up to protect the failed or faulty competitor from himself.
  • Although it often goes unremarked, paid search’s biggest competitor is almost certainly organic search (and vice versa). Nextag may complain about spending money on paid ads when it prefers organic, but the real lesson here is that the two are substitutes — along with social sites and good old-fashioned email, too.
  • It is incumbent upon critics to accurately assess the “but for” world without the access point in question. Here, Nextag can and does use paid ads to reach its audience (and, it is important to note, did so even before it claims it was foreclosed from Google’s users). But there are innumerable other avenues of access, as well. Some may be “better” than others; some that may be “better” now won’t be next year (think how links by friends on Facebook to price comparisons on Nextag pages could come to dominate its readership).
  • This is progress — creative destruction — not regress, and such changes should not be penalized.

Next I take on the perennial issue of Error Costs and the Risks of Erroneous Enforcement arising from an incomplete and inaccurate understanding of Google’s market:

  • Microsoft’s market position was unassailable . . . until it wasn’t — and even at the time, many could have told you that its perceived dominance was fleeting (and many did).
  • Apple’s success (and the consumer value it has created), while built in no small part on its direct competition with Microsoft and the desktop PCs which run it, was primarily built on a business model that deviated from its once-dominant rival’s — and not on a business model that the DOJ’s antitrust case against the company either facilitated or anticipated.
  • Microsoft and Google’s other critic-competitors have more avenues to access users than ever before. Who cares if users get to these Google-alternatives through their devices instead of a URL? Access is access.
  • It isn’t just monopolists who prefer not to innovate: their competitors do, too. To the extent that Nextag’s difficulties arise from Google innovating, it is Nextag, not Google, that’s working to thwart innovation and fighting against dynamism.
  • Recall the furor around Google’s purchase of ITA, a powerful cautionary tale. As of September 2012, Google ranks 7th in visits among metasearch travel sites, with a paltry 1.4% of such visits. Residing at number one? FairSearch founding member, Kayak, with a whopping 61%. And how about FairSearch member Expedia? Currently, it’s the largest travel company in the world, and it has only grown in recent years.

The next section addresses the essential issue of Barriers to Entry and their absence:

  • One common refrain from Google’s critics is that Google’s access to immense amounts of data used to increase the quality of its targeting presents a barrier to competition that no one else can match, thus protecting Google’s unassailable monopoly. But scale comes in lots of ways.
  • It’s never been the case that a firm has to generate its own inputs into every product it produces — and there is no reason to suggest search/advertising is any different.
  • Meanwhile, Google’s chief competitor, Microsoft, is hardly hurting for data (even, quite creatively, culling data directly from Google itself), despite its claims to the contrary. And while regulators and critics may be looking narrowly and statically at search data, Microsoft is meanwhile sitting on top of copious data from unorthodox — and possibly even more valuable — sources.
  • To defend a claim of monopolization, it is generally required to show that the alleged monopolist enjoys protection from competition through barriers to entry. In Google’s case, the barriers alleged are illusory.

The next section takes on recent claims revolving around The Mobile Market and Google’s position (and conduct) there:

  • If obtaining or preserving dominance is simply a function of cash, Microsoft is sitting on some $58 billion of it that it can devote to that end. And JP Morgan Chase would be happy to help out if it could be guaranteed monopoly returns just by throwing its money at Bing. Like data, capital is widely available, and, also like data, it doesn’t matter if a company gets it from selling search advertising or from selling cars.
  • Advertisers don’t care whether the right (targeted) user sees their ads while playing Angry Birds or while surfing the web on their phone, and users can (and do) seek information online (and thus reveal their preferences) just as well (or perhaps better) through Wikipedia’s app as via a Google search in a mobile browser.
  • Moreover, mobile is already (and increasingly) a substitute for the desktop. Distinguishing mobile search from desktop search is meaningless when users use their tablets at home, perform activities that they would have performed at home away from home on mobile devices simply because they can, and where users sometimes search for places to go (for example) on mobile devices while out and sometimes on their computers before they leave.
  • Whatever gains Google may have made in search from its spread into the mobile world are likely to be undermined by the massive growth in social connectivity it has also wrought.
  • Mobile is part of the competitive landscape. All of the innovations in mobile present opportunities for Google and its competitors to best each other, and all present avenues of access for Google and its competitors to reach consumers.

The final section Concludes.

The lessons from all of this? There are two. First, these are dynamic markets, and it is a fool’s errand to identify the power or significance of any player in these markets based on data available today — data that is already out of date between the time it is collected and the time it is analyzed.

Second, each of these developments has presented different, novel and shifting opportunities and challenges for firms interested in attracting eyeballs, selling ad space and data, earning revenue and obtaining market share. To say that Google dominates “search” or “online advertising” misses the mark precisely because there is simply nothing especially antitrust-relevant about either search or online advertising. Because of their own unique products, innovations, data sources, business models, entrepreneurship and organizations, all of these companies have challenged and will continue to challenge the dominant company — and the dominant paradigm — in a shifting and evolving range of markets.

Perhaps most important is this:

Competition with Google may not and need not look exactly like Google itself, and some of this competition will usher in innovations that Google itself won’t be able to replicate. But this doesn’t make it any less competitive.  

Competition need not look identical to be competitive — that’s what innovation is all about. Just ask those famous buggy whip manufacturers.

On Tuesday the European Commission opened formal proceedings against Motorola Mobility based on its patent licensing practices surrounding some of its core cellular telephony, Internet video and Wi-Fi technology. The Commission’s concerns, echoing those raised by Microsoft and Apple, center on Motorola’s allegedly high royalty rates and its efforts to use injunctions to enforce the “standards-essential patents” at issue.

As it happens, this development is just the latest, like so many in the tech world these days, in Microsoft’s ongoing regulatory, policy and legal war against Google, which announced in August it was planning to buy Motorola.

Microsoft’s claim, echoed by the Commission, that Motorola’s royalty offer was, in Microsoft’s colorful phrase, “so over-reaching that no rational company could ever have accepted it or even viewed it as a legitimate offer,” is misplaced. Motorola is seeking a royalty rate for its patents that is seemingly in line with customary rates.

In fact, Microsoft’s claim that Motorola’s royalty ask is extraordinary is refuted by its own conduct. As one commentator notes:

Microsoft complained that it might have to pay a tribute of up to $22.50 for every $1,000 laptop sold, and suggested that it might be fairer to pay just a few cents. This is the firm that is thought to make $10 to $15 from every $500 Android device that is sold, and for a raft of trivial software patents, not standard essential ones.

Seemingly forgetting this, Microsoft criticizes Motorola’s royalty ask on its 50 H.264 video codec patents by comparing it to the amount Microsoft pays for more than 2000 other patents in the video codec’s patent pool, claiming that the former would cost it $4 billion while the latter costs it only $6.5 million. But this is comparing apples and oranges. It is not surprising to find some patents worth orders of magnitude more than others and to find that license rates are a complicated function of the contracting parties’ particular negotiating positions and circumstances. It is no more inherently inappropriate for Microsoft to rake in 2-3% of the price of every Nook Barnes & Noble sells than it is for Motorola to net 2.25% of the price of each Windows-operated computer sold – which is the royalty rate Motorola is seeking and which Microsoft wants declared anticompetitive out of hand.
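The arithmetic here is worth making explicit. Using only the figures quoted above (the $22.50-per-$1,000-laptop royalty Microsoft complained of, and the $10–$15 Microsoft is thought to collect on each $500 Android device), a purely illustrative sketch of the effective rates:

```python
# Illustrative arithmetic only, using the figures quoted in the text above.

def royalty_rate(royalty: float, device_price: float) -> float:
    """Effective royalty expressed as a percentage of the device's price."""
    return 100 * royalty / device_price

# Motorola's ask, as Microsoft characterized it: $22.50 on a $1,000 laptop.
motorola = royalty_rate(22.50, 1000)      # 2.25% -- the rate Motorola seeks

# Microsoft's own reported take: $10 to $15 on a $500 Android device.
microsoft_low = royalty_rate(10, 500)     # 2.0%
microsoft_high = royalty_rate(15, 500)    # 3.0%

print(f"Motorola: {motorola:.2f}%")                                   # 2.25%
print(f"Microsoft on Android: {microsoft_low:.1f}%-{microsoft_high:.1f}%")  # 2.0%-3.0%
```

In other words, the rate Motorola seeks sits squarely inside the range Microsoft itself reportedly collects on Android devices.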

It’s not clear how much negotiation, if any, has taken place between the companies over the terms of Microsoft’s licensing of Motorola’s patents, but what is clear is that Microsoft’s complaint, echoed by the EC, is based on the size of Motorola’s initial royalty demand and its use of a legal injunction to enforce its patent rights. Unfortunately, neither of these is particularly problematic, especially in an environment where companies like Microsoft and Apple aggressively wield exactly such tools to gain a competitive negotiating edge over their own competitors.

The court adjudicating this dispute in the ongoing litigation in U.S. district court in Washington has thus far agreed. The court denied Microsoft’s request for summary judgment that Motorola’s royalty demand violated its RAND commitment, noting its disagreement with Microsoft’s claim that “it is always facially unreasonable for a proposed royalty rate to result in a larger royalty payment for products that have higher end prices. Indeed, Motorola has previously entered into licensing agreements for its declared-essential patents at royalty rates similar to those offered to Microsoft and with royalty rates based on the price of the end product.”

The staggering aggregate numbers touted by Microsoft in its complaint and repeated by bloggers and journalists the world over are not a function of Motorola seeking an exorbitant royalty but rather a function of Microsoft’s selling a lot of operating systems and earning a lot of revenue doing it. While the aggregate number ($4 billion, according to Microsoft) is huge, it is, as the court notes, based on a royalty rate that is in line with similar agreements.

The court also takes issue with Microsoft’s contention that the mere offer of allegedly unreasonable terms constitutes a breach of Motorola’s RAND commitment to license its patents on commercially reasonable terms. Quite sensibly, the court notes:

[T]he court is mindful that at the time of an initial offer, it is difficult for the offeror to know what would in fact constitute RAND terms for the offeree. Thus, what may appear to be RAND terms from the offeror’s perspective may be rejected out-of-hand as non-RAND terms by the offeree. Indeed, it would appear that at any point in the negotiation process, the parties may have a genuine disagreement as to what terms and conditions of a license constitute RAND under the parties’ unique circumstances.

Resolution of such an impasse may ultimately fall to the courts. Thus the royalty rate issue is in fact closely related to the second issue raised by the EC’s investigation: the use or threat of injunction to enforce standards-essential patents.

While some scholars and many policy advocates claim that injunctions in the standards context raise the specter of costly hold-ups (patent holders extracting not only the market value of their patent, but also a portion of the costs that the infringer would incur if it had to implement its technology without the patent), there is no empirical evidence supporting the claim that patent holdup is a pervasive problem.

And the theory doesn’t comfortably support such a claim, either. Motorola, for example, has no interest in actually enforcing an injunction: Doing so is expensive and, notably, not nearly as good for the bottom line as actually receiving royalties from an agreed-upon contract. Instead, injunctions are, just like the more-attenuated liability suit for patent infringement, a central aspect of our intellectual property system, the means by which innovators and their financiers can reasonably expect a return on their substantial up-front investments in technology development.

Moreover, and apparently unbeknownst to those who claim that injunctions are the antithesis of negotiated solutions to licensing contests, the threat of injunction actually facilitates efficient transacting. Injunctions provide clearer penalties than damage awards for failing to reach consensus and are thus better at getting both parties to the table with matched expectations. And this is especially true in the standards-setting context, where the relevant parties are generally repeat players and where they very often both hold patents to license and need to license patents essential to the standard—both of which help to induce everyone to come to the table, lest they find themselves closed off from patents essential to their own products.

Antitrust intervention in standard setting negotiations based on an allegedly high initial royalty rate offer or the use of an injunction to enforce a patent is misdirected and costly. One of the clearest statements of the need for antitrust restraint in the standard setting context comes from a June 2011 comment filed with the FTC:

[T]he existence of a RAND commitment to offer patent licenses should not preclude a patent holder from seeking preliminary injunctive relief. . . . Any uniform declaration that such relief would not be available if the patent holder has made a commitment to offer a RAND license for its essential patent claims in connection with a standard may reduce any incentives that implementers might have to engage in good faith negotiations with the patent holder.

Most of the SSOs and their stakeholders that have considered these proposals over the years have determined that there are only a limited number of situations where patent hold-up takes place in the context of standards-setting. The industry has determined that those situations generally are best addressed through bi-lateral negotiation (and, in rare cases, litigation) as opposed to modifying the SSO’s IPR policy [by precluding injunctions or mandating a particular negotiation process].

The statement’s author? Why, Microsoft, of course.

Patents are an important tool for encouraging the development and commercialization of advanced technology, as are standard setting organizations. Antitrust authorities should exercise great restraint before intervening in the complex commercial negotiations over technology patents and standards. In Motorola’s case, the evidence of conduct that might harm competition is absent, and all that remains are, in essence, allegations that Motorola is bargaining hard and enforcing its property rights. The EC should let competition run its course.

In my series of three posts (here, here and here) drawn from my empirical study on search bias I have examined whether search bias exists, and, if so, how frequently it occurs.  This, the final post in the series, assesses the results of the study (as well as the Edelman & Lockwood (E&L) study to which it responds) to determine whether the own-content bias I’ve identified is in fact consistent with anticompetitive foreclosure or is otherwise sufficient to warrant antitrust intervention.

As I’ve repeatedly emphasized, while I refer to differences among search engines’ rankings of their own or affiliated content as “bias,” without more these differences do not imply anticompetitive conduct.  It is wholly unsurprising and indeed consistent with vigorous competition among engines that differentiation emerges with respect to algorithms.  However, it is especially important to note that the theories of anticompetitive foreclosure raised by Google’s rivals involve very specific claims about these differences.  Properly articulated vertical foreclosure theories proffer both that bias is (1) sufficient in magnitude to exclude Google’s rivals from achieving efficient scale, and (2) actually directed at Google’s rivals.  Unfortunately for search engine critics, their theories fail on both counts.  The observed own-content bias appears neither to be extensive enough to prevent rivals from gaining access to distribution nor does it appear to target Google’s rivals; rather, it seems to be a natural result of intense competition between search engines and of significant benefit to consumers.

Vertical foreclosure arguments are premised upon the notion that rivals are excluded with sufficient frequency and intensity as to render their efforts to compete for distribution uneconomical.  Yet the empirical results simply do not indicate that market conditions are in fact conducive to the types of harmful exclusion contemplated by application of the antitrust laws.  Rather, the evidence indicates that (1) the absolute level of search engine “bias” is extremely low, and (2) “bias” is not a function of market power, but an effective strategy that has arisen as a result of serious competition and innovation between and by search engines.  The first finding undermines competitive foreclosure arguments on their own terms, that is, even if there were no pro-consumer justifications for the integration of Google content with Google search results.  The second finding, even more importantly, reveals that the evolution of consumer preferences for more sophisticated and useful search results has driven rival search engines to satisfy that demand.  Both Bing and Google have shifted toward these results, rendering the complained-of conduct equivalent to satisfying the standard of care in the industry–not restraining competition.

A significant lack of search bias emerges in the representative sample of queries.  This result is entirely unsurprising, given that bias is relatively infrequent even in E&L’s sample of queries specifically designed to identify maximum bias.  In the representative sample, the total percentage of queries for which Google references its own content when rivals do not is even lower—only about 8%—meaning that Google favors its own content far less often than critics have suggested.  This fact is crucial and highly problematic for search engine critics, as their burden in articulating a cognizable antitrust harm includes not only demonstrating that bias exists, but further that it is actually competitively harmful.  As I’ve discussed, bias alone is simply not sufficient to demonstrate any prima facie anticompetitive harm as it is far more often procompetitive or competitively neutral than actively harmful.  Moreover, given that bias occurs in less than 10% of queries run on Google, anticompetitive exclusion arguments appear unsustainable.

Indeed, theories of vertical foreclosure find virtually zero empirical support in the data.  Moreover, it appears that, rather than being a function of monopolistic abuse of power, search bias has emerged as an efficient competitive strategy, allowing search engines to differentiate their products in ways that benefit consumers.  I find that when search engines do reference their own content on their search results pages, it is generally unlikely that another engine will reference this same content.  However, the fact that both this percentage and the absolute level of own-content inclusion are similar across engines indicates that this practice is not a function of market power (or its abuse), but is rather an industry standard.  In fact, despite conducting a much smaller percentage of total consumer searches, Bing is consistently more biased than Google, illustrating that the benefits search engines enjoy from integrating their own content into results are not necessarily a function of search engine size or volume of queries.  These results are consistent with a business practice that is efficient, and are in significant tension with arguments that such integration is designed to facilitate competitive foreclosure.

My last two posts on search bias (here and here) have analyzed and critiqued Edelman & Lockwood’s small study on search bias.  This post extends this same methodology and analysis to a random sample of 1,000 Google queries (released by AOL in 2006), to develop a more comprehensive understanding of own-content bias.  As I’ve stressed, these analyses provide useful—but importantly limited—glimpses into the nature of the search engine environment.  While these studies are descriptively helpful, actual harm to consumer welfare must always be demonstrated before cognizable antitrust injuries arise.  And naked identifications of own-content bias simply do not inherently translate to negative effects on consumers (see, e.g., here and here for more comprehensive discussion).

Now that that’s settled, let’s jump into the results of the 1,000 random search query study.

How Do Search Engines Rank Their Own Content?

Consistent with our earlier analysis, a starting point for measuring differentiation among search engines with respect to placing their own content is to compare how a search engine ranks its own content relative to how other engines place that same content (e.g., how Google ranks “Google Maps” relative to how Bing or Blekko rank it).  Restricting attention exclusively to the first or “top” position, I find that Google simply does not refer to its own content in over 90% of queries.  Similarly, Bing does not reference Microsoft content in 85.4% of queries.  Google refers to its own content in the first position when other search engines do not in only 6.7% of queries, while Bing does so over twice as often, referencing Microsoft content that no other engine references in the first position in 14.3% of queries.  The following two charts illustrate the percentage of Google or Bing first-position results, respectively, dedicated to own content across search engines.
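The comparison just described can be sketched as a simple tally. The snippet below is purely illustrative: the queries, URLs, and domain lists are invented (the actual study ran real queries against each engine), and the function names are my own.

```python
# Hypothetical first-position results for three engines on three queries.
# In the actual study these came from live searches; everything here is
# made up solely to illustrate the tally.
results = {
    "maps":    {"google": "maps.google.com", "bing": "maps.google.com", "blekko": "mapquest.com"},
    "email":   {"google": "mail.google.com", "bing": "outlook.com",     "blekko": "mail.google.com"},
    "weather": {"google": "weather.com",     "bing": "msn.com/weather", "blekko": "weather.com"},
}

# Toy mapping of each engine to domains counted as its "own content".
OWN = {"google": ("google.com",), "bing": ("msn.com", "outlook.com", "bing.com")}

def is_own(engine, url):
    """True if the URL points at the engine's own (affiliated) content."""
    return any(domain in url for domain in OWN[engine])

def solo_own_first(engine, rivals):
    """Fraction of queries where `engine` puts its own content in the
    first position and no rival puts that same content first."""
    hits = 0
    for ranks in results.values():
        url = ranks[engine]
        if is_own(engine, url) and all(ranks[r] != url for r in rivals):
            hits += 1
    return hits / len(results)

print(solo_own_first("google", ["bing", "blekko"]))  # 0.0 on this toy data
print(solo_own_first("bing", ["google", "blekko"]))  # ~0.667 on this toy data
```

Scaled up to the full 1,000-query sample, this is the kind of count behind the 6.7% (Google) and 14.3% (Bing) figures above.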

The most striking aspect of these results is the small fraction of queries for which placement of own-content is relevant.  The results are similar when I expand consideration to the entire first page of results; interestingly, however, while the levels of own-content bias are similar considering the entire first page of results, Bing is far more likely than Google to reference its own content in its very first results position.

Examining Search Engine “Bias” on Google

Two distinct differences between the results of this larger study and my replication of Edelman & Lockwood emerge: (1) Google and Bing refer to their own content in a significantly smaller percentage of cases here than in the non-random sample; and (2) in general, when Google or Bing does rank its own content highly, rival engines are unlikely to similarly rank that same content.

The following table reports the percentages of queries for which Google’s ranking of its own content and its rivals’ rankings of that same content differ significantly. When Google refers to its own content within its Top 5 results, at least one other engine similarly ranks this content for only about 5% of queries.

The following table presents the likelihood that Google content will appear in a Google search, relative to searches conducted on rival engines (reported in odds ratios).

The first and third columns report results indicating that Google-affiliated content is more likely to appear in a search executed on Google than on rival engines.  Google is approximately 16 times more likely to refer to its own content on its first page than is any other engine.  Bing and Blekko are both significantly less likely to refer to Google content in their first result or on their first page than Google is to refer to Google content within these same parameters.  In each iteration, Bing is more likely to refer to Google content than is Blekko, and in the case of the first result, Bing is much more likely to do so.  Again, to be clear, the fact that Bing is more likely to rank its own content does not suggest that the practice is problematic.  Quite the contrary: given that firms both with and without market power in search (to the extent that is a relevant antitrust market) engage in similar conduct, the correct inference is that there must be efficiency explanations for the practice.  The standard response, of course, is that the competitive implications of a practice are different when a firm with market power does it.  That’s not exactly right.  It is true that firms with market power can engage in conduct that gives rise to potential antitrust problems when the same conduct from a firm without market power would not; however, when firms without market power engage in the same business practice, it demands that antitrust analysts seriously consider the efficiency implications of the practice.  In other words, there is nothing in the mantra that things are “different” when larger firms do them that undercuts potential efficiency explanations.
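For readers unfamiliar with odds ratios, the arithmetic behind a table like the one above is straightforward. The counts below are invented, chosen only so the result matches the ~16x magnitude reported; they are not the study’s data.

```python
# Hypothetical 2x2 counts: queries where Google content did / did not
# appear on the first page of results. Numbers are invented for
# illustration and chosen to yield a ratio near the ~16x reported above.
google_yes, google_no = 80, 20   # Google content on Google's first page
rival_yes,  rival_no  = 20, 80   # Google content on a rival's first page

def odds(yes, no):
    """Odds of an event: probability it occurs over probability it doesn't."""
    return yes / no

# Odds ratio: how many times larger the odds are on Google than on a rival.
odds_ratio = odds(google_yes, google_no) / odds(rival_yes, rival_no)
print(odds_ratio)  # 16.0
```

A logistic regression on query-level indicator variables, as used in the study, produces the same quantity in the simple two-group case (the exponentiated coefficient equals this ratio).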

Examining Search Engine “Bias” on Bing

For queries within the larger sample, Bing refers to Microsoft content within its Top 1 and 3 results when no other engine similarly references this content for a slightly smaller percentage of queries than in my Edelman & Lockwood replication.  Yet Bing continues to exhibit a strong tendency to rank Microsoft content more prominently than rival engines.  For example, when Bing refers to Microsoft content within its Top 5 results, other engines agree with this ranking for less than 2% of queries; and Bing refers to Microsoft content that no other engine does within its Top 3 results for 99.2% of queries:

Regression analysis further illustrates Bing’s propensity to reference Microsoft content that rivals do not.  The following table reports the likelihood that Microsoft content is referred to in a Bing search as compared to searches on rival engines (again reported in odds ratios).

Bing refers to Microsoft content in its first results position about 56 times more often than rival engines refer to Microsoft content in this same position.  Across the entire first page, Microsoft content appears on a Bing search about 25 times more often than it does on any other engine.  Both Google and Blekko are accordingly significantly less likely to reference Microsoft content.  Notice further that, contrary to the findings in the smaller study, Google is slightly less likely to return Microsoft content than is Blekko, both in its first results position and across its entire first page.

A Closer Look at Google v. Bing

Consistent with the smaller sample, I find again that Bing is more biased than Google using these metrics.  In other words, Bing ranks its own content significantly more highly than its rivals do more frequently than Google does, although the discrepancy between the two engines is smaller here than in the study of Edelman & Lockwood’s queries.  As noted above, Bing is over twice as likely as Google to refer to its own content in the first results position.

Figures 7 and 8 present the same data reported above, but with Blekko removed, to allow for a direct visual comparison of own-content bias between Google and Bing.

Consistent with my earlier results, Bing ranks Microsoft content above Google’s ranking of that same (Microsoft) content more frequently than Google ranks Google content above Bing’s ranking of that same (Google) content.

This result is particularly interesting given the strength of the accusations condemning Google for behaving in precisely this way.  That Bing references Microsoft content just as often as—and frequently even more often than!—Google references its own content strongly suggests that this behavior is a function of procompetitive product differentiation, and not abuse of market power.  But I’ll save an in-depth analysis of this issue for my next post, where I’ll also discuss whether any of the results reported in this series of posts support anticompetitive foreclosure theories or otherwise suggest antitrust intervention is warranted.

In my last post, I discussed Edelman & Lockwood’s (E&L’s) attempt to catch search engines in the act of biasing their results—as well as their failure to actually do so.  In this post, I present my own results from replicating their study.  Unlike E&L, I find that Bing is consistently more biased than Google, for reasons discussed further below, although neither engine references its own content as frequently as E&L suggest.

I ran searches for E&L’s original 32 non-random queries using three different search engines—Google, Bing, and Blekko—between June 23 and July 5 of this year.  This replication is useful, as search technology has changed dramatically since E&L recorded their results in August 2010.  Bing now powers Yahoo, and Blekko has had more time to mature and enhance its results.  Blekko serves as a helpful “control” engine in my study, as it is totally independent of Google and Microsoft, and so has no incentive to refer to Google or Microsoft content unless it is actually relevant to users.  In addition, because Blekko’s model is significantly different from Google’s and Microsoft’s, if results on all three engines agree that specific content is highly relevant to the user query, it lends significant credibility to the notion that the content places well on the merits rather than being attributable to bias or other factors.

How Do Search Engines Rank Their Own Content?

Focusing solely upon the first position, Google refers to its own products or services when no other search engine does in 21.9% of queries; in another 21.9% of queries, both Google and at least one other search engine rival (i.e. Bing or Blekko) refer to the same Google content with their first links.

But restricting focus to the first position is too narrow.  It would be a mistake to treat every instance in which Google or Bing ranks its own content first, and rivals do not, as bias; such a restrictive definition would include cases in which all three search engines rank the same content prominently (agreeing that it is highly relevant), although not all in the first position.

The entire first page of results provides a more informative comparison.  I find that Google and at least one other engine return Google content on the first page of results in 7% of the queries.  Google refers to its own content on the first page of results without agreement from either rival search engine in only 7.9% of the queries.  Meanwhile, Bing and at least one other engine refer to Microsoft content in 3.2% of the queries.  Bing references Microsoft content without agreement from either Google or Blekko in 13.2% of the queries:

This evidence indicates that Google’s ranking of its own content differs significantly from its rivals in only 7.9% of queries, and that when Google ranks its own content prominently it is generally perceived as relevant.  Further, these results suggest that Bing’s organic search results are significantly more biased in favor of Microsoft content than Google’s search results are in favor of Google’s content.

Examining Search Engine “Bias” on Google

The following table presents the percentages of queries for which Google’s ranking of its own content differs significantly from its rivals’ ranking of that same content.

Note that percentages below 50 in this table indicate that rival search engines generally see the referenced Google content as relevant and independently believe that it should be ranked similarly.

So when Google ranks its own content highly, at least one rival engine typically agrees with this ranking; for example, when Google places its own content in its Top 3 results, at least one rival agrees with this ranking in over 70% of queries.  Bing especially agrees with Google’s rankings of Google content within its Top 3 and 5 results, failing to include Google content that Google ranks similarly in only a little more than a third of queries.

Examining Search Engine “Bias” on Bing

Bing refers to Microsoft content in its search results far more frequently than its rivals reference the same Microsoft content.  For example, Bing’s top result references Microsoft content for 5 queries, while neither Google nor Blekko ever rank Microsoft content in the first position:

This table illustrates the significant discrepancies between Bing’s treatment of its own Microsoft content relative to Google and Blekko.  Neither rival engine refers to Microsoft content Bing ranks within its Top 3 results; Google and Blekko do not include any Microsoft content Bing refers to on the first page of results in nearly 80% of queries.

Moreover, Bing frequently ranks Microsoft content highly even when rival engines do not refer to the same content at all in the first page of results.  For example, of the 5 queries for which Bing ranks Microsoft content in its top result, Google refers to only one of these 5 within its first page of results, while Blekko refers to none.  Even when comparing results across each engine’s full page of results, Google and Blekko only agree with Bing’s referral of Microsoft content in 20.4% of queries.

Although there are not enough Bing data to test results in the first position in E&L’s sample, Microsoft content appears as results on the first page of a Bing search about 7 times more often than Microsoft content appears on the first page of rival engines.  Also, Google is much more likely to refer to Microsoft content than Blekko, though both refer to significantly less Microsoft content than Bing.

A Closer Look at Google v. Bing

On E&L’s own terms, Bing results are more biased than Google results; rivals are more likely to agree with Google’s algorithmic assessment (than with Bing’s) that its own content is relevant to user queries.  Bing refers to Microsoft content that other engines do not rank at all more often than Google refers to its own content without any agreement from rivals.  Figures 1 and 2 display the same data presented above in order to facilitate direct comparisons between Google and Bing.

As Figures 1 and 2 illustrate, Bing search results for these 32 queries are more frequently “biased” in favor of its own content than are Google’s.  The bias is greatest for the Top 1 and Top 3 search results.

My study finds that Bing exhibits far more “bias” than E&L identify in their earlier analysis.  For example, in E&L’s study, Bing does not refer to Microsoft content at all in its Top 1 or Top 3 results; moreover, Bing refers to Microsoft content within its entire first page 11 times, while Google and Yahoo refer to Microsoft content 8 and 9 times, respectively.  Most likely, the significant increase in Bing’s “bias” differential is largely a function of Bing’s introduction of localized and personalized search results, and represents serious competitive efforts on Bing’s part.

Again, it’s important to stress E&L’s limited and non-random sample, and to emphasize the danger of making strong inferences about the general nature or magnitude of search bias based upon these data alone.  However, the data indicate that Google’s own-content bias is relatively small even in a sample collected precisely to focus upon the queries most likely to generate it.  In fact—as I’ll discuss in my next post—own-content bias occurs even less often in a more representative sample of queries, strongly suggesting that such bias does not raise the competitive concerns attributed to it.

I am disappointed but not surprised to see that my former employer filed an official antitrust complaint against Google in the EU.  The blog post by Microsoft’s GC, Brad Smith, summarizing its complaint is here.

Most obviously, there is a tragic irony to the most antitrust-beleaguered company ever filing an antitrust complaint against its successful competitor.  Of course the specifics are not identical, but all of the atmospheric and general points that Microsoft itself made in response to the claims against it are applicable here.  It smacks of competitors competing not in the marketplace but in the regulators’ offices.  It promotes a kind of weird protectionism, directing the EU’s enforcement powers against a successful US company . . . at the behest of another US competitor.  Regulators will always be fighting last year’s battles to the great detriment of the industry.  Competition and potential competition abound, even where it may not be obvious (Linux for Microsoft; Facebook for Google, for example).  Etc.  Microsoft was once the world’s most powerful advocate for more sensible, restrained, error-cost-based competition policy.  That it now finds itself on the opposite side of this debate is unfortunate for all of us.

Brad’s blog post is eloquent (as he always is) and forceful.  And he acknowledges the irony.  And of course he may be right on the facts.  Unfortunately we’ll have to resort to a terribly-costly, irretrievably-flawed and error-prone process to find out–not that the process is likely to result in a very reliable answer anyway.  Where I think he is most off base is where he draws–and asks regulators to draw–conclusions about the competitive effects of the actions he describes.  It is certain that Google has another story and will dispute most or all of the facts.  But even without that information we can dispute the conclusions that Google’s actions, if true, are necessarily anticompetitive.  In fact, as Josh and I have detailed at length here and here, these sorts of actions–necessitated by the realities of complex, innovative and vulnerable markets and in many cases undertaken by the largest and the smallest competitors alike–are more likely pro-competitive.  More important, efforts to ferret out the anti-competitive among them will almost certainly harm welfare rather than help it–particularly when competitors are welcomed in to the regulators’ and politicians’ offices in the process.

As I said, disappointing.  It is not inherently inappropriate for Microsoft to resort to this simply because it has been the victim of such unfortunate “competition” in the past, nor is Microsoft obligated or expected to demonstrate intellectual or any other sort of consistency.  But knowing what it does about the irretrievable defects of the process and the inevitable costliness of its consequences, it is disingenuous or naive (the Nirvana fallacy) for it to claim that it is simply engaging in a reliable effort to smooth over a bumpy competitive landscape.  That may be the ideal of antitrust enforcement, but no one knows as well as Microsoft that the reality is far from that ideal.  To claim implicitly that, in this case, things will be different is, as I said, disingenuous.  And likely really costly in the end for all of us.

Former TOTM blog symposium participant Joshua Gans (visiting Microsoft Research) has a post at TAP on l’affaire hiybbprqag, about which I blogged previously here.

Gans notes, as I did, that Microsoft is not engaged in wholesale copying of Google’s search results, even though doing so would be technologically feasible.  But Gans goes on to draw a normative conclusion:

Let’s start with “imitation,” “copying” and its stronger variants of “plagiarism” and “cheating.” Had Bing wanted to do this and directly map Google’s search results onto its own, it could have done it. It could have set up programs to enter terms in Google and skimmed off the results and then used them directly. And I think we can all agree that that is wrong. Why? Two reasons. First, if Google has invested to produce those results, if others can just hang off them and copy it, Google’s may not earn the return on its efforts it should do. Second, if Bing were doing this and representing itself as a different kind of search, then that misrepresentation would be misleading. Thus, imitation reduces Google’s reward for innovation while adding no value in terms of diversity.

His first reason why this would be wrong is . . . silly.  I mean, I don’t want to get into a moral debate, but since when is it wrong to engage in activity that “may” hamper another firm’s ability to earn the return on its effort that it “should” (whatever “should” means here)?  I always thought that was called “competition” and we encouraged it.  As I noted the other day, competition via imitation is an important part of Schumpeterian capitalism.  To claim that reducing another company’s profits via imitation is wrong, but doing so via innovation is good and noble, is to hang one’s hat on a distinction that does not really exist.

The second argument, that doing so would amount to misrepresentation, is possible, but I’m sure if Microsoft were actually just copying Google’s results their representations would look different than they do now and the problem would probably not exist, so this claim is speculative, at best.

Now, regardless, I doubt it would be profitable for Microsoft to copy Google wholesale, and this is basically just a red herring (as Gans understands–he goes on to discuss the more “innocuous” imitation at issue).  While I think Gans’ claims that it would be “wrong” are just hand waving, I am confident it would be “wrong” from the point of view of Microsoft’s bottom line–or else they would already be doing it.  In this context, that would seem to be the only standard that matters, unless there were a legal basis for the claim.

On this score, Gans points us to Shane Greenstein (Kellogg).  Greenstein writes:

Let’s start with a weak standard, the law. Legally speaking, imitation is allowed so long as a firm does not violate laws governing patents, copyright, or trade secrets. Patents obviously do not apply to this situation, and neither does copyright because Google does not get a copyright on a search result. It also does not appear as if Google’s trade secrets were violated. So, generally speaking, it does not appear as if any law has been broken.

This is all well and good, but Greenstein goes on to engage in his own casual moralizing, and his comments are worth reproducing (imitating?) at some length:

The norms of rivalry

There is nothing wrong with one retailer walking through a rival’s shop and getting ideas for what to do. There is really nothing wrong with a designer of a piece of electronic equipment buying a rival’s product and studying it in order to get new ideas for a  better design. 

In the modern Internet, however, there is no longer any privacy for users. Providers want to know as much as they can, and generally the rich suppliers can learn quite a lot about user conduct and preferences.

That means that rivals can learn a great deal about how users conduct their business, even when they are at a rival’s site. It is as if one retailer had a camera in a rival’s store, or one designer could learn the names of the buyer’s of their rival’s products, and interview them right away.

In the offline world, such intimate familiarity with a rival’s users and their transactions would be uncomfortable. It would seem like an intrusion on the transaction between user and supplier. Why is it permissible in the online world? Why is there any confusion about this being an intrusion in the online world? Why isn’t Microsoft’s behavior seen — cut and dry — as an intrusion?

In other words, the transaction between supplier and user is between supplier and user, and nobody else should be able to observe it without permission of both supplier and user. The user alone does not have the right or ability to invite another party to observe all aspects of the transaction.

That is what bothers me about Bing’s behavior. There is nothing wrong with them observing users, but they are doing more than just that. They are observing their rival’s transaction with users. And learning from it. In other contexts that would not be allowed without explicit permission of both parties — both user and supplier.

Moreover, one party does not like it in this case, as they claim the transaction with users as something they have a right to govern and keep to themselves. There is some merit in that claim.

In most contexts it seems like the supplier’s wishes should be respected. Why not online? (emphasis mine)

Where on Earth do these moral standards come from?  In what way is it not “allowed” (whatever that means here) for a firm to observe and learn from a rival’s transactions with users?  I can see why the rival would prefer it to be otherwise, of course, but so what?  They would also prefer to eradicate their meddlesome rival entirely, if possible (hence Microsoft’s considerable engagement with antitrust authorities concerning Google’s business), but we hardly elevate such desires to the realm of the moral.

What I find most troublesome is the controlling, regulatory mindset implicit in these analyses.  Here’s Gans again:

Outright imitation of this type should be prohibited but what do we call some more innocuous types? Just look at how the look and feel of the iPhone has been adopted by some mobile software developers just as the consumer success of graphic based interfaces did in an earlier time. This certainly reduces Apple’s reward for its innovations but the hit on diversity is murkier because while some features are common, competitors have tried to differentiate themselves. So this is not imitation but it is something more common, leveraging without compensation and how you feel about it depends on just how much reward you think pioneers should receive.

It is usually politicians and not economists (other than politico-economists like Krugman) who think they have a handle on–and an obligation to do something about–things like “how much reward . . . pioneers should receive.”  I would have thought the obvious answer to the question would be either “the optimal amount, but good luck knowing what that is or expecting to find it in the real world,” or else, for the Second Best, “whatever the market gives them.”  The implication that there is some moral standard appreciable by human mortals, or even human economists, is a recipe for disaster.

One of my favorite stories in the ongoing saga over the regulation (and thus the future) of Internet search emerged earlier this week with claims by Google that Microsoft has been copying its answers–using Google search results to bolster the relevance of its own results for certain search terms.  The full story from Internet search journalist extraordinaire, Danny Sullivan, is here, with a follow-up discussing Microsoft’s response here.  The New York Times is also on the case with some interesting comments from a former Googler that feed nicely into the Schumpeterian competition angle (discussed below).  And Microsoft consultant (“though on matters unrelated to issues discussed here”) and Harvard Business prof Ben Edelman coincidentally echoes precisely Microsoft’s response in a blog post here.

What I find so great about this story is how it seems to resolve one of the most significant strands of the ongoing debate–although it does so, from Microsoft’s point of view, unintentionally, to be sure.

Here’s what I mean.  Back when Microsoft first started being publicly identified as a significant instigator of regulatory and antitrust attention paid to Google, the company, via its chief competition counsel, Dave Heiner, defended its stance in large part on the following ground:

All of this is quite important because search is so central to how people navigate the Internet, and because advertising is the main monetization mechanism for a wide range of Web sites and Web services. Both search and online advertising are increasingly controlled by a single firm, Google. That can be a problem because Google’s business is helped along by significant network effects (just like the PC operating system business). Search engine algorithms “learn” by observing how users interact with search results. Google’s algorithms learn less common search terms better than others because many more people are conducting searches on these terms on Google.

These and other network effects make it hard for competing search engines to catch up. Microsoft’s well-received Bing search engine is addressing this challenge by offering innovations in areas that are less dependent on volume. But Bing needs to gain volume too, in order to increase the relevance of search results for less common search terms. That is why Microsoft and Yahoo! are combining their search volumes. And that is why we are concerned about Google business practices that tend to lock in publishers and advertisers and make it harder for Microsoft to gain search volume. (emphasis added).

Claims of "network effects," "increasing returns to scale," and the absence of "minimum viable scale" for competitors run rampant (and unsupported) in the various cases against Google.  The TradeComet complaint, for example, claims that

[t]he primary barrier to entry facing vertical search websites is the inability to draw enough search traffic to reach the critical mass necessary to become independently sustainable.

But now we discover (what we should have known all along) that "learning by doing" is not the only way to obtain the data necessary to generate relevant search results: "learning by copying" works as well.  And there's nothing wrong with it; in fact, the very process of Schumpeterian creative destruction assumes imitation.

As Armen Alchian notes in describing his evolutionary process of competition,

Neither perfect knowledge of the past nor complete awareness of the current state of the arts gives sufficient foresight to indicate profitable action . . . [and] the pervasive effects of uncertainty prevent the ascertainment of actions which are supposed to be optimal in achieving profits.  Now the consequence of this is that modes of behavior replace optimum equilibrium conditions as guiding rules of action. First, wherever successful enterprises are observed, the elements common to these observable successes will be associated with success and copied by others in their pursuit of profits or success. “Nothing succeeds like success.”

So on the one hand, I find the hand-wringing about Microsoft's "copying" of Google's results to be completely misplaced, just as the pejorative connotations of "embrace and extend," deployed against Microsoft itself when it was the target of this sort of scrutiny, were bogus.  But at the same time, I see this dynamic essentially decimating Microsoft's (and others') claims that Google has an unassailable position because no competitor can ever hope to match its size, and thus its access to the information essential to the quality of search results, particularly when it comes to so-called "long-tail" search terms.

Long-tail search terms are queries that are extremely rare and, thus, for which there is little user history (information about which results searchers found relevant and clicked on) to guide future search results.  As Ben Edelman writes in his blog post (linked above) on this issue (trotting out, even while implicitly undercutting, the “minimum viable scale” canard):

Of course the reality is that Google’s high market share means Google gets far more searches than any other search engine. And Google’s popularity gives it a real advantage: For an obscure search term that gets 100 searches per month at Google, Bing might get just five or 10. Also, for more popular terms, Google can slice its data into smaller groups — which results are most useful to people from Boston versus New York, which results are best during the day versus at night, and so forth. So Google is far better equipped to figure out what results users favor and to tailor its listings accordingly. Meanwhile, Microsoft needs additional data, such as Toolbar and Related Sites data, to attempt to improve its results in a similar way.

But of course the "additional data" that Microsoft has access to here is, to a large extent, the same data that Google has.  Danny Sullivan's follow-up story (also linked above) suggests that Bing doesn't do all it could to make use of Google's data: Bing does not, it seems, copy Google's search results wholesale, nor does it use user behavior as extensively as it could (by, for example, observing searches on Google and then logging the next page visited, which would give Bing a pretty good idea which sites in Google's results users found most relevant).  But that doesn't change the fundamental fact that Microsoft and other search engines can overcome a significant amount of the so-called barrier to entry afforded by Google's impressive scale simply by imitating much of what Google does (and, one hopes, also innovating enough to offer something better).

Perhaps Google is "better equipped to figure out what users favor."  But it seems to me that only a trivial amount of this advantage is plausibly attributable to Google's scale rather than to its engineering and innovation.  The fact that Microsoft can (because of its own impressive scale in various markets) and does take advantage of accessible data to benefit indirectly from Google's own prowess in search is a testament to the irrelevance of these unfortunately pervasive scale and network effect arguments.

We have just uploaded to SSRN a draft of our article assessing the economics and the law of the antitrust case directed at the core of Google's business: its search and search advertising platform.  The article is Google and the Limits of Antitrust: The Case Against the Antitrust Case Against Google.  This is really the first systematic attempt to address both the amorphous and the concrete (as in the TradeComet complaint) claims about Google's business and its legal and economic importance in its primary market.  It's giving nothing away to say we're skeptical of the claims and, moreover, that an approach to the issues appropriately sensitive to the potential error costs would be extremely deferential.  As we discuss, the economics of search and search advertising are indeterminate and subtle, and the risk of error is high (claims of network effects, for example, are greatly exaggerated, and the pro-competitive justifications for Google's use of a quality score are legion, despite frequent claims to the contrary).  We welcome comments on the article, and we look forward to the debate.  The abstract is here:

The antitrust landscape has changed dramatically in the last decade.  Within the last two years alone, the United States Department of Justice has held hearings on the appropriate scope of Section 2, issued a comprehensive Report, and then repudiated it; and the European Commission has risen as an aggressive leader in single-firm conduct enforcement by bringing abuse of dominance actions and assessing heavy fines against firms including Qualcomm, Intel, and Microsoft.  In the United States, two of the most significant characteristics of the "new" antitrust approach have been a more intense focus on innovative companies in high-tech industries and a weakening of longstanding concerns that erroneous antitrust interventions will hinder economic growth.  But this focus is dangerous, and these concerns should not be dismissed so lightly.  In this article we offer a comprehensive cautionary tale in the context of a detailed factual, legal and economic analysis of the next Microsoft: the theoretical, but perhaps imminent, enforcement action against Google.  Close scrutiny of the complex economics of Google's technology, market and business practices reveals a range of real but subtle, pro-competitive explanations for features that have been held out instead as anticompetitive.  Application of the relevant case law then reveals a set of concerns where economic complexity and ambiguity, coupled with an insufficiently deferential approach to innovative technology and pricing practices in the most relevant precedent (the D.C. Circuit's decision in Microsoft), portend a potentially erroneous—and costly—result.
Our analysis, by contrast, embraces the cautious and evidence-based approach to uncertainty, complexity and dynamic innovation contained within the well-established “error cost framework.”  As we demonstrate, while there is an abundance of error-cost concern in the Supreme Court precedent, there is a real risk that the current, aggressive approach to antitrust error, coupled with the uncertain economics of Google’s innovative conduct, will nevertheless yield a costly intervention.  The point is not that we know that Google’s conduct is procompetitive, but rather that the very uncertainty surrounding it counsels caution, not aggression.