Archives For Internet search

In my article published today in The Daily Signal, I delve into the difficulties of curbing Internet-related copyright infringement.  The key points are summarized below.

U.S. industries that rely on copyright protection (such as motion pictures, music, television, visual arts, and software) are threatened by the unauthorized Internet downloading of copyrighted writings, designs, artwork, music and films. U.S. policymakers must decide how best to protect the creators of copyrighted works without harming growth and innovation in Internet services or vital protections for free speech.

The Internet allows consumers to alter and immediately transmit perfect digital copies of copyrighted works around the world and has generated services designed to provide these tools. Those tools include, for example, peer-to-peer file-sharing services and mobile apps designed to foster infringement. Many websites that provide pirated content—including, for example, online video-streaming sites—are located outside the United States. Such piracy costs the U.S. economy billions of dollars in losses per year—including reduced income for creators and other participants in copyright-intensive industries.

Curtailing online infringement will require a combination of litigation, technology, enhanced private-sector initiatives, public education, and continuing development of readily accessible and legally available content offerings. As the Internet continues to develop, the best approach to protecting copyright in the online environment is to rely on existing legal tools, enhanced cooperation among Internet stakeholders, and business innovations that lessen incentives to infringe.

The CPI Antitrust Chronicle published Geoffrey Manne’s and my recent paper, The Problems and Perils of Bootstrapping Privacy and Data into an Antitrust Framework, as part of a symposium on Big Data in the May 2015 issue. All of the papers are worth reading and pondering, but of course ours is the best ;).

In it, we analyze two of the most prominent theories of antitrust harm arising from data collection: privacy as a factor of non-price competition, and price discrimination facilitated by data collection. We also analyze whether data is serving as a barrier to entry and effectively preventing competition. We argue that, in the current marketplace, there are no plausible harms to competition arising from either non-price effects or price discrimination due to data collection online and that there is no data barrier to entry preventing effective competition.

The issues of how to regulate privacy and what role competition authorities should play in that regulation are only likely to increase in importance as the Internet marketplace continues to grow and evolve. Scholars and advocates have called on the European Commission and the FTC to give greater consideration to privacy concerns during merger review, and have even encouraged them to bring monopolization claims based upon data dominance. These calls should be rejected unless the underlying theories can satisfy the rigorous economic review of antitrust law. In our humble opinion, they cannot do so at this time.

Excerpts:

PRIVACY AS AN ELEMENT OF NON-PRICE COMPETITION

The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application.

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist.
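The disentangling problem can be made concrete with a stylized sketch (the notation and numbers here are ours, purely for illustration, and are not drawn from the paper):

```latex
% Illustrative only: quality-adjusted price.
% Let $p$ be the nominal price and $q$ an index of product quality.
\tilde{p} = \frac{p}{q}
% If post-merger the price rises by 10\% but quality also rises by 10\%:
\tilde{p}' = \frac{1.1\,p}{1.1\,q} = \tilde{p}
% The quality-adjusted price is unchanged. But because $q$ is a subjective,
% hard-to-measure index, any mismeasurement of $q$ shows up as a spurious
% "price effect" -- which is why isolating a pure quality degradation
% from simultaneous price movements is so imprecise in practice.
```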

Second, invariably product quality can be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.

PRICE DISCRIMINATION AS A PRIVACY HARM

If non-price effects cannot be relied upon to establish competitive injury (as explained above), then what can be the basis for incorporating privacy concerns into antitrust? One argument is that major data collectors (e.g., Google and Facebook) facilitate price discrimination.

The argument can be summed up as follows: Price discrimination could be a harm to consumers that antitrust law takes into consideration. Because companies like Google and Facebook are able to collect a great deal of data about their users for analysis, businesses could segment groups based on certain characteristics and offer them different deals. The resulting price discrimination could lead to many consumers paying more than they would in the absence of the data collection. Therefore, the data collection by these major online companies facilitates price discrimination that harms consumer welfare.

This argument misses a large part of the story, however. The flip side is that price discrimination could have benefits to those who receive lower prices from the scheme than they would have in the absence of the data collection, a possibility explored by the recent White House Report on Big Data and Differential Pricing.

While privacy advocates have focused on the possible negative effects of price discrimination on one subset of consumers, they generally ignore the positive effects of businesses being able to expand output by serving previously underserved consumers. It is inconsistent with basic economic logic to suggest that a business relying on such metrics would use them only to charge higher prices to those least able to pay; a profit-maximizing firm has every incentive to offer lower prices where doing so profitably expands output. If anything, price discrimination would likely promote more egalitarian outcomes by allowing companies to offer lower prices to poorer segments of the population—segments that can be identified by data collection and analysis.

If the group favored by “personalized pricing” is as big as—or bigger than—the group that pays higher prices, then it is difficult to say that the practice reduces consumer welfare, even setting aside its effects on total welfare. Again, the question becomes one of magnitudes, which privacy advocates have yet to consider in detail.
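The magnitudes point can be sketched with a back-of-the-envelope comparison (again, the notation and figures are ours and purely illustrative, not from any empirical study):

```latex
% Illustrative only: net consumer-welfare effect of personalized pricing.
% $n_L$ consumers receive a discount of $s$ each;
% $n_H$ consumers pay a premium of $h$ each.
\Delta CW = n_L \cdot s - n_H \cdot h
% Personalized pricing raises measured consumer welfare whenever
% $n_L \cdot s > n_H \cdot h$. For example, 10 million consumers saving
% \$5 each (\$50M) outweighs 4 million consumers paying \$10 more each
% (\$40M) -- the advocates' case requires showing the inequality runs
% the other way, which is an empirical question, not an assumption.
```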

DATA BARRIER TO ENTRY

Both of these theories of harm are predicated on the inability or difficulty of competitors to develop alternative products in the marketplace—the so-called “data barrier to entry.” The argument is that upstarts do not have sufficient data to compete with established players like Google and Facebook, which in turn employ their data both to attract online advertisers and to foreclose their competitors from this crucial source of revenue. There are at least four reasons to be dubious of such arguments:

  1. Data is useful to all industries, not just online companies;
  2. It’s not the amount of data, but how you use it;
  3. Competition online is one click or swipe away; and
  4. Access to data is not exclusive.

CONCLUSION

Privacy advocates have thus far failed to make their case. Even in their most plausible forms, the arguments for incorporating privacy and data concerns into antitrust analysis do not survive legal and economic scrutiny. In the absence of strong arguments suggesting likely anticompetitive effects, and in the face of enormous analytical problems (and thus a high risk of error cost), privacy should remain a matter of consumer protection, not of antitrust.

The precise details underlying the European Commission’s (EC) April 15 Statement of Objections (SO), the EC’s equivalent of an antitrust complaint, against Google, centered on the company’s promotion of its comparison shopping service (CSS), “Google Shopping,” have not yet been made public.  Nevertheless, the EC’s fact sheet describing the theory of the case is most discouraging to anyone who believes in economically sound, consumer welfare-oriented antitrust enforcement.   Put simply, the SO alleges that Google is “abusing its dominant position” in online search services throughout Europe by systematically positioning and prominently displaying its CSS in its general search result pages, “irrespective of its merits,” causing the Google CSS to achieve higher rates of growth than CSSs promoted by rivals.  According to the EC, this behavior “has a negative impact on consumers and innovation”.  Why so?  Because this “means that users do not necessarily see the most relevant shopping results in response to their queries, and that incentives to innovate from rivals are lowered as they know that however good their product, they will not benefit from the same prominence as Google’s product.”  (Emphasis added.)  The EC’s proposed solution?  “Google should treat its own comparison shopping services and those of rivals in the same way.”

The EC’s latest action may represent only “the tip of a Google EC antitrust iceberg,” since the EC has stated that it is continuing to investigate other aspects of Google’s behavior, including Google agreements with respect to the Android operating system, plus “the favourable treatment by Google in its general search results of other specialised search services, and concerns with regard to copying of rivals’ web content (known as ‘scraping’), advertising exclusivity and undue restrictions on advertisers.”  For today, I focus on the tip, leaving consideration of the bulk of the iceberg to future commentaries, as warranted.  (Truth on the Market has addressed Google-related antitrust issues previously — see, for example, here, here, and here.)

The EC’s April 15 Google SO is troublesome in multiple ways.

First, the claim that Google does not “necessarily” array the most relevant search results in a manner desired by consumers appears to be in tension with the findings of an exhaustive U.S. antitrust investigation of the company.  As U.S. Federal Trade Commissioner Josh Wright pointed out in a recent speech, the FTC’s 2013 “closing statement [in its Google investigation] indicates that Google’s so-called search bias did not, in fact, harm consumers; to the contrary, the evidence suggested that ‘Google likely benefited consumers by prominently displaying its vertical content on its search results page.’  The Commission reached this conclusion based upon, among other things, analyses of actual consumer behavior – so-called ‘click through’ data – which showed how consumers reacted to Google’s promotion of its vertical properties.”

Second, even assuming that Google’s search engine practices have weakened competing CSSs, that would not justify EC enforcement action against Google.  As Commissioner Wright also explained, the FTC “accepted arguments made by competing websites that Google’s practices injured them and strengthened Google’s market position, but correctly found that these were not relevant considerations in a proper antitrust analysis focused upon consumer welfare rather than harm to competitors.”  The EC should keep this in mind, given that, as former EC Competition Commissioner Joaquin Almunia emphasized, “[c]onsumer welfare is not just a catchy phrase.  It is the cornerstone, the guiding principle of EU competition policy.”

Third, and perhaps most fundamentally, although the EC disclaims an interest in “interfer[ing] with” Google’s search engine algorithm, dictating an “equal treatment of competitors” result implicitly would require intrusive micromanagement of Google’s search engine – a search engine which is at the heart of the company’s success and has bestowed enormous welfare benefits on consumers and producers alike.  There is no reason to believe that EC policing of Google’s CSS listings to promote an “equal treatment of competitors” mandate would result in a search experience that better serves consumers than the current Google policy.  Consistent with this point, in its 2013 Google closing statement, the FTC observed that it lacked the ability to “second-guess” product improvements that plausibly benefit consumers, and it stressed that “condemning legitimate product improvements risks harming consumers.”

Fourth, competing CSSs have every incentive to inform consumers if they believe that Google search results are somehow “inferior” to their offerings.  They are free to advertise and publicize the merits of their services, and third-party intermediaries that rate browsers may be expected to report if Google Shopping consistently offers suboptimal consumer services.  In short, “the word will get out.”  Even in the absence of perfect information, consumers can readily, and at low cost, browse alternative CSSs to determine whether they prefer those services to Google’s – “help is only a click away.”

Fifth, the most likely outcome of an EC “victory” in this case would be a reduced incentive for Google to invest in improving its search engine, knowing that its ability to monetize search engine improvements could be compromised by future EC decisions to prevent an improved search engine from harming rivals.  What’s worse, other developers of service platforms and other innovative business improvements would similarly “get the message” that it would not be worth their while to innovate to the point of dominance, because their returns to such innovation would be constrained.  In sum, companies in a wide variety of sectors would have less of an incentive to innovate, and this in turn would lead to reduced welfare gains and benefits to consumers.  This would yield (as the EC’s fact sheet put it) “a negative impact on consumers and innovation”, because companies across industries operating in Europe would know that if their product were too good, they would attract the EC’s attention and be put in their place.  In other words, a successful EC intervention here could spawn the very welfare losses (magnified across sectors) that the Commission cited as justification for reining in Google in the first place!

Finally, it should come as no surprise that a coalition of purveyors of competing search engines and online shopping sites lobbied hard for EC antitrust action against Google.  When government intervenes heavily and often in markets to “correct” perceived “abuses,” private actors have a strong incentive to expend resources on achieving government actions that disadvantage their rivals – resources that could otherwise have been used to compete more vigorously and effectively.  In short, the very existence of expansive regulatory schemes disincentivizes competition on the merits, and in that regard tends to undermine welfare.  Government officials should keep that firmly in mind when private actors urge them to act decisively to “cure” marketplace imperfections by limiting a rival’s freedom of action.

Let us hope that the EC takes these concerns to heart before taking further action against Google.

Recent years have seen an increasing interest in incorporating privacy into antitrust analysis. The FTC and regulators in Europe have rejected these calls so far, but certain scholars and activists continue their attempts to breathe life into this novel concept. Elsewhere we have written at length on the scholarship addressing the issue and found the case for incorporation wanting. Among the errors proponents make is a persistent (and woefully unsubstantiated) assertion that online data can amount to a barrier to entry, insulating incumbent services from competition and ensuring that only the largest providers thrive. This data barrier to entry, it is alleged, can then allow firms with monopoly power to harm consumers, either directly through “bad acts” like price discrimination, or indirectly by raising the costs of advertising, which then get passed on to consumers.

A case in point was on display at last week’s George Mason Law & Economics Center Briefing on Big Data, Privacy, and Antitrust. Building on their growing body of advocacy work, Nathan Newman and Allen Grunes argued that this hypothesized data barrier to entry actually exists, and that it prevents effective competition from search engines and social networks that are interested in offering services with heightened privacy protections.

According to Newman and Grunes, network effects and economies of scale ensure that dominant companies in search and social networking (they specifically named Google and Facebook — implying that they are in separate markets) operate without effective competition. This results in antitrust harm, they assert, because it precludes competition on the non-price factor of privacy protection.

In other words, according to Newman and Grunes, even though Google and Facebook offer their services for a price of $0 and constantly innovate and upgrade their products, consumers are nevertheless harmed because the business models of less-privacy-invasive alternatives are foreclosed by insufficient access to data (an almost self-contradicting and silly narrative for many reasons, including the big question of whether consumers prefer greater privacy protection to free stuff). Without access to, and use of, copious amounts of data, Newman and Grunes argue, the algorithms underlying search and targeted advertising are necessarily less effective and thus the search product without such access is less useful to consumers. And even more importantly to Newman, the value to advertisers of the resulting consumer profiles is diminished.

Newman has put forth a number of other possible antitrust harms that purportedly result from this alleged data barrier to entry, as well. Among these is the increased cost of advertising to those who wish to reach consumers. Presumably this would harm end users who have to pay more for goods and services because the costs of advertising are passed on to them. On top of that, Newman argues that ad networks inherently facilitate price discrimination, an outcome that he asserts amounts to antitrust harm.

FTC Commissioner Maureen Ohlhausen (who also spoke at the George Mason event) recently made the case that antitrust law is not well-suited to handling privacy problems. She argues — convincingly — that competition policy and consumer protection should be kept separate to preserve doctrinal stability. Antitrust law deals with harms to competition through the lens of economic analysis. Consumer protection law is tailored to deal with broader societal harms and aims at protecting the “sanctity” of consumer transactions. Antitrust law can, in theory, deal with privacy as a non-price factor of competition, but this is an uneasy fit because of the difficulties of balancing quality over two dimensions: Privacy may be something some consumers want, but others would prefer a better algorithm for search and social networks, and targeted ads with free content, for instance.

In fact, there is general agreement with Commissioner Ohlhausen on her basic points, even among critics like Newman and Grunes. But, as mentioned above, views diverge over whether there are some privacy harms that should nevertheless factor into competition analysis, and over whether there is in fact a data barrier to entry that makes these harms possible.

As we explain below, however, the notion of data as an antitrust-relevant barrier to entry is simply a myth. And, because all of the theories of “privacy as an antitrust harm” are essentially predicated on this, they are meritless.

First, data is useful to all industries — this is not some new phenomenon particular to online companies

It bears repeating (because critics seem to forget it in their rush to embrace “online exceptionalism”) that offline retailers also receive substantial benefit from, and greatly benefit consumers by, knowing more about what consumers want and when they want it. Through devices like coupons and loyalty cards (to say nothing of targeted mailing lists and the age-old practice of data mining check-out receipts), brick-and-mortar retailers can track purchase data and better serve consumers. Not only do consumers receive better deals for using them, but retailers know what products to stock and advertise and when and on what products to run sales. For instance:

  • Macy’s analyzes tens of millions of terabytes of data every day to gain insights from social media and store transactions. Over the past three years, the use of big data analytics alone has helped Macy’s boost its revenue growth by 4 percent annually.
  • Following its acquisition of Kosmix in 2011, Walmart established @WalmartLabs, which created its own product search engine for online shoppers. In the first year of its use alone, the number of customers buying a product on Walmart.com after researching a purchase increased by 20 percent. According to Ron Bensen, the vice president of engineering at @WalmartLabs, the combination of in-store and online data could give brick-and-mortar retailers like Walmart an advantage over strictly online stores.
  • Panera and a whole host of restaurants, grocery stores, drug stores and retailers use loyalty cards to advertise and learn about consumer preferences.

And of course there is a host of other uses for data as well, including security, fraud prevention, product optimization, risk reduction for the insured, knowing what content is most interesting to readers, etc. The importance of data stretches far beyond the online world, and far beyond mere retail uses more generally. To describe even online giants like Amazon, Apple, Microsoft, Facebook, and Google as having a monopoly on data is silly.

Second, it’s not the amount of data that leads to success but building a better mousetrap

The value of knowing someone’s birthday, for example, is not in that tidbit itself, but in the fact that you know this is a good day to give that person a present. Most of the data that supports the advertising networks underlying the Internet ecosphere is of this sort: Information is important to companies because of the value that can be drawn from it, not for the inherent value of the data itself. Companies don’t collect information about you to stalk you, but to better provide goods and services to you.

Moreover, data itself is not only less important than what can be drawn from it, but data is also less important than the underlying product it informs. For instance, Snapchat created a challenger to Facebook so successfully (and in such short time) that Facebook attempted to buy it for $3 billion (Google offered $4 billion). But Facebook’s interest in Snapchat wasn’t about its data. Instead, Snapchat was valuable — and a competitive challenge to Facebook — because it cleverly incorporated the (apparently novel) insight that many people wanted to share information in a more private way.

Relatedly, Twitter, Instagram, LinkedIn, Yelp, Pinterest (and Facebook itself) all started with little (or no) data, and all have had a lot of success. Meanwhile, despite its supposed data advantages, Google’s attempt at social networking — Google+ — has never caught up to Facebook in popularity with users (and thus with advertisers, either). And scrappy social network Ello is starting to build a significant base without any data collection for advertising at all.

At the same time, it’s simply not the case that the alleged data giants — the ones supposedly insulating themselves behind data barriers to entry — actually have the type of data most relevant to startups anyway. As Andres Lerner has argued, if you wanted to start a travel business, the data from Kayak or Priceline would be far more relevant. Or if you wanted to start a ride-sharing business, data from cab companies would be more useful than the broad, market-cross-cutting profiles Google and Facebook have. Consider companies like Uber, Lyft and Sidecar that had no customer data when they began to challenge established cab companies that did possess such data. If data were really so significant, they could never have competed successfully. But Uber, Lyft and Sidecar have been able to compete effectively because they built products that users wanted to use — they came up with an idea for a better mousetrap. The data they have accrued came after they innovated, entered the market and mounted their successful challenges — not before.

In reality, those who complain about data facilitating unassailable competitive advantages have it exactly backwards. Companies need to innovate to attract consumer data, otherwise consumers will switch to competitors (including both new entrants and established incumbents). As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results: The continued explosion of new products, services and other apps is evidence that data is not a bottleneck to competition but a spur to drive it.

Third, competition online is one click or thumb swipe away; that is, barriers to entry and switching costs are low

Somehow, in the face of alleged data barriers to entry, competition online continues to soar, with newcomers constantly emerging and triumphing. This suggests that the barriers to entry are not so high as to prevent robust competition.

Again, despite the supposed data-based monopolies of Facebook, Google, Amazon, Apple and others, there exist powerful competitors in the marketplaces they compete in:

  • If consumers want to make a purchase, they are more likely to do their research on Amazon than Google.
  • Google flight search has failed to seriously challenge — let alone displace — its competitors, as critics feared. Kayak, Expedia and the like remain the most prominent travel search sites — despite Google having literally purchased ITA’s trove of flight data and data-processing acumen.
  • People looking for local reviews go to Yelp and TripAdvisor (and, increasingly, Facebook) as often as Google.
  • Pinterest, one of the most highly valued startups today, is now a serious challenger to traditional search engines when people want to discover new products.
  • With its recent acquisition of the shopping search engine, TheFind, and test-run of a “buy” button, Facebook is also gearing up to become a major competitor in the realm of e-commerce, challenging Amazon.
  • Likewise, Amazon recently launched its own ad network, “Amazon Sponsored Links,” to challenge other advertising players.

Even assuming for the sake of argument that data creates a barrier to entry, there is little evidence that consumers cannot easily switch to a competitor. While there are sometimes network effects online, like with social networking, history still shows that people will switch. MySpace was considered a dominant network until it made a series of bad business decisions and everyone ended up on Facebook instead. Similarly, Internet users can and do use Bing, DuckDuckGo, Yahoo, and a plethora of more specialized search engines on top of and instead of Google. And don’t forget that Google itself was once an upstart new entrant that replaced once-household names like Yahoo and AltaVista.

Fourth, access to data is not exclusive

Critics like Newman have compared Google to Standard Oil and argued that government authorities need to step in to limit Google’s control over data. But the analogy between data and oil is inapt. If Exxon drills and extracts oil from the ground, that oil is no longer available to BP. Data is not finite in the same way: it is non-rivalrous. To use an earlier example, Google knowing my birthday doesn’t limit the ability of Facebook to know my birthday as well. While databases may be proprietary, the underlying data is not. And what matters more than the data itself is how well it is analyzed.

This is especially important when discussing data online, where multi-homing is ubiquitous, meaning many competitors end up voluntarily sharing access to data. For instance, I can use the friend-finder feature on WordPress to find Facebook friends, Google connections, and people I’m following on Twitter who also use the site for blogging. Using this feature allows WordPress to access your contact list on these major online players.

[Image: the WordPress Friend-Finder feature]

Further, it is not apparent that Google’s competitors have less data available to them. Microsoft, for instance, has admitted that it may actually have more data. And, importantly for this discussion, Microsoft may have actually garnered some of its data for Bing from Google.

If Google has a high cost per click, then perhaps it’s because it is worth it to advertisers: There are more eyes on Google because of its superior search product. Contra Newman and Grunes, Google may just be more popular for consumers and advertisers alike because the algorithm makes it more useful, not because it has more data than everyone else.

Fifth, the data barrier to entry argument does not have workable antitrust remedies

The misguided logic of data barrier to entry arguments leaves a lot of questions unanswered. Perhaps most important among these is the question of remedies. What remedy would apply to a company found guilty of leveraging its market power with data?

It’s actually quite difficult to conceive of a practical means for a competition authority to craft remedies that would address the stated concerns without imposing enormous social costs. In the unilateral conduct context, the most obvious remedy would involve the forced sharing of data.

On the one hand, as we’ve noted, it’s not clear this would actually accomplish much. If competitors can’t actually make good use of data, simply having more of it isn’t going to change things. At the same time, such a result would reduce the incentive to build data networks to begin with. In their startup stage, companies like Uber and Facebook required several months and hundreds of thousands, if not millions, of dollars to design and develop just the first iteration of the products consumers love. Would any of them have done it if they had to share their insights? In fact, it may well be that access to these free insights is what competitors actually want; it’s not the data they’re lacking, but the vision or engineering acumen to use it.

Other remedies limiting the collection and use of data are not only outside the normal scope of antitrust remedies; they would also involve extremely costly court supervision and may entail problematic “collisions between new technologies and privacy rights,” as last year’s White House Report on Big Data and Privacy put it.

It is equally unclear what an antitrust enforcer could do in the merger context. As Commissioner Ohlhausen has argued, blocking specific transactions does not necessarily stop data transfer or promote privacy interests. Parties could simply house data in a standalone entity and enter into licensing arrangements. And conditioning transactions with forced data sharing requirements would lead to the same problems described above.

If antitrust doesn’t provide a remedy, then it is not clear why it should apply at all. The absence of workable remedies is in fact a strong indication that data and privacy issues are not suitable for antitrust. Instead, such concerns would be better dealt with under consumer protection law or by targeted legislation.

The Wall Street Journal reported yesterday that the FTC Bureau of Competition staff report to the commissioners in the Google antitrust investigation recommended that the Commission approve an antitrust suit against the company.

While this is excellent fodder for a few hours of Twitter hysteria, it takes more than 140 characters to delve into the nuances of a 20-month federal investigation. And the bottom line is, frankly, pretty ho-hum.

As I said recently,

One of life’s unfortunate certainties, as predictable as death and taxes, is this: regulators regulate.

The Bureau of Competition staff is made up of professional lawyers — many of them litigators, whose existence is predicated on there being actual, you know, litigation. If you believe in human fallibility at all, you have to expect that, when they err, FTC staff errs on the side of too much, rather than too little, enforcement.

So is it shocking that the FTC staff might recommend that the Commission undertake what would undoubtedly have been one of the agency’s most significant antitrust cases? Hardly.

Nor is it surprising that the commissioners might not always agree with staff. In fact, staff recommendations are ignored all the time, for better or worse. Here are just a few examples: the R.J. Reynolds/Brown & Williamson merger, POM Wonderful, the Home Shopping Network/QVC merger, cigarette advertising. No doubt there are many, many more.

Regardless, it also bears pointing out that the staff did not recommend the FTC bring suit on the central issue of search bias “because of the strong procompetitive justifications Google has set forth”:

Complainants allege that Google’s conduct is anticompetitive because it forecloses alternative search platforms that might operate to constrain Google’s dominance in search and search advertising. Although it is a close call, we do not recommend that the Commission issue a complaint against Google for this conduct.

But this caveat is enormous. To report this as the FTC staff recommending a case is seriously misleading. Here they are forbearing from bringing 99% of the case against Google, and recommending suit on the marginal 1% issues. It would be more accurate to say, “FTC staff recommends no case against Google, except on a couple of minor issues which will be immediately settled.”

And in fact it was on just these minor issues that Google agreed to voluntary commitments to curtail some conduct when the FTC announced it was not bringing suit against the company.

The Wall Street Journal quotes some other language from the staff report bolstering the conclusions that this is a complex market and that the conduct at issue was ambiguous (at worst), and supporting the central recommendation not to sue:

We are faced with a set of facts that can most plausibly be accounted for by a narrative of mixed motives: one in which Google’s course of conduct was premised on its desire to innovate and to produce a high quality search product in the face of competition, blended with the desire to direct users to its own vertical offerings (instead of those of rivals) so as to increase its own revenues. Indeed, the evidence paints a complex portrait of a company working toward an overall goal of maintaining its market share by providing the best user experience, while simultaneously engaging in tactics that resulted in harm to many vertical competitors, and likely helped to entrench Google’s monopoly power over search and search advertising.

On a global level, the record will permit Google to show substantial innovation, intense competition from Microsoft and others, and speculative long-run harm.

This is exactly when you want antitrust enforcers to forbear. Predicting anticompetitive effects is difficult, and conduct that could be problematic may simultaneously be vigorous competition.

That the staff concluded that some of what Google was doing “harmed competitors” isn’t surprising — there were lots of competitors parading through the FTC on a daily basis claiming Google harmed them. But antitrust is about protecting consumers, not competitors. Far more important is the staff finding of “substantial innovation, intense competition from Microsoft and others, and speculative long-run harm.”

Indeed, the combination of “substantial innovation,” “intense competition from Microsoft and others,” and “Google’s strong procompetitive justifications” suggests a well-functioning market. It similarly suggests an antitrust case that the FTC would likely have lost. The FTC’s litigators should probably be grateful that the commissioners had the good sense to vote to close the investigation.

Meanwhile, the Wall Street Journal also reports that the FTC’s Bureau of Economics simultaneously recommended that the Commission not bring suit at all against Google. It is not uncommon for the lawyers and the economists at the Commission to disagree. And as a general (though not inviolable) rule, we should be happy when the Commissioners side with the economists.

While the press, professional Google critics, and the company’s competitors may want to make this sound like a big deal, the actual facts of the case and a pretty simple error-cost analysis suggest that not bringing a case was the correct course.

The suit against Google was to be this century’s first major antitrust case and a model for high technology industries in the future. Now that we have passed the investigative hangover, the mood has turned reflective, and antitrust experts are now looking to place this case into its proper context. If it were brought, would the case have been on sure legal footing? Was this a prudent move for consumers? Was the FTC’s disposition of the case appropriate?

Join me this Friday, January 11, 2013 at 12:00 pm – 1:45 pm ET for an ABA Antitrust Section webinar to explore these questions, among others. I will be sharing the panel with an impressive group:

Hill B. Welford will moderate. Registration is open to everyone here and the outlay is zero. Remember — these events are not technically free because you have to give up some of your time, but I would be delighted if you did.

The Federal Trade Commission yesterday closed its investigation of Google’s search business (see my comment here) without taking action. The FTC did, however, enter into a settlement with Google over the licensing of Motorola Mobility’s standards-essential patents (SEPs). The FTC intends that agreement to impose some limits on an area of great complexity and vigorous debate among industry, patent experts and global standards bodies: The allowable process for enforcing FRAND (fair, reasonable and non-discriminatory) licensing of SEPs, particularly the use of injunctions by patent holders to do so. According to Chairman Leibowitz, “[t]oday’s landmark enforcement action will set a template for resolution of SEP licensing disputes across many industries.” That effort may or may not be successful. It also may be misguided.

In general, a FRAND commitment incentivizes innovation by allowing a SEP owner to recoup its investments and the value of its technology through licensing, while, at the same time, promoting competition and avoiding patent holdup by ensuring that licensing agreements are reasonable. When the process works, and patent holders negotiate licensing rights in good faith, patents are licensed, industries advance and consumers benefit.

FRAND terms are inherently indeterminate and flexible—indeed, they often apply precisely in situations where licensors and licensees need flexibility because each licensing circumstance is nuanced and a one-size-fits-all approach isn’t workable. Superimposing process restraints from above isn’t necessarily the best thing in dealing with what amounts to a contract dispute. But few can doubt the benefits of greater clarity in this process; the question is whether the FTC’s particular approach to the problem sacrifices too much in exchange for such clarity.

The crux of the issue in the Google consent decree—and the most controversial aspect of SEP licensing negotiations—is the role of injunctions. The consent decree requires that, before Google sues to enjoin a manufacturer from using its SEPs without a license, the company must follow a prescribed path in licensing negotiations. In particular:

Under this Order, before seeking an injunction on FRAND-encumbered SEPs, Google must: (1) provide a potential licensee with a written offer containing all of the material license terms necessary to license its SEPs, and (2) provide a potential licensee with an offer of binding arbitration to determine the terms of a license that are not agreed upon. Furthermore, if a potential licensee seeks judicial relief for a FRAND determination, Google must not seek an injunction during the pendency of the proceeding, including appeals.

There are a few exceptions, summarized by Commissioner Ohlhausen:

These limitations include when the potential licensee (a) is outside the jurisdiction of the United States; (b) has stated in writing or sworn testimony that it will not license the SEP on any terms [in other words, is not a “willing licensee”]; (c) refuses to enter a license agreement on terms set in a final ruling of a court – which includes any appeals – or binding arbitration; or (d) fails to provide written confirmation to a SEP owner after receipt of a terms letter in the form specified by the Commission. They also include certain instances when a potential licensee has brought its own action seeking injunctive relief on its FRAND-encumbered SEPs.

To the extent that the settlement reinforces what Google (and other licensors) would do anyway, and even to the extent that it imposes nothing more than an obligation to inject a neutral third party into FRAND negotiations to assist the parties in resolving rate disputes, there is little to complain about. Indeed, this is the core of the agreement, and, importantly, it seems to preserve Google’s right to seek injunctions to enforce its patents, subject to the agreement’s process requirements.

Industry participants and standard-setting organizations have supported injunctions, and the seeking and obtaining of injunctions against infringers is not in conflict with SEP patentees’ obligations. Even the FTC, in its public comments, has stated that patent owners should be able to obtain injunctions on SEPs when an infringer has rejected a reasonable license offer. Thus, the long-anticipated announcement by the FTC in the Google case may help to provide some clarity to the future negotiation of SEP licenses, the possible use of binding arbitration, and the conditions under which seeking injunctive relief will be permissible (as an antitrust matter).

Nevertheless, U.S. regulators, including the FTC, have sometimes opined that seeking injunctions on products that infringe SEPs is not in the spirit of FRAND. Everyone seems to agree that more certainty is preferable; the real issue is whether and when injunctions further that aim or not (and whether and when they are anticompetitive).

In October, Renata Hesse, then Acting Assistant Attorney General for the Department of Justice’s Antitrust Division, remarked during a patent roundtable that

[I]t would seem appropriate to limit a patent holder’s right to seek an injunction to situations where the standards implementer is unwilling to have a neutral third-party determine the appropriate F/RAND terms or is unwilling to accept the F/RAND terms approved by such a third-party.

In its own 2011 Report on the “IP Marketplace,” the FTC acknowledged the fluidity and ambiguity surrounding the meaning of “reasonable” licensing terms and the problems of patent enforcement. While noting that injunctions may confer a costly “hold-up” power on licensors that wield them, the FTC nevertheless acknowledged the important role of injunctions in preserving the value of patents and in encouraging efficient private negotiation:

Three characteristics of injunctions that affect innovation support generally granting an injunction. The first and most fundamental is an injunction’s ability to preserve the exclusivity that provides the foundation of the patent system’s incentives to innovate. Second, the credible threat of an injunction deters infringement in the first place. This results from the serious consequences of an injunction for an infringer, including the loss of sunk investment. Third, a predictable injunction threat will promote licensing by the parties. Private contracting is generally preferable to a compulsory licensing regime because the parties will have better information about the appropriate terms of a license than would a court, and more flexibility in fashioning efficient agreements.

* * *

But denying an injunction every time an infringer’s switching costs exceed the economic value of the invention would dramatically undermine the ability of a patent to deter infringement and encourage innovation. For this reason, courts should grant injunctions in the majority of cases.…

Consistent with this view, the European Commission’s Deputy Director-General for Antitrust, Cecilio Madero Villarejo, recently expressed concern that some technology companies that complain of being denied a license on FRAND terms never truly intend to acquire licenses, but rather “want[] to create conditions for a competition case to be brought.”

But with the Google case, the Commission appears to back away from its seeming support for injunctions, claiming that:

Seeking and threatening injunctions against willing licensees of FRAND-encumbered SEPs undermines the integrity and efficiency of the standard-setting process and decreases the incentives to participate in the process and implement published standards. Such conduct reduces the value of standard setting, as firms will be less likely to rely on the standard-setting process.

Reconciling the FTC’s seemingly disparate views turns on the question of what a “willing licensee” is. And while the Google settlement itself may not magnify the problems surrounding the definition of that term, it doesn’t provide any additional clarity, either.

The problem is that, even in its 2011 Report, in which the FTC noted the importance of injunctions, it defines a willing licensee as one who would license at a hypothetical, ex ante rate absent the threat of an injunction and with a different risk profile than an after-the-fact infringer. In other words, the FTC’s definition of willing licensee assumes a willingness to license only at a rate determined when an injunction is not available, and under the unrealistic assumption that the true value of a SEP can be known ex ante. Not surprisingly, then, the Commission finds it easy to declare an injunction invalid when a patentee demands a (higher) royalty rate in an actual negotiation, with actual knowledge of a patent’s value and under threat of an injunction.

As Richard Epstein, Scott Kieff and Dan Spulber discuss in critiquing the FTC’s 2011 Report:

In short, there is no economic basis to equate a manufacturer that is willing to commit to license terms before the adoption and launch of a standard, with one that instead expropriates patent rights at a later time through infringement. The two bear different risks and the late infringer should not pay the same low royalty as a party that sat down at the bargaining table and may actually have contributed to the value of the patent through its early activities. There is no economically meaningful sense in which any royalty set higher than that which a “willing licensee would have paid” at the pre-standardization moment somehow “overcompensates patentees by awarding more than the economic value of the patent.”

* * *

Even with a RAND commitment, the patent owner retains the valuable right to exclude (not merely receive later compensation from) manufacturers who are unwilling to accept reasonable license terms. Indeed, the right to exclude influences how those terms should be calculated, because it is quite likely that prior licensees in at least some areas will pay less if larger numbers of parties are allowed to use the same technology. Those interactive effects are ignored in the FTC calculations.

With this circular logic, all efforts by patentees to negotiate royalty rates after infringement has occurred can be effectively rendered anticompetitive if the patentee uses an injunction or the threat of an injunction against the infringer to secure its reasonable royalty.

The idea behind FRAND is rather simple (reward inventors; protect competition), but the practice of SEP licensing is much more complicated. Circumstances differ from case to case, and, more importantly, so do the parties’ views on what may constitute an appropriate licensing rate under FRAND. As I have written elsewhere, a single company may have very different views on the meaning of FRAND depending on whether it is the licensor or licensee in a given negotiation—and depending on whether it has already implemented a standard or not. As one court looking at the very SEPs at issue in the Google case has pointed out:

[T]he court is mindful that at the time of an initial offer, it is difficult for the offeror to know what would in fact constitute RAND terms for the offeree. Thus, what may appear to be RAND terms from the offeror’s perspective may be rejected out of hand as non-RAND terms by the offeree. Indeed, it would appear that at any point in the negotiation process, the parties may have a genuine disagreement as to what terms and conditions of a license constitute RAND under the parties’ unique circumstances.

The fact that many firms engaged in SEP negotiations are simultaneously and repeatedly both licensors and licensees of patents governed by multiple SSOs further complicates the process—but also helps to ensure that it will reach a conclusion that promotes innovation and ensures that consumers reap the rewards.

In fact, an important issue in assessing the propriety of injunctions is the recognition that, in most cases, firms would rather license their patents and receive royalties than exclude access to their IP and receive no compensation (and incur the costs of protracted litigation, to boot). Importantly, for firms that both license out their own patents and license in those held by other firms (the majority of IT firms and certainly the norm for firms participating in SSOs), continued interactions on both sides of such deals help to ensure that licensing—not withholding—is the norm.

Companies are waging the smartphone patent wars with very different track records on SSO participation. Apple, for example, is relatively new to the mobile communications space and has relatively few SEPs, while other firms, like Samsung, are long-time players in the space with histories of extensive licensing (in both directions). But, current posturing aside, both firms have an incentive to license their patents, as Mark Summerfield notes:

Apple’s best course of action will most likely be to enter into licensing agreements with its competitors, which will not only result in significant revenues, but also push up the prices (or reduce the margins) on competitive products.

While some commentators make it sound as if injunctions threaten to cripple smartphone makers by preventing them from licensing essential technology on viable terms, companies in this space have been perfectly capable of orchestrating large-scale patent licensing campaigns. That these may increase costs to competitors is a feature—not a bug—of the system, representing the return on innovation that patents are intended to secure. Microsoft has wielded its sizeable patent portfolio to drive up the licensing fees paid by Android device manufacturers, and some commentators have even speculated that Microsoft makes more revenue from Android than Google does. But while Microsoft might prefer to kill Android with its patents, given the unlikeliness of this, as MG Siegler notes,

[T]he next best option is to catch a free ride on the Android train. Patent licensing deals already in place with HTC, General Dynamics, and others could mean revenues of over $1 billion by next year, as Forbes reports. And if they’re able to convince Samsung to sign one as well (which could effectively force every Android partner to sign one), we could be talking multiple billions of dollars of revenue each year.

Hand-wringing about patents is the norm, but so is licensing, and your smartphone exists, despite the thousands of patents that read on it, because the firms that hold those patents—some SEPs and some not—have, in fact, agreed to license them.

The inability to seek an injunction against an infringer, however, would ensure instead that patentees operate with reduced incentives to invest in technology and to enter into standards because they are precluded from benefiting from any subsequent increase in the value of their patents once they do so. As Epstein, Kieff and Spulber write:

The simple reality is that before a standard is set, it just is not clear whether a patent might become more or less valuable. Some upward pressure on value may be created later to the extent that the patent is important to a standard that is important to the market. In addition, some downward pressure may be caused by a later RAND commitment or some other factor, such as repeat play. The FTC seems to want to give manufacturers all of the benefits of both of these dynamic effects by in effect giving the manufacturer the free option of picking different focal points for elements of the damages calculations. The patentee is forced to surrender all of the benefit of the upward pressure while the manufacturer is allowed to get all of the benefit of the downward pressure.

Thus the problem with even the limited constraints imposed by the Google settlement: To the extent that the FTC’s settlement amounts to a prohibition on Google seeking injunctions against infringers unless the company accepts the infringer’s definition of “reasonable,” the settlement will harm the industry. It will reinforce a precedent that will likely reduce the incentives for companies and individuals to innovate, to participate in SSOs, and to negotiate in good faith.

Contrary to most assumptions about the patent system, it needs stronger, not weaker, property rules. With a no-injunction rule (whether explicit or de facto, as the Google settlement’s definition of “willing licensee” unfolds), a potential licensee has little incentive to negotiate with a patent holder and can instead refuse to license, infringe, try its hand in court, avoid royalties entirely until litigation is finished (and sometimes even longer), and, in the end, never be forced to pay a higher royalty than it would have if it had negotiated before the true value of the patents was known.

Flooding the courts and discouraging innovation and peaceful negotiations hardly seem like benefits to the patent system or the market. Unfortunately, the FTC’s approach to SEP licensing exemplified by the Google settlement may do just that.

I have been a critic of the Federal Trade Commission’s investigation into Google since it was a gleam in its competitors’ eyes—skeptical that there was any basis for a case, and concerned about the effect on consumers, innovation and investment if a case were brought.

While it took the Commission more than a year and a half to finally come to the same conclusion, ultimately the FTC had no choice but to close the case that was a “square peg, round hole” problem from the start.

Now that the FTC’s investigation has concluded, an examination of the nature of the markets in which Google operates illustrates why this crusade was ill-conceived from the start. In short, the “realities on the ground” strongly challenged the logic and relevance of many of the claims put forth by Google’s critics. Nevertheless, the politics are such that their nonsensical claims continue, in different forums, with competitors continuing to hope that they can wrangle a regulatory solution to their competitive problem.

The case against Google rested on certain assumptions about the functioning of the markets in which Google operates. Because these are tech markets, constantly evolving and complex, most assumptions about the scope of these markets and competitive effects within them are imperfect at best. But there are some attributes of Google’s markets—conveniently left out of the critics’ complaints—that, properly understood, painted a picture for the FTC that undermined the basic, essential elements of an antitrust case against the company.

That case was seriously undermined by the nature and extent of competition in the markets the FTC was investigating. Most importantly, casual references to a “search market” and “search advertising market” aside, Google actually competes in the market for targeted eyeballs: a market aimed at offering up targeted ads to interested users. Search offers a valuable opportunity for targeting an advertiser’s message, but it is by no means alone: there are myriad (and growing) other mechanisms to access consumers online.

Consumers use Google because they are looking for information — but there are lots of ways to do that. There are plenty of apps that circumvent Google, and consumers are increasingly going to specialized sites to find what they are looking for. The search market, if a distinct one ever existed, has evolved into an online information market that includes far more players than those who just operate traditional search engines.

We live in a world where what prevails today won’t prevail tomorrow. The tech industry is constantly changing, and it is the height of folly (and a serious threat to innovation and consumer welfare) to constrain the activities of firms competing in such an environment by pigeonholing the market. In other words, in a proper market, Google looks significantly less dominant. More important, perhaps, as search itself evolves, and as Facebook, Amazon and others get into the search advertising game, Google’s strong position even in the overly narrow “search market” is far from unassailable.

This is progress — creative destruction — not regress, and such changes should not be penalized.

Another common refrain from Google’s critics was that Google’s access to immense amounts of data used to increase the quality of its targeting presented a barrier to competition that no one else could match, thus protecting Google’s unassailable monopoly. But scale comes in lots of ways.

Even if scale doesn’t come cheaply, the fact that challenging firms might have to spend as much as (or, in this case, almost certainly less than) Google did in order to replicate its success is not a “barrier to entry” that requires an antitrust remedy. Data about consumer interests is widely available (despite efforts to reduce the availability of such data in the name of protecting “privacy”—which might actually create barriers to entry). It’s never been the case that a firm has to generate its own inputs for every product it produces — and there’s no reason to suggest search or advertising is any different.

Additionally, to sustain a claim of monopolization, a plaintiff generally must show that the alleged monopolist enjoys protection from competition through barriers to entry. In Google’s case, the barriers alleged were illusory. Bing and other recent entrants in the general search business have enjoyed success precisely because they were able to obtain the inputs (in this case, data) necessary to develop competitive offerings.

Meanwhile, unanticipated competitors like Facebook, Amazon, Twitter and others continue to knock at Google’s metaphorical door, all of them entering into competition with Google using data drawn from creative sources, and all of them potentially besting Google in the process. Consider, for example, Amazon’s recent move into the targeted advertising market, competing with Google to place ads on websites across the Internet, but with the considerable advantage of being able to target ads based on searches, or purchases, a user has made on Amazon—the world’s largest product search engine.

Now that the investigation has concluded, we come away with two major findings. First, the online information market is dynamic, and it is a fool’s errand to identify the power or significance of any player in these markets based on data available today — data that is already out of date between the time it is collected and the time it is analyzed.

Second, each development in the market – whether offered by Google or its competitors and whether facilitated by technological change or shifting consumer preferences – has presented different, novel and shifting opportunities and challenges for companies interested in attracting eyeballs, selling ad space and data, earning revenue and obtaining market share. To say that Google dominates “search” or “online advertising” missed the mark precisely because there was simply nothing especially antitrust-relevant about either search or online advertising. Because of their own unique products, innovations, data sources, business models, entrepreneurship and organizations, all of these companies have challenged and will continue to challenge the dominant company — and the dominant paradigm — in a shifting and evolving range of markets.

It would be churlish not to give credit where credit is due—and credit is due the FTC. I continue to think the investigation should have ended before it began, of course, but the FTC is to be commended for reaching this result amidst an overwhelming barrage of pressure to “do something.”

But there are others in this sadly politicized mess for whom neither the facts nor the FTC’s extensive investigation process (nor the finer points of antitrust law) are enough. Like my four-year-old daughter, they just “want what they want,” and they will stamp their feet until they get it.

While competitors will be competitors—using the regulatory system to accomplish what they can’t in the market—they do a great disservice to the very customers they purport to be protecting in doing so. As Milton Friedman famously said, in decrying “The Business Community’s Suicidal Impulse”:

As a believer in the pursuit of self-interest in a competitive capitalist system, I can’t blame a businessman who goes to Washington and tries to get special privileges for his company.… Blame the rest of us for being so foolish as to let him get away with it.

I do blame businessmen when, in their political activities, individual businessmen and their organizations take positions that are not in their own self-interest and that have the effect of undermining support for free private enterprise. In that respect, businessmen tend to be schizophrenic. When it comes to their own businesses, they look a long time ahead, thinking of what the business is going to be like 5 to 10 years from now. But when they get into the public sphere and start going into the problems of politics, they tend to be very shortsighted.

Ironically, Friedman was writing about the antitrust persecution of Microsoft by its rivals back in 1999:

Is it really in the self-interest of Silicon Valley to set the government on Microsoft? Your industry, the computer industry, moves so much more rapidly than the legal process, that by the time this suit is over, who knows what the shape of the industry will be.… [Y]ou will rue the day when you called in the government.

Among Microsoft’s chief tormentors was Gary Reback. He’s spent the last few years beating the drum against Google—but singing from the same song book. Reback recently told the Washington Post, “if a settlement were to be proposed that didn’t include search, the institutional integrity of the FTC would be at issue.” Actually, no it wouldn’t. As a matter of fact, the opposite is true. It’s hard to imagine an agency under more pressure, from more quarters (including the Hill), to bring a case around search. Doing so would at least raise the possibility that it was doing so because of pressure and not the merits of the case. But not doing so in the face of such pressure? That can almost only be a function of institutional integrity.

As another of Google’s most outspoken critics, Tom Barnett, noted:

[The FTC has] really put [itself] in the position where they are better positioned now than any other agency in the U.S. is likely to be in the immediate future to address these issues. I would encourage them to take the issues as seriously as they can. To the extent that they concur that Google has violated the law, there are very good reasons to try to address the concerns as quickly as possible.

As Barnett acknowledges, there is no question that the FTC investigated these issues more fully than anyone. The agency’s institutional culture and its committed personnel, together with political pressure, media publicity and endless competitor entreaties, virtually ensured that the FTC took the issues “as seriously as they [could]” – in fact, as seriously as anyone else in the world. There is simply no reasonable way to criticize the FTC for being insufficiently thorough in its investigation and conclusions.

Nor is there a basis for claiming that the FTC is “standing in the way” of the courts’ ability to review the issue, as Scott Cleland contends in an op-ed in the Hill. Frankly, this is absurd. Google’s competitors have spent millions pressuring the FTC to bring a case. But the FTC isn’t remotely the only path to the courts. As Commissioner Rosch admonished,

They can darn well bring [a case] as a private antitrust action if they think their ox is being gored instead of free-riding on the government to achieve the same result.

Competitors have already beaten a path to the DOJ’s door, and investigations are still pending in the EU, Argentina, several US states, and elsewhere. That the agency that has conducted the fullest and best-informed investigation has concluded that there is no “there” there should give these authorities pause, but, sadly for consumers who would benefit from an end to competitors’ rent seeking, nothing the FTC has done actually prevents courts or other regulators from having a crack at Google.

The case against Google has received more attention from the FTC than the merits of the case ever warranted. It is time for Google’s critics and competitors to move on.

[Crossposted at Forbes.com]

Pretty interesting interview with Google’s Senior VP Amit Singhal on where search technology is headed. In the article, Singhal describes the shift from a content-based keyword index to incorporating links and other signals to improve query results. The most interesting part of the interview is about what comes next.

Google now wants to transform words that appear on a page into entities that mean something and have related attributes. It’s what the human brain does naturally, but for computers, it’s known as Artificial Intelligence.

It’s a challenging task, but the work has already begun. Google is “building a huge, in-house understanding of what an entity is and a repository of what entities are in the world and what should you know about those entities,” said Singhal.

In 2010, Google purchased Metaweb, the company behind Freebase, a community-built knowledge base packed with some 12 million canonical entities. Twelve million is a good start, but Google has, according to Singhal, invested dramatically to “build a huge knowledge graph of interconnected entities and their attributes.”

The transition from a word-based index to this knowledge graph is a fundamental shift that will radically increase power and complexity. Singhal explained that the word index is essentially like the index you find at the back of a book: “A knowledge base is huge compared to the word index and far more refined or advanced.”
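The contrast Singhal draws between a word index and an entity-centric knowledge graph can be sketched in a few lines of Python. This is a purely illustrative toy (all names and data here are hypothetical), not a representation of Google's actual systems:

```python
# Toy contrast between a keyword index and an entity-style knowledge graph.
# Purely illustrative -- Google's real systems are vastly larger and different.

# A word index maps terms to the documents containing them, like a book index.
word_index = {
    "monet": ["doc1", "doc7"],
    "painter": ["doc1", "doc3"],
}

# A knowledge graph instead stores typed entities with attributes and links to
# other entities, so the system "knows" Monet is a painter with notable works.
knowledge_graph = {
    "Claude Monet": {
        "type": "painter",
        "notable_works": ["Water Lilies", "Impression, Sunrise"],
        "movement": "Impressionism",
    },
    "Impression, Sunrise": {"type": "artwork", "creator": "Claude Monet"},
}

def answer(query: str) -> list[str]:
    """Entity lookup: return notable works if the query names a known painter;
    otherwise fall back to plain keyword retrieval."""
    entity = knowledge_graph.get(query)
    if entity and entity.get("type") == "painter":
        return entity["notable_works"]
    return word_index.get(query.lower(), [])

print(answer("Claude Monet"))  # ['Water Lilies', 'Impression, Sunrise']
print(answer("monet"))         # ['doc1', 'doc7']
```

The word index can only say *which documents mention "monet"*; the graph can answer *what Monet is* and surface his works directly, which is the shift the article describes.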

Right now Google is, Singhal told me, building the infrastructure for the more algorithmically complex search of tomorrow, and that task, of course, does include more computers. All those computers are helping the search giant build out the knowledge graph, which now has “north of 200 million entities.” What can you do with that kind of knowledge graph (or base)?

Initially, you just take baby steps. Although evidence of this AI-like intelligence is beginning to show up in Google Search results, most people probably haven’t even noticed it.

For example:

Type “Monet” into Google Search, for instance, and, along with the standard results, you’ll find a small area at the bottom: “Artwork Searches for Claude Monet.” In it are thumbnail results of the top five or six works by the master. Singhal says this is an indication that Google search is beginning to understand that Monet is a painter and that the most important thing about an artist is his greatest works.

When I note that this does not seem wildly different from or more exceptional than the traditional results above, Singhal cautioned me that judging the knowledge graph’s power on this would be like judging an artist on work he did as a 12- or 24-month-old.

Check out the whole article. Counterfactuals are always difficult — but it’s hard to imagine a basis for arguing that the evolution of search technology would have been — or will be — better for consumers with government regulation.

By Berin Szoka, Geoffrey Manne & Ryan Radia

As has become customary with just about every new product announcement by Google these days, the company’s introduction on Tuesday of its new “Search, plus Your World” (SPYW) program, which aims to incorporate a user’s Google+ content into her organic search results, has met with cries of antitrust foul play. All the usual blustering and speculation in the latest Google antitrust debate has obscured what should, however, be the two key prior questions: (1) Did Google violate the antitrust laws by not including data from Facebook, Twitter and other social networks in its new SPYW program alongside Google+ content; and (2) How might antitrust restrain Google in conditioning participation in this program in the future?

The answer to the first is a clear no. The second is more complicated—but also purely speculative at this point, especially because it’s not even clear Facebook and Twitter really want to be included or what their price and conditions for doing so would be. So in short, it’s hard to see what there is to argue about yet.

Let’s consider both questions in turn.

Should Google Have Included Other Services Prior to SPYW’s Launch?

Google says it’s happy to add non-Google content to SPYW but, as Google fellow Amit Singhal told Danny Sullivan, a leading search engine journalist:

Facebook and Twitter and other services, basically, their terms of service don’t allow us to crawl them deeply and store things. Google+ is the only [network] that provides such a persistent service,… Of course, going forward, if others were willing to change, we’d look at designing things to see how it would work.

In a follow-up story, Sullivan quotes his interview with Google executive chairman Eric Schmidt about how this would work:

“To start with, we would have a conversation with them,” Schmidt said, about settling any differences.

I replied that with the Google+ suggestions now hitting Google, there was no need to have any discussions or formal deals. Google’s regular crawling, allowed by both Twitter and Facebook, was a form of “automated conversation” giving Google material it could use.

“Anything we do with companies like that, it’s always better to have a conversation,” Schmidt said.

MG Siegler calls this “doublespeak” and seems to think Google violated the antitrust laws by not making SPYW more inclusive right out of the gate. He insists Google didn’t need permission to include public data in SPYW:

Both Twitter and Facebook have data that is available to the public. It’s data that Google crawls. It’s data that Google even has some social context for thanks to older Google Profile features, as Sullivan points out.

It’s not all the data inside the walls of Twitter and Facebook — hence the need for firehose deals. But the data Google can get is more than enough for many of the high level features of Search+ — like the “People and Places” box, for example.

It’s certainly true that if you search Google for “site:twitter.com” or “site:facebook.com,” you’ll get billions of search results from publicly-available Facebook and Twitter pages, and that Google already has some friend connection data via social accounts you might have linked to your Google profile (check out this dashboard), as Sullivan notes. But the public data isn’t available in real-time, and the private, social connection data is limited and available only for users who link their accounts. For Google to access real-time results and full social connection data would require… you guessed it… permission from Twitter (or Facebook)! As it happens, Twitter and Google had a deal for a “data firehose” so that Google could display tweets in real-time under the “personalized search” program for public social information that SPYW builds on top of. But Twitter ended the deal last May for reasons neither company has explained.

At best, therefore, Google might have included public, relatively stale social information from Twitter and Facebook in SPYW—content that is, in any case, already included in basic search results and remains available there. The real question, however, isn’t could Google have included this data in SPYW, but rather need they have? If Google’s engineers and executives decided that the incorporation of this limited data would present an inconsistent user experience or otherwise diminish its uniquely new social search experience, it’s hard to fault the company for deciding to exclude it. Moreover, as an antitrust matter, both the economics and the law of anticompetitive product design are uncertain. In general, as with issues surrounding the vertical integration claims against Google, product design that hurts rivals can (it should be self-evident) be quite beneficial for consumers. Here, it’s difficult to see how the exclusion of non-Google+ social media from SPYW could raise the costs of Google’s rivals, result in anticompetitive foreclosure, retard rivals’ incentives for innovation, or otherwise result in anticompetitive effects (as required to establish an antitrust claim).

Further, it’s easy to see why Google’s lawyers would prefer express permission from competitors before using their content in this way. After all, Google was denounced last year for “scraping” a different type of social content, user reviews, most notably by Yelp’s CEO at the contentious Senate antitrust hearing in September. Perhaps one could distinguish that situation from this one, but it’s not obvious where to draw the line between content Google has a duty to include without “making excuses” about needing permission and content Google has a duty not to include without express permission. Indeed, this seems like a case of “damned if you do, damned if you don’t.” It seems only natural for Google to be gun-shy about “scraping” other services’ public content for use in its latest search innovation without at least first conducting, as Eric Schmidt puts it, a “conversation.”

And as we noted, integrating non-public content would require not just permission but active coordination about implementation. SPYW displays Google+ content only to users who are logged into their Google+ account. Similarly, to display content shared with a user’s friends (but not the world) on Facebook, or protected tweets, Google would need a feed of that private data and a way of logging the user into his or her account on those sites.

Now, if Twitter truly wants Google to feature tweets in Google’s personalized search results, why did Twitter end its agreement with Google last year? Google responded to Twitter’s criticism of its SPYW launch last night with a short Google+ statement:

We are a bit surprised by Twitter’s comments about Search plus Your World, because they chose not to renew their agreement with us last summer, and since then we have observed their rel=nofollow instructions [by removing Twitter content results from “personalized search” results].

Perhaps Twitter simply got a better deal: Microsoft may have paid Twitter $30 million last year for a similar deal allowing Bing users to receive Twitter results. If Twitter really is playing hardball, Google is not guilty of discriminating against Facebook and Twitter in favor of its own social platform. Rather, it’s simply unwilling to pony up the cash that Facebook and Twitter are demanding—and there’s nothing illegal about that.

Indeed, the issue may go beyond a simple pricing dispute. If you were CEO of Twitter or Facebook, would you really think it was a net win if your users could use Google search as an interface for your site? After all, these social networking sites are in an intense war for eyeballs: the more time users spend on Google, the more ads Google can sell, to the detriment of Facebook or Twitter. Facebook probably sees itself increasingly in direct competition with Google as a tool for finding information. Its social network has vastly more users than Google+ (800 million vs. 62 million, with an even larger lead in active users), and, in most respects, more social functionality. The one area where Facebook lags is search functionality. Would Facebook really want to let Google become the tool for searching social networks—one social search engine “to rule them all”? Or would Facebook prefer to continue developing “social search” in partnership with Bing? On Bing, it can control how its content appears—and Facebook sees Microsoft as a partner, not a rival (at least until it can build its own search functionality inside the web’s hottest property).

Adding to this dynamic, and perhaps ultimately fueling some of the fire against SPYW, is the fact that many Google+ users seem to be multi-homing, using both Facebook and Google+ (and other social networks) at the same time, and even using various aggregators and syncing tools (Start Google+, for example) to unify social media streams and share content among them. Before SPYW, this might have seemed like a boon to Facebook, staunching any potential defectors from its network onto Google+ by keeping them engaged with both, with a kind of “Facebook primacy” ensuring continued eyeball time on its site. But Facebook might see SPYW as a threat to this primacy—in effect, reversing users’ primary “home” as they effectively import their Facebook data into SPYW via their Google+ accounts (such as through Start Google+). If SPYW can effectively facilitate indirect Google searching of private Facebook content, the fears we suggest above may be realized, and more users may forego visiting Facebook.com (and seeing its advertisers), accessing much of their Facebook content elsewhere—where Facebook cannot monetize their attention.

Amidst all the antitrust hand-wringing over SPYW and Google’s decision to “go it alone” for now, it’s worth noting that Facebook has remained silent. Even Twitter has said little more than a tweet’s worth about the issue. It’s simply not clear that Google’s rivals would even want to participate in SPYW. This could still be bad for consumers, but in that case, the source of the harm, if any, wouldn’t be Google. If this all sounds speculative, it is—and that’s precisely the point. No one really knows. So, again, what’s to argue about on Day 3 of the new social search paradigm?

The Debate to Come: Conditioning Access to SPYW

While Twitter and Facebook may well prefer that Google not index their content on SPYW—at least, not unless Google is willing to pay up—suppose the social networking firms took Google up on its offer to have a “conversation” about greater cooperation. Google hasn’t made clear on what terms it would include content from other social media platforms. So it’s at least conceivable that, when pressed to make good on its lofty-but-vague offer to include other platforms, Google might insist on unacceptable terms. In principle, there are essentially three possibilities here:

  1. Antitrust law requires nothing because there are pro-consumer benefits for Google to make SPYW exclusive and no clear harm to competition (as distinct from harm to competitors) for doing so, as our colleague Josh Wright argues.
  2. Antitrust law requires Google to grant competitors access to SPYW on commercially reasonable terms.
  3. Antitrust law requires Google to grant such access on terms dictated by its competitors, even if unreasonable to Google.

Door #3 is a legal non-starter. In Aspen Skiing v. Aspen Highlands (1985), the Supreme Court came the closest it has ever come to endorsing the “essential facilities” doctrine by which a competitor has a duty to offer its facilities to competitors. But in Verizon Communications v. Trinko (2004), the Court made clear that even Aspen Skiing is “at or near the outer boundary of § 2 liability.” Part of the basis for the decision in Aspen Skiing was the existence of a prior, profitable relationship between the “essential facility” in question and the competitor seeking access. Although the assumption is neither warranted nor sufficient (circumstances change, of course, and merely “profitable” is not the same thing as “best available use of a resource”), the Court in Aspen Skiing seems to have been swayed by the view that the access in question was otherwise profitable for the company that was denying it. Trinko limited the reach of the doctrine to the extraordinary circumstances of Aspen Skiing, and thus, as the Court affirmed in Pacific Bell v. LinkLine (2009), it seems there is no antitrust duty for a firm to offer access to a competitor on commercially unreasonable terms (as Geoff Manne discusses at greater length in his chapter on search bias in TechFreedom’s free ebook, The Next Digital Decade).

So Google either has no duty to deal at all, or a duty to deal only on reasonable terms. But what would a competitor have to show to establish such a duty? And how would “reasonableness” be defined?

First, this issue parallels claims made more generally about Google’s supposed “search bias.” As Josh Wright has said about those claims, “[p]roperly articulated vertical foreclosure theories proffer both that bias is (1) sufficient in magnitude to exclude Google’s rivals from achieving efficient scale, and (2) actually directed at Google’s rivals.” Supposing (for the moment) that the second point could be established, it’s hard to see how Facebook or Twitter could really show that being excluded from SPYW—while still having their available content show up as it always has in Google’s “organic” search results—would actually “render their efforts to compete for distribution uneconomical,” which, as Josh explains, antitrust law would require them to show. Google+ is a tiny service compared to Google or Facebook. And even Google itself, for all the awe and loathing it inspires, lags in the critical metric of user engagement, keeping the average user on site for only a quarter as much time as Facebook.

Moreover, by these same measures, it’s clear that Facebook and Twitter don’t need access to Google search results at all, much less its relatively trivial SPYW results, in order to find, and be found by, users; it’s difficult to know from what even vaguely relevant market they could possibly be foreclosed by their absence from SPYW results. Does SPYW potentially help Google+, to Facebook’s detriment? Yes. Just as Facebook’s deal with Microsoft hurts Google. But this is called competition. The world would be a desolate place if antitrust laws effectively prohibited firms from making decisions that helped themselves at their competitors’ expense.

After all, no one seems to be suggesting that Microsoft should be forced to include Google+ results in Bing—and rightly so. Microsoft’s exclusive partnership with Facebook is an important example of how a market leader in one area (Facebook in social) can help a market laggard in another (Microsoft in search) compete more effectively with a common rival (Google). In other words, banning exclusive deals can actually make it more difficult to unseat an incumbent (like Google), especially where the technologies involved are constantly evolving, as here.

Antitrust meddling in such arrangements, particularly in high-risk, dynamic markets where large up-front investments are frequently required (and lost), risks deterring innovation and reducing the very dynamism from which consumers reap such incredible rewards. “Reasonable” is a dangerously slippery concept in such markets, and a recipe for costly errors by the courts asked to define the concept. We suspect that disputes arising out of these sorts of deals will largely boil down to skirmishes over pricing, financing and marketing—the essential dilemma of new media services whose business models are as much the object of innovation as their technologies. Turning these, by little more than innuendo, into nefarious anticompetitive schemes is extremely—and unnecessarily—risky.