Today I published an article in The Daily Signal bemoaning the European Commission’s June 27 decision to fine Google $2.7 billion for engaging in procompetitive, consumer welfare-enhancing conduct.  The article is reproduced below (internal hyperlinks omitted), in italics:

On June 27, the European Commission—Europe’s antitrust enforcer—fined Google over $2.7 billion for a supposed violation of European antitrust law that bestowed benefits, not harm, on consumers.

And that’s just for starters. The commission is vigorously pursuing other antitrust investigations of Google that could lead to the imposition of billions of dollars in additional fines by European bureaucrats.

The legal outlook for Google is cloudy at best. Although the commission’s decisions can be appealed to European courts, European Commission bureaucrats have a generally good track record in winning before those tribunals.

But the problem is even bigger than that.

Recently, questionable antitrust probes have grown like Topsy around the world, many of them aimed at America’s most creative high-tech firms. Beneficial innovations have become legal nightmares—good for defense lawyers, but bad for free market competition and the health of the American economy.

What great crime did Google commit to merit the huge European Commission fine?

The commission claims that Google favored its own comparison shopping service over others in displaying Google search results.

Never mind that consumers apparently like the shopping-related service links they find on Google (after all, they keep using its search engine in droves), or can patronize any other search engine or specialized comparison shopping service that can be found with a few clicks of the mouse.

This is akin to saying that Kroger or Walmart harms competition when it gives favorable shelf space displays to its house brands. That’s ridiculous.

Somehow, such “favoritism” does not prevent consumers from flocking to those successful chains, or patronizing their competitors if they so choose. It is the essence of vigorous free market rivalry.  

The commission’s theory of anticompetitive behavior doesn’t hold water, as I explained in an earlier article. The Federal Trade Commission investigated Google’s search engine practices several years ago and found no evidence that alleged Google search engine display bias harmed consumers.

To the contrary, as former FTC Commissioner (and leading antitrust expert) Josh Wright has pointed out, and as the FTC found:

Google likely benefited consumers by prominently displaying its vertical content on its search results page. The Commission reached this conclusion based upon, among other things, analyses of actual consumer behavior—so-called ‘click through’ data—which showed how consumers reacted to Google’s promotion of its vertical properties.

In short, Google’s search policies benefit consumers. Antitrust is properly concerned with challenging business practices that harm consumer welfare and the overall competitive process, not with propping up particular competitors.

Absent a showing of actual harm to consumers, government antitrust cops—whether in Europe, the U.S., or elsewhere—should butt out.

Unfortunately, the European Commission shows no sign of heeding this commonsense advice. The Europeans have also charged Google with antitrust violations—with multibillion-dollar fines in the offing—based on the company’s promotion of its Android mobile operating system and its AdSense advertising service.

(That’s not all—other European Commission Google inquiries are also pending.)

As in the shopping services case, these investigations appear to be woefully short on evidence of harm to competition and consumer welfare.

The bigger question raised by the Google matters concerns the ability of any highly successful individual competitor to efficiently promote and favor its own offerings—something that has long been understood by American enforcers to be part and parcel of free-market competition.

As law professor Michael Carrier points out, any changes the EU forces on Google’s business model “could eventually apply to any way that Amazon, Facebook or anyone else offers to search for products or services.”

This is troublesome. Successful American information-age companies have already run afoul of the commission’s regulatory cops.

Microsoft and Intel absorbed multibillion-dollar European Commission antitrust fines in recent years, based on other theories of competitive harm. Amazon, Facebook, and Apple, among others, have faced European probes of their competitive practices and “privacy policies”—the terms under which they use or share sensitive information from consumers.

Often, these probes have been supported by less successful rivals who would rather rely on government intervention than compete on the merits.

Of course, being large and innovative is not a legal shield. Market-leading companies merit being investigated for actions that are truly harmful. The law applies equally to everyone.

But antitrust probes of efficient practices that confer great benefits on consumers (think how much the Google search engine makes it easier and cheaper to buy desired products and services and obtain useful information), based merely on the theory that some rivals may lose business, do not advance the free market. They retard it.

Who loses when zealous bureaucrats target efficient business practices by large, highly successful firms, as in the case of the European Commission’s Google probes and related investigations? The general public.

“Platform firms” like Google and Amazon that bring together consumers and other businesses will invest less in improving their search engines and other consumer-friendly features, for fear of being accused of undermining less successful competitors.

As a result, the supply of beneficial innovations will slow, and consumers will be less well off.

What’s more, competition will weaken, as the incentive to innovate to compete effectively with market leaders will be reduced. Regulation and government favor will substitute for welfare-enhancing improvement in goods, services, and platform quality. Economic vitality will inevitably be reduced, to the public’s detriment.

Europe is not the only place where American market leaders face unwarranted antitrust challenges.

For example, Qualcomm and InterDigital, U.S. firms that are leaders in smartphone communications technologies that power mobile interconnections, have faced large antitrust fines for, in essence, “charging too much” for licenses to their patented technologies.

South Korea also purported to impose a “global remedy” applying its artificially low royalty rates to all of Qualcomm’s licensing agreements around the world.

(All this is part and parcel of foreign government attacks on American intellectual property—patents, copyrights, trademarks, and trade secrets—that cost U.S. innovators hundreds of billions of dollars a year.)

A lack of basic procedural fairness in certain foreign antitrust proceedings has also bedeviled American companies, preventing them from being able to defend their conduct. Foreign antitrust has sometimes been perverted into a form of “industrial policy” that discriminates against American companies in favor of domestic businesses.

What can be done to confront these problems?

In 2016, the U.S. Chamber of Commerce convened a group of trade and antitrust experts to examine the problem. In March 2017, the chamber released a report by the experts describing the nature of the problem and making specific recommendations for U.S. government action to deal with it.

Specifically, the experts urged that a White House-led interagency task force be set up to develop a strategy for dealing with unwarranted antitrust attacks on American businesses—including both misapplication of legal rules and violations of due process.

The report also called for the U.S. government to work through existing international institutions and trade negotiations to promote a convergence toward sounder antitrust practices worldwide.

The Trump administration should take heed of the experts’ report and act decisively to combat harmful foreign antitrust distortions. Antitrust policy worldwide should focus on helping the competitive process work more efficiently, not on distorting it by shackling successful innovators.

One more point, not mentioned in the article, merits being stressed. Although the United States Government cannot control a foreign sovereign’s application of its competition law, it can engage in rhetoric and public advocacy aimed at convincing that sovereign to apply its law in a manner that promotes consumer welfare, competition on the merits, and economic efficiency. Regrettably, the Obama Administration, particularly in the latter part of its second term, did a miserable job of promoting a fact-based, empirical approach to antitrust enforcement, centered on hard evidence rather than on speculative theories of harm. In particular, certain political appointees gave lip service or silent acquiescence to inappropriate antitrust attacks on the unilateral exercise of intellectual property rights. In addition, those senior officials made statements that could have been interpreted as supportive of populist “big is bad” conceptions of antitrust that had been discredited decades ago – through sound scholarship, by U.S. enforcement policies, and in judicial decisions. The Trump Administration will have an opportunity to correct those errors, and to restore U.S. policy leadership in support of sound, pro-free market antitrust principles. Let us hope that it does so, and soon.

The precise details underlying the European Commission’s (EC) April 15 Statement of Objections (SO) against Google (the EC’s equivalent of an antitrust complaint), which centers on the company’s promotion of its comparison shopping service (CSS), “Google Shopping,” have not yet been made public. Nevertheless, the EC’s fact sheet describing the theory of the case is most discouraging to anyone who believes in economically sound, consumer welfare-oriented antitrust enforcement. Put simply, the SO alleges that Google is “abusing its dominant position” in online search services throughout Europe by systematically positioning and prominently displaying its CSS in its general search result pages, “irrespective of its merits,” causing the Google CSS to achieve higher rates of growth than CSSs promoted by rivals. According to the EC, this behavior “has a negative impact on consumers and innovation”. Why so? Because this “means that users do not necessarily see the most relevant shopping results in response to their queries, and that incentives to innovate from rivals are lowered as they know that however good their product, they will not benefit from the same prominence as Google’s product.” (Emphasis added.) The EC’s proposed solution? “Google should treat its own comparison shopping services and those of rivals in the same way.”

The EC’s latest action may represent only “the tip of a Google EC antitrust iceberg,” since the EC has stated that it is continuing to investigate other aspects of Google’s behavior, including Google agreements with respect to the Android operating system, plus “the favourable treatment by Google in its general search results of other specialised search services, and concerns with regard to copying of rivals’ web content (known as ‘scraping’), advertising exclusivity and undue restrictions on advertisers.”  For today, I focus on the tip, leaving consideration of the bulk of the iceberg to future commentaries, as warranted.  (Truth on the Market has addressed Google-related antitrust issues previously — see, for example, here, here, and here.)

The EC’s April 15 Google SO is troublesome in multiple ways.

First, the claim that Google does not “necessarily” array the most relevant search results in a manner desired by consumers appears to be in tension with the findings of an exhaustive U.S. antitrust investigation of the company.  As U.S. Federal Trade Commissioner Josh Wright pointed out in a recent speech, the FTC’s 2013 “closing statement [in its Google investigation] indicates that Google’s so-called search bias did not, in fact, harm consumers; to the contrary, the evidence suggested that ‘Google likely benefited consumers by prominently displaying its vertical content on its search results page.’  The Commission reached this conclusion based upon, among other things, analyses of actual consumer behavior – so-called ‘click through’ data – which showed how consumers reacted to Google’s promotion of its vertical properties.”

Second, even assuming that Google’s search engine practices have weakened competing CSSs, that would not justify EC enforcement action against Google.  As Commissioner Wright also explained, the FTC “accepted arguments made by competing websites that Google’s practices injured them and strengthened Google’s market position, but correctly found that these were not relevant considerations in a proper antitrust analysis focused upon consumer welfare rather than harm to competitors.”  The EC should keep this in mind, given that, as former EC Competition Commissioner Joaquin Almunia emphasized, “[c]onsumer welfare is not just a catchy phrase.  It is the cornerstone, the guiding principle of EU competition policy.”

Third, and perhaps most fundamentally, although the EC disclaims an interest in “interfer[ing] with” Google’s search engine algorithm, dictating an “equal treatment of competitors” result implicitly would require intrusive micromanagement of Google’s search engine – a search engine which is at the heart of the company’s success and has bestowed enormous welfare benefits on consumers and producers alike. There is no reason to believe that EC policing of CSS listings to promote an “equal treatment of competitors” mandate would result in a search experience that better serves consumers than the current Google policy. Consistent with this point, in its 2013 Google closing statement, the FTC observed that it lacked the ability to “second-guess” product improvements that plausibly benefit consumers, and it stressed that “condemning legitimate product improvements risks harming consumers.”

Fourth, competing CSSs have every incentive to inform consumers if they believe that Google search results are somehow “inferior” to their offerings. They are free to advertise and publicize the merits of their services, and third-party intermediaries that rate browsers may be expected to report if Google Shopping consistently offers suboptimal consumer services. In short, “the word will get out.” Even in the absence of perfect information, consumers can readily, and at low cost, browse alternative CSSs to determine whether they prefer those services to Google’s – “help is only a click away.”

Fifth, the most likely outcome of an EC “victory” in this case would be a reduced incentive for Google to invest in improving its search engine, knowing that its ability to monetize search engine improvements could be compromised by future EC decisions to prevent an improved search engine from harming rivals.  What’s worse, other developers of service platforms and other innovative business improvements would similarly “get the message” that it would not be worth their while to innovate to the point of dominance, because their returns to such innovation would be constrained.  In sum, companies in a wide variety of sectors would have less of an incentive to innovate, and this in turn would lead to reduced welfare gains and benefits to consumers.  This would yield (as the EC’s fact sheet put it) “a negative impact on consumers and innovation”, because companies across industries operating in Europe would know that if their product were too good, they would attract the EC’s attention and be put in their place.  In other words, a successful EC intervention here could spawn the very welfare losses (magnified across sectors) that the Commission cited as justification for reining in Google in the first place!

Finally, it should come as no surprise that a coalition of purveyors of competing search engines and online shopping sites lobbied hard for EC antitrust action against Google.  When government intervenes heavily and often in markets to “correct” perceived “abuses,” private actors have a strong incentive to expend resources on achieving government actions that disadvantage their rivals – resources that could otherwise have been used to compete more vigorously and effectively.  In short, the very existence of expansive regulatory schemes disincentivizes competition on the merits, and in that regard tends to undermine welfare.  Government officials should keep that firmly in mind when private actors urge them to act decisively to “cure” marketplace imperfections by limiting a rival’s freedom of action.

Let us hope that the EC takes these concerns to heart before taking further action against Google.

The Wall Street Journal reported yesterday that the FTC Bureau of Competition staff report to the commissioners in the Google antitrust investigation recommended that the Commission approve an antitrust suit against the company.

While this is excellent fodder for a few hours of Twitter hysteria, it takes more than 140 characters to delve into the nuances of a 20-month federal investigation. And the bottom line is, frankly, pretty ho-hum.

As I said recently,

One of life’s unfortunate certainties, as predictable as death and taxes, is this: regulators regulate.

The Bureau of Competition staff is made up of professional lawyers — many of them litigators, whose existence is predicated on there being actual, you know, litigation. If you believe in human fallibility at all, you have to expect that, when they err, FTC staff errs on the side of too much, rather than too little, enforcement.

So is it shocking that the FTC staff might recommend that the Commission undertake what would undoubtedly have been one of the agency’s most significant antitrust cases? Hardly.

Nor is it surprising that the commissioners might not always agree with staff. In fact, staff recommendations are ignored all the time, for better or worse. Here are just a few examples: R.J. Reynolds/Brown & Williamson merger, POM Wonderful, Home Shopping Network/QVC merger, cigarette advertising. No doubt there are many, many more.

Regardless, it also bears pointing out that the staff did not recommend the FTC bring suit on the central issue of search bias “because of the strong procompetitive justifications Google has set forth”:

Complainants allege that Google’s conduct is anticompetitive because it forecloses alternative search platforms that might operate to constrain Google’s dominance in search and search advertising. Although it is a close call, we do not recommend that the Commission issue a complaint against Google for this conduct.

But this caveat is enormous. To report this as the FTC staff recommending a case is seriously misleading. Here they are forbearing from bringing 99% of the case against Google, and recommending suit on the marginal 1% issues. It would be more accurate to say, “FTC staff recommends no case against Google, except on a couple of minor issues which will be immediately settled.”

And in fact it was on just these minor issues that Google agreed to voluntary commitments to curtail some conduct when the FTC announced it was not bringing suit against the company.

The Wall Street Journal quotes some other language from the staff report bolstering the conclusion that this is a complex market, the conduct at issue was ambiguous (at worst), and supporting the central recommendation not to sue:

We are faced with a set of facts that can most plausibly be accounted for by a narrative of mixed motives: one in which Google’s course of conduct was premised on its desire to innovate and to produce a high quality search product in the face of competition, blended with the desire to direct users to its own vertical offerings (instead of those of rivals) so as to increase its own revenues. Indeed, the evidence paints a complex portrait of a company working toward an overall goal of maintaining its market share by providing the best user experience, while simultaneously engaging in tactics that resulted in harm to many vertical competitors, and likely helped to entrench Google’s monopoly power over search and search advertising.

On a global level, the record will permit Google to show substantial innovation, intense competition from Microsoft and others, and speculative long-run harm.

This is exactly when you want antitrust enforcers to forbear. Predicting anticompetitive effects is difficult, and conduct that could be problematic may simultaneously constitute vigorous competition.

That the staff concluded that some of what Google was doing “harmed competitors” isn’t surprising — there were lots of competitors parading through the FTC on a daily basis claiming Google harmed them. But antitrust is about protecting consumers, not competitors. Far more important is the staff finding of “substantial innovation, intense competition from Microsoft and others, and speculative long-run harm.”

Indeed, the combination of “substantial innovation,” “intense competition from Microsoft and others,” and “Google’s strong procompetitive justifications” suggests a well-functioning market. It similarly suggests an antitrust case that the FTC would likely have lost. The FTC’s litigators should probably be grateful that the commissioners had the good sense to vote to close the investigation.

Meanwhile, the Wall Street Journal also reports that the FTC’s Bureau of Economics simultaneously recommended that the Commission not bring suit at all against Google. It is not uncommon for the lawyers and the economists at the Commission to disagree. And as a general (though not inviolable) rule, we should be happy when the Commissioners side with the economists.

While the press, professional Google critics, and the company’s competitors may want to make this sound like a big deal, the actual facts of the case and a pretty simple error-cost analysis suggest that not bringing a case was the correct course.

In recent years, the European Union’s (EU) administrative body, the European Commission (EC), increasingly has applied European competition law in a manner that undermines free market dynamics.  In particular, its approach to “dominant” firm conduct disincentivizes highly successful companies from introducing product and service innovations that enhance consumer welfare and benefit the economy – merely because they threaten to harm less efficient competitors.

For example, the EC fined Microsoft 561 million euros in 2013 for its failure to adhere to an order that it offer a version of its Windows software suite that did not include its popular Windows Media Player (WMP) – despite the lack of consumer demand for a “dumbed down” Windows without WMP. This EC intrusion into software design has been described as a regulatory “quagmire.”

In June 2017 the EC fined Google 2.42 billion euros for allegedly favoring its own comparison shopping service over others in displaying Google search results – ignoring economic research that shows Google’s search policies benefit consumers. Google also faces potentially higher EC antitrust fines due to alleged abuses involving Android software (bundling of popular Google search and Chrome apps), a product that has helped spur dynamic smartphone innovations and foster new markets.

Furthermore, other highly innovative single firms, such as Apple and Amazon (favorable treatment deemed “state aids”), Qualcomm (alleged anticompetitive discounts), and Facebook (in connection with its WhatsApp acquisition), face substantial EC competition law penalties.

Underlying the EC’s current enforcement philosophy is an implicit presumption that innovations by dominant firms violate competition law if they in any way appear to disadvantage competitors.  That presumption forgoes considering the actual effects on the competitive process of dominant firm activities.  This is a recipe for reduced innovation, as successful firms “pull their competitive punches” to avoid onerous penalties.

The European Court of Justice (ECJ) implicitly recognized this problem in its September 6, 2017 decision setting aside the European General Court’s affirmance of the EC’s 2009 fine of 1.06 billion euros against Intel. Intel involved allegedly anticompetitive “loyalty rebates,” which allowed buyers to achieve cost savings on their Intel chip purchases. In remanding the Intel case to the General Court for further legal and factual analysis, the ECJ’s opinion stressed that the EC needed to do more than find a dominant position and categorize the rebates in order to hold Intel liable. The EC also needed to assess the “capacity of [Intel’s] . . . practice to foreclose competitors which are at least as efficient” and whether any exclusionary effect was outweighed by efficiencies that also benefit consumers. In short, evidence-based antitrust analysis was required. Mere reliance on presumptions was not enough. Why? Because the departure of less efficient competitors is part and parcel of consumer welfare-based competition on the merits. As the ECJ cogently put it:

[I]t must be borne in mind that it is in no way the purpose of Article 102 TFEU [which prohibits abuse of a dominant position] to prevent an undertaking from acquiring, on its own merits, the dominant position on a market.  Nor does that provision seek to ensure that competitors less efficient than the undertaking with the dominant position should remain on the market . . . .  [N]ot every exclusionary effect is necessarily detrimental to competition. Competition on the merits may, by definition, lead to the departure from the market or the marginalisation of competitors that are less efficient and so less attractive to consumers from the point of view of, among other things, price, choice, quality or innovation[.]
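
To make the “as efficient competitor” inquiry concrete, here is a stylized sketch (in Python, with figures invented for illustration and drawn in no way from the actual Intel record) of the kind of price-cost screen that inquiry involves for retroactive rebates. The screen asks whether the “effective price” an equally efficient rival must offer on the contestable portion of a customer’s demand would fall below the dominant firm’s own measure of cost.

```python
# Hypothetical illustration only: a simple "as efficient competitor" (AEC)
# price-cost screen for a retroactive rebate. All figures are invented.

def effective_price(list_price, rebate_rate, total_units, contestable_units):
    """Price an equally efficient rival must beat on the contestable units.

    If a retroactive rebate applies to ALL units once a volume threshold is
    met, a customer that switches only the contestable units to a rival
    forfeits the rebate on its entire volume, so the rival must absorb that
    forgone rebate across the smaller contestable volume.
    """
    forgone_rebate = rebate_rate * list_price * total_units
    return list_price - forgone_rebate / contestable_units

list_price = 100.0         # dominant firm's list price per unit
rebate_rate = 0.10         # 10% retroactive rebate on all units
total_units = 1000         # customer's total annual requirement
contestable_units = 200    # portion realistically open to rivals
avg_avoidable_cost = 70.0  # dominant firm's (assumed) average avoidable cost

p_eff = effective_price(list_price, rebate_rate, total_units, contestable_units)
print(f"Effective price on contestable units: {p_eff:.2f}")  # 50.00
print("Rebate could foreclose an as-efficient rival"
      if p_eff < avg_avoidable_cost
      else "An as-efficient rival could profitably match the rebate")
```

The particular numbers do not matter; the point is that the ECJ is demanding an evidence-based assessment of prices, costs, and contestable volumes rather than an exercise in categorizing conduct and presuming harm.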

Although the ECJ’s recent decision is commendable, it does not negate the fact that Intel had to wait eight years to have its straightforward arguments receive attention – and the saga is far from over, since the General Court has to address this matter once again.  These sorts of long-term delays, during which firms face great uncertainty (and the threat of further EC investigations and fines), are antithetical to innovative activity by enterprises deemed dominant.  In short, unless and until the EC changes its competition policy perspective on dominant firm conduct (and there are no indications that such a change is imminent), innovation and economic dynamism will suffer.

Even if the EC dithers, the United Kingdom’s (UK) imminent withdrawal from the EU (Brexit) provides the UK with a unique opportunity to blaze a new competition policy trail – and perhaps in so doing influence other jurisdictions.

In particular, Brexit will enable the UK’s antitrust enforcer, the Competition and Markets Authority (CMA), to adopt an outlook on competition policy in general – and on single firm conduct in particular – that is more sensitive to innovation and economic dynamism.  What might such a CMA enforcement policy look like?  It should reject the EC’s current approach.  It should focus instead on the actual effects of competitive activity.  In particular, it should incorporate the insights of decision theory (see here, for example) and place great weight on efficiencies (see here, for example).

Let us hope that the CMA acts boldly – carpe diem.  Such action, combined with other regulatory reforms, could contribute substantially to the economic success of Brexit (see here).

I recently published a piece in the Hill welcoming the Canadian Supreme Court’s decision in Google v. Equustek. In this post I expand (at length) upon my assessment of the case.

In its decision, the Court upheld injunctive relief against Google, directing the company to avoid indexing websites offering the infringing goods in question, regardless of the location of the sites (and even though Google itself was neither a party to the case nor in any way held liable for the infringement). As a result, the Court’s ruling would affect Google’s conduct outside of Canada as well as within it.

The case raises some fascinating and thorny issues, but, in the end, the Court navigated them admirably.

Some others, however, were not so… welcoming of the decision (see, e.g., here and here).

The primary objection to the ruling seems to be, in essence, that it is the top of a slippery slope: “If Canada can do this, what’s to stop Iran or China from doing it? Free expression as we know it on the Internet will cease to exist.”

This is a valid concern, of course — in the abstract. But for reasons I explain below, we should see this case — and, more importantly, the approach adopted by the Canadian Supreme Court — as reassuring, not foreboding.

Some quick background on the exercise of extraterritorial jurisdiction in international law

The salient facts in, and the fundamental issue raised by, the case were neatly summarized by Hugh Stephens:

[The lower Court] issued an interim injunction requiring Google to de-index or delist (i.e. not return search results for) the website of a firm (Datalink Gateways) that was marketing goods online based on the theft of trade secrets from Equustek, a Vancouver, B.C., based hi-tech firm that makes sophisticated industrial equipment. Google wants to quash a decision by the lower courts on several grounds, primarily that the basis of the injunction is extra-territorial in nature and that if Google were to be subject to Canadian law in this case, this could open a Pandora’s box of rulings from other jurisdictions that would require global delisting of websites thus interfering with freedom of expression online, and in effect “break the Internet”.

The question of jurisdiction with regard to cross-border conduct is clearly complicated and evolving. But, in important ways, it isn’t anything new just because the Internet is involved. As Jack Goldsmith and Tim Wu (yes, Tim Wu) wrote (way back in 2006) in Who Controls the Internet?: Illusions of a Borderless World:

A government’s responsibility for redressing local harms caused by a foreign source does not change because the harms are caused by an Internet communication. Cross-border harms that occur via the Internet are not any different than those outside the Net. Both demand a response from governmental authorities charged with protecting public values.

As I have written elsewhere, “[g]lobal businesses have always had to comply with the rules of the territories in which they do business.”

Traditionally, courts have dealt with the extraterritoriality problem by applying a rule of comity. As my colleague, Geoffrey Manne (Founder and Executive Director of ICLE), reminds me, the principle of comity largely originated in the work of the 17th Century Dutch legal scholar, Ulrich Huber. Huber wrote that comitas gentium (“courtesy of nations”) required the application of foreign law in certain cases:

[Sovereigns will] so act by way of comity that rights acquired within the limits of a government retain their force everywhere so far as they do not cause prejudice to the powers or rights of such government or of their subjects.

And, notably, Huber wrote that:

Although the laws of one nation can have no force directly with another, yet nothing could be more inconvenient to commerce and to international usage than that transactions valid by the law of one place should be rendered of no effect elsewhere on account of a difference in the law.

The basic principle has been recognized and applied in international law for centuries. Of course, the flip side of the principle is that sovereign nations also get to decide for themselves whether to enforce foreign law within their jurisdictions. To summarize Huber (as well as Lord Mansfield, who brought the concept to England, and Justice Story, who brought it to the US):

All three jurists were concerned with deeply polarizing public issues — nationalism, religious factionalism, and slavery. For each, comity empowered courts to decide whether to defer to foreign law out of respect for a foreign sovereign or whether domestic public policy should triumph over mere courtesy. For each, the court was the agent of the sovereign’s own public law.

The Canadian Supreme Court’s well-reasoned and admirably restrained approach in Equustek

Reconciling the potential conflict between the laws of Canada and those of other jurisdictions was, of course, a central subject of consideration for the Canadian Court in Equustek. The Supreme Court, as described below, weighed a variety of factors in determining the appropriateness of the remedy. In analyzing the competing equities, the Supreme Court set out the following framework:

[I]s there a serious issue to be tried; would the person applying for the injunction suffer irreparable harm if the injunction were not granted; and is the balance of convenience in favour of granting the interlocutory injunction or denying it. The fundamental question is whether the granting of an injunction is just and equitable in all of the circumstances of the case. This will necessarily be context-specific. [Here, as throughout this post, bolded text represents my own, added emphasis.]

Applying that standard, the Court held that because ordering an interlocutory injunction against Google was the only practical way to prevent Datalink from flouting the court’s several orders, and because there were no sufficient, countervailing comity or freedom of expression concerns in this case that would counsel against such an order being granted, the interlocutory injunction was appropriate.

I draw particular attention to the following from the Court’s opinion:

Google’s argument that a global injunction violates international comity because it is possible that the order could not have been obtained in a foreign jurisdiction, or that to comply with it would result in Google violating the laws of that jurisdiction is, with respect, theoretical. As Fenlon J. noted, “Google acknowledges that most countries will likely recognize intellectual property rights and view the selling of pirated products as a legal wrong”.

And while it is always important to pay respectful attention to freedom of expression concerns, particularly when dealing with the core values of another country, I do not see freedom of expression issues being engaged in any way that tips the balance of convenience towards Google in this case. As Groberman J.A. concluded:

In the case before us, there is no realistic assertion that the judge’s order will offend the sensibilities of any other nation. It has not been suggested that the order prohibiting the defendants from advertising wares that violate the intellectual property rights of the plaintiffs offends the core values of any nation. The order made against Google is a very limited ancillary order designed to ensure that the plaintiffs’ core rights are respected.

In fact, as Andrew Keane Woods writes at Lawfare:

Under longstanding conflicts of laws principles, a court would need to weigh the conflicting and legitimate governments’ interests at stake. The Canadian court was eager to undertake that comity analysis, but it couldn’t do so because the necessary ingredient was missing: there was no conflict of laws.

In short, the Canadian Supreme Court, while acknowledging the importance of comity and appropriate restraint in matters with extraterritorial effect, carefully weighed the equities in this case and found that they favored the grant of extraterritorial injunctive relief. As the Court explained:

Datalink [the direct infringer] and its representatives have ignored all previous court orders made against them, have left British Columbia, and continue to operate their business from unknown locations outside Canada. Equustek has made efforts to locate Datalink with limited success. Datalink is only able to survive — at the expense of Equustek’s survival — on Google’s search engine which directs potential customers to Datalink’s websites. This makes Google the determinative player in allowing the harm to occur. On balance, since the world‑wide injunction is the only effective way to mitigate the harm to Equustek pending the trial, the only way, in fact, to preserve Equustek itself pending the resolution of the underlying litigation, and since any countervailing harm to Google is minimal to non‑existent, the interlocutory injunction should be upheld.

As I have stressed, key to the Court’s reasoning was its close consideration of possible countervailing concerns and its entirely fact-specific analysis. By the very terms of the decision, the Court made clear that its balancing would not necessarily lead to the same result where sensibilities or core values of other nations would be offended. In this particular case, they were not.

How critics of the decision (and there are many) completely miss the true import of the Court’s reasoning

In other words, the holding in this case was a function of how, given the facts of the case, the ruling would affect the particular core concerns at issue: protection and harmonization of global intellectual property rights on the one hand, and concern for the “sensibilities of other nations,” including their concern for free expression, on the other.

This should be deeply reassuring to those now criticizing the decision. And yet… it’s not.

Whether because they haven’t actually read or properly understood the decision, or because they are merely grandstanding, some commenters are proclaiming that the decision marks the End Of The Internet As We Know It — you know, it’s going to break the Internet. Or something.

Human Rights Watch, an organization I generally admire, issued a statement including the following:

The court presumed no one could object to delisting someone it considered an intellectual property violator. But other countries may soon follow this example, in ways that more obviously force Google to become the world’s censor. If every country tries to enforce its own idea of what is proper to put on the Internet globally, we will soon have a race to the bottom where human rights will be the loser.

The British Columbia Civil Liberties Association added:

Here it was technical details of a product, but you could easily imagine future cases where we might be talking about copyright infringement, or other things where people in private lawsuits are wanting things to be taken down off the internet that are more closely connected to freedom of expression.

From the other side of the traditional (if insufficiently nuanced) “political spectrum,” AEI’s Ariel Rabkin asserted that

[O]nce we concede that Canadian courts can regulate search engine results in Turkey, it is hard to explain why a Turkish court shouldn’t have the reciprocal right. And this is no hypothetical — a Turkish court has indeed ordered Twitter to remove a user (AEI scholar Michael Rubin) within the United States for his criticism of Erdogan. Once the jurisdictional question is decided, it is no use raising free speech as an issue. Other countries do not have our free speech norms, nor Canada’s. Once Canada concedes that foreign courts have the right to regulate Canadian search results, they are on the internet censorship train, and there is no egress before the end of the line.

In this instance, in particular, it is worth noting not only the complete lack of acknowledgment of the Court’s articulated constraints on taking action with extraterritorial effect, but also the fact that Turkey (among others) has hardly been waiting for approval from Canada before taking action.   

And then there’s EFF (of course). EFF, fairly predictably, suggests first — with unrestrained hyperbole — that the Supreme Court held that:

A country has the right to prevent the world’s Internet users from accessing information.

Dramatic hyperbole aside, that’s also a stilted way to characterize the content at issue in the case. But it is important to EFF’s misleading narrative to begin with the assertion that offering infringing products for sale is “information” to which access by the public is crucial. But, of course, the distribution of infringing products is hardly “expression,” as most of us would understand that term. To claim otherwise is to denigrate the truly important forms of expression that EFF claims to want to protect.

And, it must be noted, even if there were expressive elements at issue, infringing “expression” is always subject to restriction under the copyright laws of virtually every country in the world (and free speech laws, where they exist).

Nevertheless, EFF writes that the decision:

[W]ould cut off access to information for U.S. users would set a dangerous precedent for online speech. In essence, it would expand the power of any court in the world to edit the entire Internet, whether or not the targeted material or site is lawful in another country. That, we warned, is likely to result in a race to the bottom, as well-resourced individuals engage in international forum-shopping to impose the one country’s restrictive laws regarding free expression on the rest of the world.

Beyond the flaws of the ruling itself, the court’s decision will likely embolden other countries to try to enforce their own speech-restricting laws on the Internet, to the detriment of all users. As others have pointed out, it’s not difficult to see repressive regimes such as China or Iran use the ruling to order Google to de-index sites they object to, creating a worldwide heckler’s veto.

As always with EFF missives, caveat lector applies: None of this is fair or accurate. EFF (like the other critics quoted above) is looking only at the result — the specific contours of the global order related to the Internet — and not to the reasoning of the decision itself.

Quite tellingly, EFF urges its readers to ignore the case in front of them in favor of a theoretical one. That is unfortunate. Were EFF, et al. to pay closer attention, they would be celebrating this decision as a thoughtful, restrained, respectful, and useful standard to be employed as a foundational decision in the development of global Internet governance.

The Canadian decision is (as I have noted, but perhaps still not with enough repetition…) predicated on achieving equity upon close examination of the facts, and giving due deference to the sensibilities and core values of other nations in making decisions with extraterritorial effect.

Properly understood, the ruling is a shield against intrusions that undermine freedom of expression, and not an attack on expression.

EFF subverts the reasoning of the decision and thus camouflages its true import, all for the sake of furthering its apparently limitless crusade against all forms of intellectual property. The ruling can be read as an attack on expression only if one ascribes to the distribution of infringing products the status of protected expression — so that’s what EFF does. But distribution of infringing products is not protected expression.

Extraterritoriality on the Internet is complicated — but that undermines, rather than justifies, critics’ opposition to the Court’s analysis

There will undoubtedly be other cases that present more difficult challenges than this one in defining the jurisdictional boundaries of courts’ abilities to address Internet-based conduct with multi-territorial effects. But the guideposts employed by the Supreme Court of Canada will be useful in informing such decisions.

Of course, some states don’t (or won’t, when it suits them) adhere to principles of comity. But that was true long before the Equustek decision. And, frankly, the notion that this decision gives nations like China or Iran political cover for global censorship is ridiculous. Nations that wish to censor the Internet will do so regardless. If anything, reference to this decision (which, let me spell it out again, highlights the importance of avoiding relief that would interfere with core values or sensibilities of other nations) would undermine their efforts.

Rather, the decision will be far more helpful in combating censorship and advancing global freedom of expression. Indeed, as noted by Hugh Stephens in a recent blog post:

While the EFF, echoed by its Canadian proxy OpenMedia, went into hyperventilation mode with the headline, “Top Canadian Court permits Worldwide Internet Censorship”, respected organizations like the Canadian Civil Liberties Association (CCLA) welcomed the decision as having achieved the dual objectives of recognizing the importance of freedom of expression and limiting any order that might violate that fundamental right. As the CCLA put it,

While today’s decision upholds the worldwide order against Google, it nevertheless reflects many of the freedom of expression concerns CCLA had voiced in our interventions in this case.

As I noted in my piece in the Hill, this decision doesn’t answer all of the difficult questions related to identifying proper jurisdiction and remedies with respect to conduct that has global reach; indeed, that process will surely be perpetually unfolding. But, as reflected in the comments of the Canadian Civil Liberties Association, it is a deliberate and well-considered step toward a fair and balanced way of addressing Internet harms.

With apologies for quoting myself, I noted the following in an earlier piece:

I’m not unsympathetic to Google’s concerns. As a player with a global footprint, Google is legitimately concerned that it could be forced to comply with the sometimes-oppressive and often contradictory laws of countries around the world. But that doesn’t make it — or any other Internet company — unique. Global businesses have always had to comply with the rules of the territories in which they do business… There will be (and have been) cases in which taking action to comply with the laws of one country would place a company in violation of the laws of another. But principles of comity exist to address the problem of competing demands from sovereign governments.

And as Andrew Keane Woods noted:

Global takedown orders with no limiting principle are indeed scary. But Canada’s order has a limiting principle. As long as there is room for Google to say to Canada (or France), “Your order will put us in direct and significant violation of U.S. law,” the order is not a limitless assertion of extraterritorial jurisdiction. In the instance that a service provider identifies a conflict of laws, the state should listen.

That is precisely what the Canadian Supreme Court’s decision contemplates.

No one wants an Internet based on the lowest common denominator of acceptable speech. Yet some appear to want an Internet based on the lowest common denominator for the protection of original expression. These advocates thus endorse theories of jurisdiction that would deny societies the ability to enforce their own laws, just because sometimes those laws protect intellectual property.

And yet that reflects little more than an arbitrary prioritization of those critics’ personal preferences. In the real world (including the real online world), protection of property is an important value, deserving reciprocity and courtesy (comity) as much as does speech. Indeed, the G20 Digital Economy Ministerial Declaration adopted in April of this year recognizes the importance to the digital economy of promoting security and trust, including through the provision of adequate and effective intellectual property protection. Thus the Declaration expresses the recognition of the G20 that:

[A]pplicable frameworks for privacy and personal data protection, as well as intellectual property rights, have to be respected as they are essential to strengthening confidence and trust in the digital economy.

Moving forward in an interconnected digital universe will require societies to make a series of difficult choices balancing both competing values and competing claims from different jurisdictions. Just as it does in the offline world, navigating this path will require flexibility and a skepticism of (if not outright rejection of) absolutism — including with respect to the application of fundamental values. Even things like freedom of expression, which naturally require a balancing of competing interests, will need to be reexamined. We should endeavor to find the fine line between allowing individual countries to enforce their own national judgments and tolerating the different choices that other countries have made. This will not be easy, as is well illustrated by something that Alice Marwick wrote earlier this year:

But a commitment to freedom of speech above all else presumes an idealistic version of the internet that no longer exists. And as long as we consider any content moderation to be censorship, minority voices will continue to be drowned out by their aggressive majority counterparts.

* * *

We need to move beyond this simplistic binary of free speech/censorship online. That is just as true for libertarian-leaning technologists as it is neo-Nazi provocateurs…. Aggressive online speech, whether practiced in the profanity and pornography-laced environment of 4Chan or the loftier venues of newspaper comments sections, positions sexism, racism, and anti-Semitism (and so forth) as issues of freedom of expression rather than structural oppression.

Perhaps we might want to look at countries like Canada and the United Kingdom, which take a different approach to free speech than does the United States. These countries recognize that unlimited free speech can lead to aggression and other tactics which end up silencing the speech of minorities — in other words, the tyranny of the majority. Creating online communities where all groups can speak may mean scaling back on some of the idealism of the early internet in favor of pragmatism. But recognizing this complexity is an absolutely necessary first step.

While I (and the Canadian Supreme Court, for that matter) share EFF’s unease over the scope of extraterritorial judgments, I fundamentally disagree with EFF that the Equustek decision “largely sidesteps the question of whether such a global order would violate foreign law or intrude on Internet users’ free speech rights.”

In fact, it is EFF’s position that comes much closer to indifference to the laws and values of other countries; in essence, EFF’s position would always prioritize the particular speech values adopted in the US, regardless of whether they had been adopted by the countries affected in a dispute. It is therefore inconsistent with the true nature of comity.

Absolutism and exceptionalism will not be a sound foundation for achieving global consensus and the effective operation of law. As stated by the Canadian Supreme Court in Equustek, courts should enforce the law — whatever the law is — to the extent that such enforcement does not substantially undermine the core sensitivities or values of nations where the order will have effect.

EFF ignores the process in which the Court engaged precisely because EFF — not another country, but EFF — doesn’t find the enforcement of intellectual property rights to be compelling. But that unprincipled approach would naturally lead in a different direction where the court sought to protect a value that EFF does care about. Such a position arbitrarily elevates EFF’s idiosyncratic preferences. That is simply not a viable basis for constructing good global Internet governance.

If the Internet is both everywhere and nowhere, our responses must reflect that reality, and be based on the technology-neutral application of laws, not the abdication of responsibility premised upon an outdated theory of tech exceptionalism under which cyberspace is free from the application of the laws of sovereign nations. That is not the path to either freedom or prosperity.

To realize the economic and social potential of the Internet, we must be guided by both a determination to meaningfully address harms, and a sober reservation about interfering in the affairs of other states. The Supreme Court of Canada’s decision in Google v. Equustek has planted a flag in this space. It serves no one to pretend that the Court decided that a country has the unfettered right to censor the Internet. That’s not what it held — and we should be grateful for that. To suggest otherwise may indeed be self-fulfilling.

It appears that the White House’s zeal for progressive-era legal theory has … progressed (or regressed?) further. Late last week President Obama signed an Executive Order that nominally claims to direct executive agencies (and “strongly encourages” independent agencies) to adopt “pro-competitive” policies. It’s called Steps to Increase Competition and Better Inform Consumers and Workers to Support Continued Growth of the American Economy, and was produced alongside an issue brief from the Council of Economic Advisers titled Benefits of Competition and Indicators of Market Power.

TL;DR version: the Order and its brief do not appear so much aimed at protecting consumers or competition as they are at providing justification for favored regulatory adventures.

In truth, it’s not exactly clear what problem the President is trying to solve. And there is language in both the Order and the brief that could be interpreted in a positive light, and, likewise, language that could be more of a shot across the bow of “unruly” corporate citizens who have not gotten in line with the President’s agenda. Most of the Order and the corresponding CEA brief read as a rote recital of basic antitrust principles: price fixing bad, collusion bad, competition good. That said, there were two items in the Order that particularly stood out.

The (Maybe) Good

Section 2 of the Order states that

Executive departments … with authorities that could be used to enhance competition (agencies) shall … use those authorities to promote competition, arm consumers and workers with the information they need to make informed choices, and eliminate regulations that restrict competition without corresponding benefits to the American public. (emphasis added)

Obviously this is music to the ears of anyone who has thought that agencies should be required to do a basic economic analysis before undertaking brave voyages of regulatory adventure. And this is what the Supreme Court was getting at in Michigan v. EPA when it examined the meaning of the phrase “appropriate” in connection with environmental regulations:

One would not say that it is even rational, never mind “appropriate,” to impose billions of dollars in economic costs in return for a few dollars in health or environmental benefits.

Thus, if this Order follows the direction of Michigan v. EPA, and it becomes the standard for agencies to conduct cost-benefit analyses before issuing regulation (and to review old regulations through such an analysis), then wonderful! Moreover, this mandate to agencies to reduce regulations that restrict competition could lead to an unexpected reformation of a variety of regulations – even outside of the agencies themselves. For instance, the FTC is laudable in its ongoing efforts both to correct anticompetitive state licensing laws and to resist state-protected incumbents, such as taxi-cab companies.

Still, I have trouble believing that the President — and this goes for any president, really, regardless of party — would truly intend for agencies under his control to actually cede regulatory ground when a little thing like economic reality points in a different direction than official policy. After all, there was ample information available that the Title II requirements on broadband providers would be both costly and result in reduced capital expenditures, and the White House nonetheless encouraged the FCC to go ahead with reclassification.

And this isn’t the first time that the President has directed agencies to perform retrospective review of regulation (see the Identifying and Reducing Regulatory Burdens Order of 2012). To date, however, there appears to be little evidence that the burdens of the regulatory state have lessened. Last year set a record for the page count of the Federal Register (80k+ pages), and the data suggest that the cost of the regulatory state is only increasing. Thus, despite the pleasant noises the Order makes with regard to imposing economic discipline on agencies – and despite the good example Canada has set for us in this regard – I am not optimistic about the actual result.

And the (maybe) good builds an important bridge to the (probably) bad of the Order. It is well and good to direct agencies to engage in economic calculation when they write and administer regulations, but such calculation must be in earnest, and must be directed by the learning that was hard earned over the course of the development of antitrust jurisprudence in the US. As Geoffrey Manne and Josh Wright have noted:

Without a serious methodological commitment to economic science, the incorporation of economics into antitrust is merely a façade, allowing regulators and judges to select whichever economic model fits their earlier beliefs or policy preferences rather than the model that best fits the real‐world data. Still, economic theory remains essential to antitrust law. Economic analysis constrains and harnesses antitrust law so that it protects consumers rather than competitors.

Unfortunately, the brief does not indicate that it is interested in more than a façade of economic rigor. For instance, it relies on the outmoded 50-firm revenue concentration numbers gathered by the Census Bureau to support the proposition that the industries themselves are highly concentrated and, therefore, anticompetitive. But, it’s been fairly well understood since the 1970s that concentration says nothing directly about monopoly power and its exercise. In fact, concentration can often be seen as an indicator of superior efficiency that results in better outcomes for consumers (depending on the industry).

The (Probably) Bad

Apart from general concerns (such as having a host of federal agencies with no antitrust expertise now engaging in competition turf wars), there is one specific area that could have a dramatically bad result for long-term policy, and that moreover reflects either ignorance of or willful blindness to antitrust jurisprudence. Specifically, the Order directs agencies to

identify specific actions that they can take in their areas of responsibility to build upon efforts to detect abuses such as price fixing, anticompetitive behavior in labor and other input markets, exclusionary conduct, and blocking access to critical resources that are needed for competitive entry. (emphasis added).

It then goes on to say that

agencies shall submit … an initial list of … any specific practices, such as blocking access to critical resources, that potentially restrict meaningful consumer or worker choice or unduly stifle new market entrants (emphasis added)

The generally uncontroversial language regarding price fixing and exclusionary conduct consists of bromides – after all, as the Order notes, we already have the FTC and DOJ very actively policing this sort of conduct. What’s novel here, however, is that the highlighted language above seems to amount to a mandate to executive agencies (and a strong suggestion to independent agencies) that they begin to seek out “essential facilities” within their regulated industries.

But “critical resources … needed for competitive entry” could mean nearly anything, depending on how you define competition and relevant markets. And asking non-antitrust agencies to integrate one of the more esoteric (and controversial) parts of antitrust law into their mission is going to be a recipe for disaster.

In fact, this may be one of the reasons why the Supreme Court declined to recognize the essential facilities doctrine as a distinct rule in Trinko, where it instead characterized the exclusionary conduct in Aspen Skiing as ‘at or near the outer boundary’ of Sherman Act § 2 liability.

In short, the essential facilities doctrine is widely criticized, by pretty much everyone. In their respected treatise, Antitrust Law, Herbert Hovenkamp and Phillip Areeda have said that “the essential facility doctrine is both harmful and unnecessary and should be abandoned”; Michael Boudin has noted that the doctrine is full of “embarrassing weaknesses”; and Gregory Werden has opined that “Courts should reject the doctrine.” One important reason for the broad criticism is that

At bottom, a plaintiff … is saying that the defendant has a valuable facility that it would be difficult to reproduce … But … the fact that the defendant has a highly valued facility is a reason to reject sharing, not to require it, since forced sharing “may lessen the incentive for the monopolist, the rival, or both to invest in those economically beneficial facilities.” (quoting Trinko)

Further, it’s really hard to say when one business is so critical to a particular market that its own internal functions need to be exposed for competitors’ advantage. For instance, is Big Data — which the CEA brief specifically notes as a potential “critical resource” — an essential facility when one company serves so many consumers that it has effectively developed an entire market that it dominates? (In case you are wondering, it’s actually not.) When exactly does a firm so outcompete its rivals that access to its business infrastructure can be seen by regulators as “essential” to competition? And is this just a set-up for punishing success — which hardly promotes competition, innovation or consumer welfare?

And, let’s be honest here, when the CEA is considering Big Data as an essential facility they are at least partially focused on Google and its various search properties. Google is frequently the target for “essentialist” critics who argue, among other things, that Google’s prioritization of its own properties in its own search results violates antitrust rules. The story goes that Google search is so valuable that when Google publishes its own shopping results ahead of its various competitors, it is engaging in anticompetitive conduct. But this is a terribly myopic view of what the choices are for search services because, as Geoffrey Manne has so ably noted before, “competitors denied access to the top few search results at Google’s site are still able to advertise their existence and attract users through a wide range of other advertising outlets[.]”

Moreover, as more and more users migrate to specialized apps on their mobile devices for a variety of content, Google’s desktop search becomes just one choice among many for finding information. All of this leaves to one side, of course, the fact that for some categories, Google has incredibly stiff competition.

Thus it is that

to the extent that inclusion in Google search results is about “Stiglerian” search-cost reduction for websites (and it can hardly be anything else), the range of alternate facilities for this function is nearly limitless.

The troubling thing here is that, given the breezy analysis of the Order and the CEA brief, I don’t think the White House is really considering the long-term legal and economic implications of its command; the Order appears to be much more about political support for favored agency actions already under way.

Indeed, despite the length of the CEA brief and the variety of antitrust principles recited in the Order itself, an accompanying release points to what is really going on (at least in part). The White House, along with the FCC, seems to think that the embedded streams in a cable or satellite broadcast should be considered a form of essential facility that is an indispensable component of video consumers’ choice (which is laughable given the magnitude of choice in video consumption options that consumers enjoy today).

And, to the extent that courts might apply the (controversial) essential facilities doctrine, an “indispensable requirement … is the unavailability of access to the ‘essential facilities’[.]” This is clearly not the case with much of what the CEA brief points to as examples of ostensibly laudable pro-competitive regulation.

The doctrine wouldn’t apply, for instance, to the FCC’s Open Internet Order since edge providers have access to customers over networks, even where network providers want to zero-rate, employ usage-based billing or otherwise negotiate connection fees and prioritization. And it also doesn’t apply to the set-top box kerfuffle; while third parties aren’t able to access the video streams that make up a cable broadcast, the market for consuming those streams is a single part of the entire video ecosystem. What really matters there is access to viewers, and the ability to provide services to consumers and compete for their business.

Yet, according to the White House, “the set-top box is the mascot” for the administration’s competition Order, because, apparently, cable boxes represent “what happens when you don’t have the choice to go elsewhere.” (“Elsewhere” to the White House, I assume, cannot include Roku, Apple TV, Hulu, Netflix, and a myriad of other video options that consumers can currently choose among.)

The set-top box is, according to the White House, a prime example of the problem that

[a]cross our economy, too many consumers are dealing with inferior or overpriced products, too many workers aren’t getting the wage increases they deserve, too many entrepreneurs and small businesses are getting squeezed out unfairly by their bigger competitors, and overall we are not seeing the level of innovative growth we would like to see.

This is, of course, nonsense. Consumers enjoy an incredible array of low-cost, high-quality goods (including video options) – far more than at any point in history.  After all:

From cable to Netflix to Roku boxes to Apple TV to Amazon FireStick, we have more ways to find and watch TV than ever — and we can do so in our living rooms, on our phones and tablets, and on seat-back screens at 30,000 feet. Oddly enough, FCC Chairman Tom Wheeler … agrees: “American consumers enjoy unprecedented choice in how they view entertainment, news and sports programming. You can pretty much watch what you want, where you want, when you want.”

Thus, I suspect that the White House has its eye on a broader regulatory agenda.

For instance, the Department of Labor recently announced that it would be extending its reach in the financial services industry by changing the standard for when financial advice might give rise to a fiduciary relationship under ERISA. It seems obvious that the SEC or FINRA could have taken up the slack for any financial services regulatory issues – it’s certainly within their respective wheelhouses. But that’s not the direction the administration took, possibly because SEC and FINRA are independent agencies. Thus, the DOL – an agency with substantially less financial and consumer protection experience than either the SEC or FINRA — has expansive new authority.

And that’s where more of the language in the Order comes into focus. It directs agencies to “ensur[e] that consumers and workers have access to the information needed to make informed choices[.]” The text of the DOL rule develops for itself a basis in competition law as well:

The current proposal’s defined boundaries between fiduciary advice, education, and sales activity directed at large plans, may bring greater clarity to the IRA and plan services markets. Innovation in new advice business models, including technology-driven models, may be accelerated, and nudged away from conflicts and toward transparency, thereby promoting healthy competition in the fiduciary advice market.

Thus, it’s hard to see what the White House is doing in the Order, other than laying the groundwork for expansive authority of non-independent executive agencies under the thin guise of promoting competition. Perhaps the President believes that couching this expansion in free-market terms (i.e., that it’s “pro-competition”) will somehow help the initiatives go through with minimal friction. But there is nothing in the Order or the CEA brief to provide any confidence that competition will, in fact, be promoted. And in the end I have trouble seeing how this sort of regulatory adventurism does not run afoul of separation of powers issues, as well as assorted other legal challenges.

Finally, conjuring up a regulatory version of the essential facilities doctrine as a support for this expansion is simply a terrible idea — one that smacks much more of industrial policy than of sound regulatory reform or consumer protection.

Recent years have seen an increasing interest in incorporating privacy into antitrust analysis. The FTC and regulators in Europe have rejected these calls so far, but certain scholars and activists continue their attempts to breathe life into this novel concept. Elsewhere we have written at length on the scholarship addressing the issue and found the case for incorporation wanting. Among the errors proponents make is a persistent (and woefully unsubstantiated) assertion that online data can amount to a barrier to entry, insulating incumbent services from competition and ensuring that only the largest providers thrive. This data barrier to entry, it is alleged, can then allow firms with monopoly power to harm consumers, either directly through “bad acts” like price discrimination, or indirectly by raising the costs of advertising, which then get passed on to consumers.

A case in point was on display at last week’s George Mason Law & Economics Center Briefing on Big Data, Privacy, and Antitrust. Building on their growing body of advocacy work, Nathan Newman and Allen Grunes argued that this hypothesized data barrier to entry actually exists, and that it prevents effective competition from search engines and social networks that are interested in offering services with heightened privacy protections.

According to Newman and Grunes, network effects and economies of scale ensure that dominant companies in search and social networking (they specifically named Google and Facebook — implying that they are in separate markets) operate without effective competition. This results in antitrust harm, they assert, because it precludes competition on the non-price factor of privacy protection.

In other words, according to Newman and Grunes, even though Google and Facebook offer their services for a price of $0 and constantly innovate and upgrade their products, consumers are nevertheless harmed because the business models of less-privacy-invasive alternatives are foreclosed by insufficient access to data (an almost self-contradicting and silly narrative for many reasons, including the big question of whether consumers prefer greater privacy protection to free stuff). Without access to, and use of, copious amounts of data, Newman and Grunes argue, the algorithms underlying search and targeted advertising are necessarily less effective and thus the search product without such access is less useful to consumers. And even more importantly to Newman, the value to advertisers of the resulting consumer profiles is diminished.

Newman has put forth a number of other possible antitrust harms that purportedly result from this alleged data barrier to entry, as well. Among these is the increased cost of advertising to those who wish to reach consumers. Presumably this would harm end users who have to pay more for goods and services because the costs of advertising are passed on to them. On top of that, Newman argues that ad networks inherently facilitate price discrimination, an outcome that he asserts amounts to antitrust harm.

FTC Commissioner Maureen Ohlhausen (who also spoke at the George Mason event) recently made the case that antitrust law is not well-suited to handling privacy problems. She argues — convincingly — that competition policy and consumer protection should be kept separate to preserve doctrinal stability. Antitrust law deals with harms to competition through the lens of economic analysis. Consumer protection law is tailored to deal with broader societal harms and aims at protecting the “sanctity” of consumer transactions. Antitrust law can, in theory, deal with privacy as a non-price factor of competition, but this is an uneasy fit because of the difficulties of balancing quality over two dimensions: Privacy may be something some consumers want, but others would prefer a better algorithm for search and social networks, and targeted ads with free content, for instance.

In fact, there is general agreement with Commissioner Ohlhausen on her basic points, even among critics like Newman and Grunes. But, as mentioned above, views diverge over whether there are some privacy harms that should nevertheless factor into competition analysis, and on whether there is in fact a data barrier to entry that makes these harms possible.

As we explain below, however, the notion of data as an antitrust-relevant barrier to entry is simply a myth. And, because all of the theories of “privacy as an antitrust harm” are essentially predicated on this, they are meritless.

First, data is useful to all industries — this is not some new phenomenon particular to online companies

It bears repeating (because critics seem to forget it in their rush to embrace “online exceptionalism”) that offline retailers also receive substantial benefit from, and greatly benefit consumers by, knowing more about what consumers want and when they want it. Through devices like coupons and loyalty cards (to say nothing of targeted mailing lists and the age-old practice of data mining check-out receipts), brick-and-mortar retailers can track purchase data and better serve consumers. Not only do consumers receive better deals for using them, but retailers know what products to stock and advertise and when and on what products to run sales. For instance:

  • Macy’s analyzes tens of millions of terabytes of data every day to gain insights from social media and store transactions. Over the past three years, the use of big data analytics alone has helped Macy’s boost its revenue growth by 4 percent annually.
  • Following its acquisition of Kosmix in 2011, Walmart established @WalmartLabs, which created its own product search engine for online shoppers. In the first year of its use alone, the number of customers buying a product on Walmart.com after researching a purchase increased by 20 percent. According to Ron Bensen, the vice president of engineering at @WalmartLabs, the combination of in-store and online data could give brick-and-mortar retailers like Walmart an advantage over strictly online stores.
  • Panera and a whole host of restaurants, grocery stores, drug stores and retailers use loyalty cards to advertise and learn about consumer preferences.

And of course there is a host of other uses for data, as well, including security, fraud prevention, product optimization, risk reduction for the insured, knowing what content is most interesting to readers, etc. The importance of data stretches far beyond the online world, and far beyond mere retail uses more generally. To describe even online giants like Amazon, Apple, Microsoft, Facebook and Google as having a monopoly on data is silly.

Second, it’s not the amount of data that leads to success but building a better mousetrap

The value of knowing someone’s birthday, for example, is not in that tidbit itself, but in the fact that you know this is a good day to give that person a present. Most of the data that supports the advertising networks underlying the Internet ecosphere is of this sort: Information is important to companies because of the value that can be drawn from it, not for the inherent value of the data itself. Companies don’t collect information about you to stalk you, but to better provide goods and services to you.

Moreover, data itself is not only less important than what can be drawn from it, but data is also less important than the underlying product it informs. For instance, Snapchat created a challenger to Facebook so successfully (and in such a short time) that Facebook attempted to buy it for $3 billion (Google offered $4 billion). But Facebook’s interest in Snapchat wasn’t about its data. Instead, Snapchat was valuable — and a competitive challenge to Facebook — because it cleverly incorporated the (apparently novel) insight that many people wanted to share information in a more private way.

Relatedly, Twitter, Instagram, LinkedIn, Yelp, Pinterest (and Facebook itself) all started with little (or no) data, and they have had a lot of success. Meanwhile, despite its supposed data advantages, Google’s attempts at social networking — Google+ — have never caught up to Facebook in terms of popularity with users (and thus not with advertisers, either). And scrappy social network Ello is starting to build a significant base without data collection for advertising at all.

At the same time it’s simply not the case that the alleged data giants — the ones supposedly insulating themselves behind data barriers to entry — actually have the type of data most relevant to startups anyway. As Andres Lerner has argued, if you wanted to start a travel business, the data from Kayak or Priceline would be far more relevant. Or if you wanted to start a ride-sharing business, data from cab companies would be more useful than the broad, market-cross-cutting profiles Google and Facebook have. Consider companies like Uber, Lyft and Sidecar that had no customer data when they began to challenge established cab companies that did possess such data. If data were really so significant, they could never have competed successfully. But Uber, Lyft and Sidecar have been able to effectively compete because they built products that users wanted to use — they came up with an idea for a better mousetrap. The data they have accrued came after they innovated, entered the market and mounted their successful challenges — not before.

In reality, those who complain about data facilitating unassailable competitive advantages have it exactly backwards. Companies need to innovate to attract consumer data, otherwise consumers will switch to competitors (including both new entrants and established incumbents). As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results: The continued explosion of new products, services and other apps is evidence that data is not a bottleneck to competition but a spur to drive it.

Third, competition online is one click or thumb swipe away; that is, barriers to entry and switching costs are low

Somehow, in the face of alleged data barriers to entry, competition online continues to soar, with newcomers constantly emerging and triumphing. This suggests that the barriers to entry are not so high as to prevent robust competition.

Again, despite the supposed data-based monopolies of Facebook, Google, Amazon, Apple and others, powerful competitors exist in the markets in which these companies compete:

  • If consumers want to make a purchase, they are more likely to do their research on Amazon than Google.
  • Google flight search has failed to seriously challenge — let alone displace —  its competitors, as critics feared. Kayak, Expedia and the like remain the most prominent travel search sites — despite Google having literally purchased ITA’s trove of flight data and data-processing acumen.
  • People looking for local reviews go to Yelp and TripAdvisor (and, increasingly, Facebook) as often as Google.
  • Pinterest, one of the most highly valued startups today, is now a serious challenger to traditional search engines when people want to discover new products.
  • With its recent acquisition of the shopping search engine TheFind, and its test run of a “buy” button, Facebook is also gearing up to become a major competitor in the realm of e-commerce, challenging Amazon.
  • Likewise, Amazon recently launched its own ad network, “Amazon Sponsored Links,” to challenge other advertising players.

Even assuming for the sake of argument that data creates a barrier to entry, there is little evidence that consumers cannot easily switch to a competitor. While there are sometimes network effects online, as with social networking, history still shows that people will switch. MySpace was considered a dominant network until it made a series of bad business decisions and everyone ended up on Facebook instead. Similarly, Internet users can and do use Bing, DuckDuckGo, Yahoo, and a plethora of more specialized search engines on top of and instead of Google. And don’t forget that Google itself was once an upstart new entrant that replaced once-household names like Yahoo and AltaVista.

Fourth, access to data is not exclusive

Critics like Newman have compared Google to Standard Oil and argued that government authorities need to step in to limit Google’s control over data. But to say data is like oil is a false analogy. If Exxon drills and extracts oil from the ground, that oil is no longer available to BP. Data is not finite in the same way. To use an earlier example, Google knowing my birthday doesn’t limit the ability of Facebook to know my birthday, as well. While databases may be proprietary, the underlying data is not. And what matters more than the data itself is how well it is analyzed.

This is especially important when discussing data online, where multi-homing is ubiquitous, meaning many competitors end up voluntarily sharing access to data. For instance, I can use the friend-finder feature on WordPress to find Facebook friends, Google connections, and people I’m following on Twitter who also use the site for blogging. Using this feature allows WordPress to access your contact list on these major online players.

[Image: the WordPress Friend-Finder feature]

Further, it is not apparent that Google’s competitors have less data available to them. Microsoft, for instance, has admitted that it may actually have more data. And, importantly for this discussion, Microsoft may have actually garnered some of its data for Bing from Google.

If Google has a high cost per click, then perhaps it’s because it is worth it to advertisers: There are more eyes on Google because of its superior search product. Contra Newman and Grunes, Google may just be more popular for consumers and advertisers alike because the algorithm makes it more useful, not because it has more data than everyone else.

Fifth, the data barrier to entry argument does not have workable antitrust remedies

The misguided logic of data barrier to entry arguments leaves a lot of questions unanswered. Perhaps most important among these is the question of remedies. What remedy would apply to a company found guilty of leveraging its market power with data?

It’s actually quite difficult to conceive of a practical means for a competition authority to craft remedies that would address the stated concerns without imposing enormous social costs. In the unilateral conduct context, the most obvious remedy would involve the forced sharing of data.

On the one hand, as we’ve noted, it’s not clear this would actually accomplish much. If competitors can’t actually make good use of data, simply having more of it isn’t going to change things. At the same time, such a result would reduce the incentive to build data networks to begin with. In their startup stage, companies like Uber and Facebook required several months and hundreds of thousands, if not millions, of dollars to design and develop just the first iteration of the products consumers love. Would any of them have done it if they had to share their insights? In fact, it may well be that access to these free insights is what competitors actually want; it’s not the data they’re lacking, but the vision or engineering acumen to use it.

Other remedies limiting the collection and use of data are not only outside the normal scope of antitrust remedies; they would also involve extremely costly court supervision and may entail problematic “collisions between new technologies and privacy rights,” as last year’s White House Report on Big Data and Privacy put it.

It is equally unclear what an antitrust enforcer could do in the merger context. As Commissioner Ohlhausen has argued, blocking specific transactions does not necessarily stop data transfer or promote privacy interests. Parties could simply house data in a standalone entity and enter into licensing arrangements. And conditioning transactions with forced data sharing requirements would lead to the same problems described above.

If antitrust doesn’t provide a remedy, then it is not clear why it should apply at all. The absence of workable remedies is in fact a strong indication that data and privacy issues are not suitable for antitrust. Instead, such concerns would be better dealt with under consumer protection law or by targeted legislation.

The Internet ecosphere relies on data. Information about browsing, purchases and Internet history (among other things) can be very useful for companies that want to reach consumers efficiently. In exchange for giving up some information about their online behavior, consumers enjoy many websites, apps, and other content available on the Internet for free. They also get tailored recommendations when using shopping services like Amazon and eBay, better results when using search engines like Google and Bing, and more relevant advertisements from nearly all websites that rely on ads for revenue.

Things like search, email, cloud services, social networks, blogs, video, and an enormous range of other content aren’t produced and maintained at zero cost. But Internet users can access almost all of them for free because much of the Internet ecosphere is set up as a two-sided market: Advertisers are brought together with consumers, who get to use online services at no direct cost to them, financed by advertising.

Additionally, data from connected devices are now powering whole new industries of innovative smart products for consumers.  The data from these devices, as well as consumers’ interactions with mobile and traditional Internet applications, are also powering incredible new data-driven insights that benefit not just companies and consumers, but also society at large with new potential answers for some of society’s most difficult problems.

Despite the manifest benefits of this free flow of data, some critics have reasonable concerns about the possible misuse of data, while others see tracking itself as a violation of an asserted right to privacy.

To the extent that they exist, many privacy harms online are currently dealt with by the marketplace itself, bolstered by the Federal Trade Commission under its Section 5 authority as well as state oversight. But some privacy advocates don’t think the FTC or the marketplace have gone far enough, and have pressured Congress to do more. Unfortunately, most (if not all) of these proposals refuse to recognize the successes of the current regime, misunderstand (or perhaps misconstrue) what is involved in data analysis and tracking, overstate the importance of privacy to the average Internet user, and ignore the trade-offs inherent in expanding data regulation.

The Obama Administration’s recently released proposed privacy bill is firmly rooted in this camp. At its core it perpetuates the fantasy that the few consumers who evidence significant concerns about privacy are the norm, and that they irrationally fail to demand it in the marketplace — to such an extent and with such damage to themselves that government must step in (more so than it already does).

But the sorts of alleged problems most directly targeted by the proposed bill simply aren’t substantial problems — or even “problems” at all. Data used by researchers, advertisers and other online entities is already mostly anonymous, and risks of “re-identification” of anonymized data are systematically overstated. In fact, advertisers (to say nothing of health-care and social-science researchers) care less about individual identities than they do consumption patterns and aggregated, broad-based profiles.

Meanwhile the benefits of data analysis are systematically under-appreciated — particularly online, where most consumers likely benefit far more from the current opt-out regime for data tracking than they would from the dramatically expanded control regime outlined in the White House’s proposed bill.

In short, all of this hand-wringing over privacy is largely a tempest in a teapot — especially when one considers the extent to which the White House and other government bodies have studiously ignored the real threat: government misuse of data à la the NSA. It’s almost as if the White House is deliberately shifting the public’s gaze from the reality of extensive government spying by directing it toward a fantasy world of nefarious corporations abusing private information….

The White House’s proposed bill is emblematic of many government “fixes” to largely non-existent privacy issues, and it exhibits the same core defects that undermine both its claims and its proposed solutions.  As a result, the proposed bill vastly overemphasizes regulation to the dangerous detriment of the innovative benefits of Big Data for consumers and society at large.

Absence of economic or cost-benefit analysis

First, and most fundamentally, the Administration’s proposed bill lacks any meaningful cost-benefit analysis, focusing myopically on the alleged costs of data collection and use without considering the business benefits. Even this framing is overly generous to the bill because the alleged “costs” of big data analytics are in reality benefits to both businesses and consumers. The findings section of the proposal obliquely references these benefits by saying the rules are aimed at

supporting flexibility and the free flow of information, [and] will promote continued innovation and economic growth in the networked economy.

But nowhere do the proposed rules connect even these benefits to consumers.

The lack of a rigorous cost-benefit analysis has become all-too-common, even at the FTC, the agency that would be charged with enforcing the proposed rules. FTC Commissioner Josh Wright’s dissent in the Commission’s Section 5 “unfairness” action against Apple emphasized this lack of cost-benefit analysis:

The harm from Apple’s disclosure policy is limited to users that actually make unauthorized purchases. However, the potential benefits from Apple’s disclosure choices are available to the entire set of iDevice users because these are the consumers capable of purchasing apps and making in-app purchases. The disparity in the relative magnitudes of these universes of potential harms and benefits suggests, at a minimum, that further analysis is required before the Commission can conclude that it has satisfied its burden of demonstrating that any consumer injury arising from Apple’s allegedly unfair acts or practices exceeds the countervailing benefits to consumers and competition.

Similarly, the proposed bill fails to compare the magnitude of the supposed harm befalling a small cadre of privacy-sensitive consumers (who have not otherwise protected themselves by using marketplace tools like track-blockers or the opt-out options provided by major ad networks and data brokers) to the benefits received by the majority who are less privacy-sensitive.

Failure to consider consumer benefits

One of the hallmarks of the Internet ecosphere has been the diversity of business models designed to enable users to obtain information and services for free once they purchase access from an ISP. This access will likely diminish if content providers are less able to rely on data analytics to help finance and improve their products.

Similarly, because the proposed bill ignores business reality in its largely opt-in approach to privacy (as discussed below), it is insensitive to the deterrent effect on innovation and experimentation. Moreover, the proposed bill does not require the FTC to conduct any such weighing of benefits against harms in implementing the proposed rules.

If companies must seek affirmative consent from users for every new service or for every new use of data that the FTC might deem “unreasonable in light of context” (which is vaguely defined in the proposed bill and, if current practice is any guide, will remain largely undefined by the FTC), the experimentation with new business models (and new uses of data) that lies at the heart of today’s Internet will be imperiled. Denying these benefits — essentially, curtailing the ongoing evolution of online products and, now, connected devices — to consumers would cost them dearly. And yet nothing in the proposed language suggests any meaningful recognition that such lost consumer benefits should be accounted for in assessing the propriety of data-use practices.

It’s possible that the privacy-sensitive among us might be willing to pay for ad-free (and other non-tracking) versions of today’s apps online, and/or bear the cost of finding and using ad- and cookie-blockers. But most people prefer to access apps and content for free, and don’t care much about privacy so long as the personal data they provide is secure and they get something of value in return.

But through its definitions of “personal data” and “de-identified data,” the proposed legislation would likely raise the price (or lower the amount) of content available — typically for free — in the online marketplace. In addition, innovation in the nascent Internet of Things space surely would be stifled, as the proposed bill’s personal data restrictions apply to devices as well.  Persistent identifiers like IP addresses or device numbers, or any other ID that is connected to a device — even if not to the identity of an actual human being — count as personal data.

In a world without transaction costs, it wouldn’t matter if we chose an opt-out or opt-in regime for online advertising: In either situation, the bargain struck between advertisers, content providers and users would result in the “right” level of sharing and using of behavioral data. But, in reality, there are transaction costs.

For example, consumers will face more pervasive notice screens that degrade their experience. Even more significantly, failing to recognize that they must “opt in” to data use would leave them excluded from the benefits of personalization and free content. Changing the default to opt-in (or its equivalent via heightened control and transparency requirements) will have real costs for the vast majority of consumers who are less privacy-sensitive than the hypothetical consumer conjured by the proposed bill.

Without any economic analysis to determine whether the number of consumers harmed (and the magnitude of that harm) outweighs the number who benefit from such a change (and the magnitude of those benefits), it makes no sense to tout the legislation as unambiguously pro-consumer. And if it is true (as the weight of evidence strongly suggests) that most consumers are not as privacy-sensitive as they are hungry for data-enabled access to Internet offerings, the legislation can only be harmful on net.
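To make the point concrete, here is a minimal back-of-envelope sketch, in Python, of the kind of weighing the bill never requires anyone to perform. Every number in it is a hypothetical placeholder chosen purely for illustration (not an empirical estimate), and the simple two-quantity model (harm avoided for privacy-sensitive users versus personalization and free-content value lost by everyone else) is my own simplification, not anything found in the proposed bill:

```python
# Back-of-envelope comparison of opt-out vs. opt-in defaults.
# All figures are hypothetical placeholders, not empirical estimates.

users = 100_000_000              # total users of a free, ad-supported service
privacy_sensitive_share = 0.05   # assumed fraction who genuinely value non-tracking
harm_per_sensitive_user = 5.0    # assumed $/yr harm to a sensitive user under opt-out
benefit_per_user = 30.0          # assumed $/yr value of personalization + free content
optin_takeup = 0.60              # assumed share who bother to opt in if the default flips

# Status quo (opt-out): privacy-sensitive users bear a cost unless they opt out;
# assume half of them never get around to it.
harm_opt_out = users * privacy_sensitive_share * 0.5 * harm_per_sensitive_user

# Opt-in regime: only those who affirmatively consent keep the benefits;
# everyone else loses the personalization and free-content value.
benefit_lost_opt_in = users * (1 - optin_takeup) * benefit_per_user

print(f"Harm avoided by switching to opt-in:  ${harm_opt_out:,.0f}/yr")
print(f"Consumer benefit lost under opt-in:   ${benefit_lost_opt_in:,.0f}/yr")
print("Net effect of opt-in:",
      "positive" if harm_opt_out > benefit_lost_opt_in else "negative")
```

Under these placeholder assumptions the foregone benefits dwarf the harms avoided; different assumptions could flip the comparison. The point is not the particular numbers but that the proposed bill never obliges the FTC (or anyone else) to run even this crude a calculation before changing the default.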

Inconsistency with business realities

Until now, the default assumption of privacy protection enshrined in law has been that most restrictions should fall on the use of information, rather than on its collection. In part this stems from the ubiquity of online tracking, the high costs of opt-in and the many benefits that flow from the vast majority of data uses.

Most current law has been crafted to deal directly with the few specific harms that could arise. But the White House’s new proposed rules may shift that balance by restricting the unauthorized collection of data regardless of use (with a few trivial exceptions), thereby prohibiting beneficial as well as detrimental uses. And one thing the proposed rules will clearly do is deter some beneficial uses by increasing the costs of data use across the board.

Further, in completely ignoring algorithms and innovative combinations of data, the bill disregards critical business realities. It has never been the mere collection of data that mattered, nor even the simple agglomeration of lots of data; it’s always been the way data collections are put together and analyzed that has yielded valuable insights. But the focus of the proposed White House bill remains steadfastly on consent for the collection and use of data writ large, without nuanced consideration of the way the market actually employs data.

In other words, the bill fails to recognize the world as it is, and instead brings a blunt “solution” to bear on a complex and nuanced market — all in the name of reducing what it sees as privacy harms, even where they may not exist.

Among other things the bill relies heavily on regulation through Privacy Review Boards (PRBs) — or, as we like to call them, “innovation death panels.” These PRBs would operate under authority of the FTC and would be subject to the bill’s prescriptions regarding the FTC process for granting PRB approval (and ongoing authorization). The bill asserts that sign-off on privacy practices by these boards, once they are given the FTC’s imprimatur, will permit a company’s data privacy practices to avoid regulation under the bill’s “heightened” standards when its practices are “not reasonable in light of context.”

There are several problems with the way the proposed bill handles these rules, but we want to point out just the most salient here: While multi-stakeholder processes could be a good way to build bottom-up law on privacy, the bill’s proposed approach effectively ensures that the PRBs approved by the FTC will operate with review standards that squelch innovation.

The proposed bill requires the FTC to consider a lengthy set of factors in determining whether a PRB is good enough, including:

  • the range of evaluation processes suitable for the privacy risks posed by various types of personal data;
  • the costs and benefits of levels of independence and expertise [of the PRB];
  • the importance of mitigating privacy risks;
  • the importance of expedient determinations; and
  • whether differing requirements are appropriate for Boards that are internal or external to covered entities.

While these parameters may ensure that the approved PRBs demonstrate a strong regard for protecting privacy, only two of the enumerated factors even arguably direct the FTC to consider the cost to businesses or consumers:

  • the range of evaluation processes suitable for covered entities of various sizes, experiences, and resources; and
  • the costs and benefits of levels of transparency and confidentiality.

In other words, the bill’s short-sighted focus on protecting privacy requires the FTC to condition PRB approval on how well the PRBs take account of alleged privacy concerns, not on how well the PRBs tailor their reviews to relevant businesses and markets — and without regard to whether they engender efficient or appropriate privacy practices.

True, there is some marginal concern for cost-benefit tradeoffs built into the proposed legislation — but even what little there is would almost certainly have limited effectiveness.

One section of the proposed bill, Section 103(c), does seem to encourage PRBs to use cost-benefit analysis and perhaps even to forbear from applying heightened transparency and control requirements to certain uses of data:

[A] covered entity [need not] provide heightened transparency and individual control when [it] analyzes personal data in a manner that is not reasonable in light of context if such analysis is supervised by a [PRB] approved by the [FTC] and… [t]he [PRB] determines that the likely benefits of the analysis outweigh the likely privacy risks.

But the proposed bill’s primary opt-in requirement is triggered regardless of PRB review whenever a covered entity offers a different service or employs new modes of data analysis. Under this provision, such changes obligate the company to

provide individuals with compensating controls designed to mitigate privacy risks that may arise from the material changes, which may include seeking express affirmative consent from individuals.

Meanwhile, of course, data analysis that is “unreasonable in light of context” must be undertaken under direct supervision of a PRB that is beholden to the FTC and the proposed bill’s stilted criteria for FTC approval.

In short, the cost-benefit provision is deeply flawed, and the proposed language doesn’t seem likely to allow PRBs to approve any conduct that would deviate from the bill’s prescriptions for enhanced consumer control (as interpreted by the FTC).

There is a clear difference between data brokers, major advertising networks, major content providers and your cousin’s blog. And the evolution of any of these with respect to data analysis and use may confer great and unexpected benefits — and do so in widely divergent ways. And yet it is not clear that any of the limited business-related or cost-benefit provisions in the proposed bill actually direct the FTC to consider the characteristics that really affect business uses — and consumer benefits — in enforcing the bill or in enacting rules under it.

Unintended — and lamentable — consequences

Ironically, the White House bill may actually reduce privacy. Insofar as online businesses do not currently link “real” identifying information with more-anonymous device and IP numbers, the bill’s rules appear to require companies to do so in order that customers will have the access and accuracy rights that the bill creates. Further, databases of such linked information may become the proverbial “honey pot” for identity thieves, increasing data security risks.

And, as noted above, the proposed bill would also harm innovation. The proposed rules subject new uses of personal data and new business models to enhanced consumer control, up to and including mandatory opt-in. In some cases the rules would further subject them to supervision and approval by a PRB (or else the threat of FTC enforcement) — even if such uses would actually or presumptively benefit consumers. This can only deter innovation, both by chilling it in the first place, as well as by forcing innovations to fit the PRBs’ prescriptive mold. Meanwhile, of course, the proposed bill will lead to any number of regulatory-driven innovations that do less to serve the desires of consumers than those of bureaucrats.

The biggest harm to innovation will arise not from the “seen” problems (like erroneous rejection of consumer-benefitting uses of data), but rather from the unseen. Perhaps it will be easy enough for consumers to deal with fewer free apps and content, but the real cost to society will be the apps and content that never come into existence because the bill’s provisions deter their creation in the first place.

So much for the permissionless innovation supposedly at the heart of the net neutrality debate into which the White House interjected itself.

The Administration saw fit to promote rules constraining ISPs in order to ensure that tried-and-true, content-provider business models didn’t face impediments from ISPs — but may now force content providers to devise new ways to fund themselves, substantially transforming how the Internet works.

Bastiat could have been talking about this very bill when he said:

There is only one difference between a bad economist and a good one: the bad economist confines himself to the visible effect; the good economist takes into account both the effect that can be seen and those effects that must be foreseen… Yet this difference is tremendous; for it almost always happens that when the immediate consequence is favorable, the later consequences are disastrous, and vice versa. Whence it follows that the bad economist pursues a small present good that will be followed by a great evil to come, while the good economist pursues a great good to come, at the risk of a small present evil.

In short, in a (misguided) attempt to increase privacy in the short run, the White House’s proposed privacy bill ignores the costs to innovation and consumer welfare down the road. And it does so without ever effectively weighing the relative economic costs and benefits of either, or demanding the same from the bill’s enforcers. The bill is simply not a responsible approach to lawmaking.

I did not intend for this to become a series (Part I), but I underestimated the supply of analysis simultaneously invoking “search bias” as an antitrust concept while waving it about untethered from antitrust’s institutional commitment to protecting consumer welfare.  Harvard Business School Professor Ben Edelman offers the latest iteration in this genre.  We’ve criticized his claims regarding search bias and antitrust on precisely these grounds.

For those who have not been following the Google antitrust saga, Google’s critics allege Google’s algorithmic search results “favor” its own services and products over those of rivals in some indefinite, often unspecified, improper manner.  In particular, Professor Edelman and others — including Google’s business rivals — have argued that Google’s “bias” discriminates most harshly against vertical search engine rivals, i.e., rivals offering specialized search services.  In framing the theory that “search bias” can be a form of anticompetitive exclusion, Edelman writes:

Search bias is a mechanism whereby Google can leverage its dominance in search, in order to achieve dominance in other sectors.  So for example, if Google wants to be dominant in restaurant reviews, Google can adjust search results, so whenever you search for restaurants, you get a Google reviews page, instead of a Chowhound or Yelp page. That’s good for Google, but it might not be in users’ best interests, particularly if the other services have better information, since they’ve specialized in exactly this area and have been doing it for years.

I’ve wondered what model of antitrust-relevant conduct Professor Edelman, an economist, has in mind.  It is certainly well known in both the theoretical and empirical antitrust economics literature that “bias” is neither necessary nor sufficient for a theory of consumer harm; further, it is fairly obvious as a matter of economics that vertical integration can be, and typically is, both efficient and pro-consumer.  Still further, the bulk of economic theory and evidence on these sorts of vertical arrangements suggests that they are generally efficient and a normal part of the competitive process that generates consumer benefits.  Vertically integrated firms may “bias” their own content in ways that increase output; the relevant point is that self-promoting incentives in a vertical relationship can be either efficient or anticompetitive depending on the circumstances.  The empirical literature suggests that such relationships are mostly pro-competitive and that restrictions upon firms’ ability to enter them generally reduce consumer welfare.  Edelman is an economist, with a Ph.D. from Harvard no less, and so I find it a bit odd that he has framed the “bias” debate outside of this framework, without regard to consumer welfare, and without reference to any of this literature or perhaps even an awareness of it.  Edelman’s approach appears to be a declaration that a search engine’s placement of its own content, algorithmically or otherwise, constitutes an antitrust harm because it may harm rivals — regardless of the consequences for consumers.  Antitrust observers might liken this view to the antiquated “harm to competitors is harm to competition” approach of antitrust dating back to the 1960s and prior.  That parallel would be accurate.  Edelman’s view is flatly inconsistent with conventional theories of anticompetitive exclusion enforced in modern competition agencies and antitrust courts.

But does Edelman present anything more than just a pre-New Learning-era bias against vertical integration?  I’m beginning to have my doubts.  In an interview in Politico (login required), Professor Edelman offers two quotes that illuminate the search-bias antitrust theory — unfavorably.  Professor Edelman begins with what he describes as a “simple” solution to the search bias problem:

I don’t think it’s out of the question given the complexity of what Google has built and its persistence in entering adjacent, ancillary markets. A much simpler approach, if you like things that are simple, would be to disallow Google from entering these adjacent markets. OK, you want to be dominant in search? Stay out of the vertical business, stay out of content.

The problems here should be obvious.  Yes, a per se prohibition on vertical integration by Google into other economic activities would be quite simple; simple and thoroughly destructive.  The mildly more interesting inquiry is what Edelman proposes Google ought to provide.  May Google, under Edelman’s view of a proper regulatory regime, answer address queries by providing a map?  May Google answer product queries with shopping results?  Is the answer to those questions “yes” if and only if Google serves up someone else’s shopping results or map?  What if consumers prefer Google’s shopping result or map because it is more responsive to the query?  Note once again that Edelman’s answers do not turn on consumer welfare.  His answers are a function of the anticipated impact of Google’s choices to engage in those activities upon rival vertical search engines.  Consumer welfare is not the center of Edelman’s analysis; indeed, it is unclear what role consumer welfare plays in Edelman’s analysis at all.  Edelman simply applies his prior presumption that Google’s conduct, even if it produces real gains for consumers, is or should be actionable as an antitrust claim upon a demonstration that Google’s own services are ranked highly on its own search engine — even if Google-affiliated content is ranked highly by other search engines!  (See Danny Sullivan making that point nicely in this post).  Edelman’s proscription ignores the efficiencies of vertical integration and the benefits to consumers entirely.  It may be possible to articulate a coherent anticompetitive theory involving so-called search bias that could then be tested against the real-world evidence.  Edelman has not done so.

The other quotation from the profile of the “academic wunderkind” that drew my attention was Professor Edelman’s answer to the question “which search engine do you use?”  After explaining that he probably uses Google and Bing in proportion to their market shares, Professor Edelman is quoted as saying:

If your house is on fire and you forgot the number for the fire department, I’d encourage you to use Google. When it counts, if Google is one percent better for one percent of searches and both options are free, you’d be crazy not to use it. But if everyone makes that decision, we head towards a monopoly and all the problems experience reveals when a company controls too much.

By my lights, there is no clearer example of the sacrifice of consumer welfare in Edelman’s approach to analyzing whether and how search engines and their results should be regulated.  Note the core of Professor Edelman’s position: if Google offers a superior product favored by all consumers, and if Google gains substantial market share because of this success as determined by consumers, we are collectively headed for serious problems redressable by regulation.  In these circumstances, given (1) the lack of consumer lock-in for search engine use, (2) the overwhelming evidence that vertical integration is generally pro-competitive, and (3) the fact that consumers are generally enjoying the use of free services — one might think that any consumer-minded regulatory approach would carefully attempt to identify and distinguish potentially anticompetitive conduct so as to minimize the burden to consumers from inevitable false positives.  With credit to antitrust and its hard-earned economic discipline, this is the approach suggested by modern antitrust doctrine.  U.S. antitrust law requires a demonstration that consumers will be harmed by a challenged practice — not merely rivals.  It is odd and troubling when an economist abandons the consumer welfare approach; it is yet more peculiar that an economist not only abandons the consumer welfare lodestar but also argues for (or at least presents an unequivocal willingness to accept) an ex ante prohibition on vertical integration altogether in this space.

I’ve no doubt that there are more sophisticated theories of which creative antitrust economists can conceive that come closer to satisfying the requirements of modern antitrust economics by focusing upon consumer welfare.  Certainly, the economists who identify those theories will have their shot at convincing the FTC.  Indeed, Section 5 might even open the door to theories ever so slightly more creative and more open-ended than those that would be taken seriously in a Sherman Act inquiry.  However, antitrust economists can and should remain intensely focused upon the impact of the conduct at issue — in this case, prominent algorithmic placement of Google’s own affiliated content in its rankings — on consumer welfare.  Professor Edelman’s views hearken back to the infamous days of antitrust that cast a pall over any business practice unpleasant for rivals — even if the practice delivered what consumers wanted.  Edelman’s theory is an offer to jeopardize consumers and protect rivals, and to brush the dust off antiquated antitrust theories and standards and apply them to today’s innovative online markets.  Modern antitrust has come a long way in its thinking over the past 50 years — too far to accept these competitor-centric theories of harm.