
On July 24, as part of their newly announced “Better Deal” campaign, congressional Democrats released an antitrust proposal (“Better Deal Antitrust Proposal” or BDAP) entitled “Cracking Down on Corporate Monopolies and the Abuse of Economic and Political Power.”  Unfortunately, this antitrust tract is really an “Old Deal” screed that rehashes long-discredited ideas about “bigness is badness” and “corporate abuses,” untethered from serious economic analysis.  (In spirit it echoes the proposal for a renewed emphasis on “fairness” in antitrust made by then-Acting Assistant Attorney General Renata Hesse in 2016 – a recommendation that ran counter to sound economics, as I explained in a September 2016 Truth on the Market commentary.)  Implementation of the BDAP’s recommendations would be a “worse deal” for American consumers and for American economic vitality and growth.

The BDAP’s Portrayal of the State of Antitrust Enforcement Is Factually Inaccurate, and It Ignores the Real Problems of Crony Capitalism and Regulatory Overreach

The Better Deal Antitrust Proposal begins with the assertion that antitrust has failed in recent decades:

Over the past thirty years, growing corporate influence and consolidation has led to reductions in competition, choice for consumers, and bargaining power for workers.  The extensive concentration of power in the hands of a few corporations hurts wages, undermines job growth, and threatens to squeeze out small businesses, suppliers, and new, innovative competitors.  It means higher prices and less choice for the things the American people buy every day. . .  [This is because] [o]ver the last thirty years, courts and permissive regulators have allowed large companies to get larger, resulting in higher prices and limited consumer choice in daily expenses such as travel, cable, and food and beverages.  And because concentrated market power leads to concentrated political power, these companies deploy armies of lobbyists to increase their stranglehold on Washington.  A Better Deal on competition means that we will revisit our antitrust laws to ensure that the economic freedom of all Americans—consumers, workers, and small businesses—come before big corporations that are getting even bigger.

This statement’s assertions are curious (not to mention problematic) in multiple respects.

First, since Democratic administrations have held the White House for sixteen of the past thirty years, the BDAP appears to acknowledge that Democratic presidents have overseen a failed antitrust policy.

Second, the broad claim that consumers have faced higher prices and limited choice with regard to their daily expenses is baseless.  Indeed, internet commerce and new business models have sharply reduced travel and entertainment costs for the bulk of American consumers, and new “high technology” products such as smartphones and electronic games have been characterized by dramatic improvements in innovation, enhanced variety, and relatively lower costs.  Cable suppliers face vibrant competition from satellite providers, fiber-optic suppliers (the major telcos, such as Verizon), and new online methods for distributing content.  Consumer price inflation has been extremely low in recent decades, compared to the high-inflation, less innovative environment of the 1960s and 1970s – decades when federal antitrust law was applied much more vigorously.  Thus, the claim that weaker antitrust has denied consumers “economic freedom” is at war with the truth.

Third, the claim that recent decades have seen the creation of “concentrated market power,” safe from antitrust challenge, ignores the fact that, over the last three decades, apolitical government antitrust officials under both Democratic and Republican administrations have applied well-accepted economic tools (wielded by the scores of Ph.D. economists at the Justice Department and Federal Trade Commission) in enforcing the antitrust laws.  Antitrust analysis has used economics to focus on inefficient business conduct that would maintain or increase market power; large numbers of cartels have been prosecuted, and questionable mergers (including a variety of major health care and communications industry mergers) have been successfully challenged.  The alleged growth of “concentrated market power,” untouched by incompetent antitrust enforcers, is a myth.  Furthermore, claims that mere corporate size and “aggregate concentration” are grounds for antitrust concern (“big is bad”) were decisively rejected by empirical economic research published in the 1970s, and are no more convincing today.  (As I pointed out in a January 2017 blog posting at this site, recent research by highly respected economists debunks claims that federal antitrust enforcers have been “excessively tolerant” in analyzing proposed mergers.)

More interesting is the BDAP’s claim that “armies of [corporate] lobbyists” manage to “increase their stranglehold on Washington.”  This is not an antitrust concern, however, but, rather, a complaint against crony capitalism and overregulation, which became an ever more serious problem under the Obama Administration.  As I explained in my October 2016 critique of the American Antitrust Institute’s September 2016 National Competition Policy Report (a report very similar in tone to the BDAP), the rapid growth of excessive regulation during the Obama years has diminished competition by creating new regulatory schemes that benefit entrenched and powerful firms (such as Dodd-Frank Act banking rules that impose excessive burdens on smaller banks).  My critique emphasized that, “as Dodd-Frank and other regulatory programs illustrate, large government rulemaking schemes often are designed to favor large and wealthy well-connected rent-seekers at the expense of smaller and more dynamic competitors.”  And, more generally, excessive regulatory burdens undermine the competitive process by distorting business decisions in a manner that detracts from competition on the merits.

It follows that, if the BDAP really wanted to challenge “unfair” corporate advantages, it would seek to roll back excessive regulation (see my November 2016 article on Trump Administration competition policy).  Indeed, the Trump Administration’s regulatory reform program (which features agency-specific regulatory reform task forces) seeks to do just that.  Perhaps then the BDAP could be rewritten to focus on endorsing President Trump’s regulatory reform initiative, rather than emphasizing a meritless “big is bad” populist antitrust policy that was consigned to the enforcement dustbin decades ago.

The BDAP’s Specific Proposals Would Harm the Economy and Reduce Consumer Welfare

Unfortunately, the BDAP does more than wax nostalgic about old-time “big is bad” antitrust policy.  It affirmatively recommends policy changes that would harm the economy.

First, the BDAP would require “a broader, longer-term view and strong presumptions that market concentration can result in anticompetitive conduct.”  Specifically, it would create “new standards to limit large mergers that unfairly consolidate corporate power,” including “mergers [that] reduce wages, cut jobs, lower product quality, limit access to services, stifle innovation, or hinder the ability of small businesses and entrepreneurs to compete.”  New standards would also “explicitly consider the ways in which control of consumer data can be used to stifle competition or jeopardize consumer privacy.”

Unlike current merger policy, which evaluates likely competitive effects (centered on price and quality) in economically relevant markets, these new standards are open-ended.  They could justify challenges based on such a wide variety of factors that they would incentivize direct competitors not to merge, even in cases where the proposed merged entity would prove more efficient and able to enhance quality or innovation.  Certain less efficient competitors – say, small businesses – could argue that they would be driven out of business, or that some jobs in the industry would disappear, in order to prompt government challenges.  But such challenges would tend to undermine innovation and business improvements, and the inevitable redistribution of assets to higher-valued uses that is a key benefit of corporate reorganizations and acquisitions.  (Firms might focus instead, for example, on inefficient conglomerate acquisitions among companies in unrelated industries, of the sort incentivized by the overly strict 1960s rules that prohibited mergers among direct competitors.)  Such a change would represent a retreat from economic common sense, and would be at odds with the economically sound consensus merger enforcement guidance that U.S. enforcers have long recommended other countries adopt.  Furthermore, questions of consumer data and privacy are more appropriately dealt with as consumer protection matters, which the Federal Trade Commission has handled successfully for years.

Second, the BDAP would require “frequent, independent [after-the-fact] reviews of mergers” and require regulators “to take corrective measures if they find abusive monopolistic conditions where previously approved [consent decree] measures fail to make good on their intended outcomes.”

While high-profile mergers subject to significant divestiture or other remedial requirements have, in appropriate circumstances, included monitoring requirements, the tenor of this recommendation is that far more mergers be subjected to detailed and ongoing post-acquisition reviews.  The cost of such monitoring is substantial, however, and routine reliance on it (backed by the threat of additional enforcement actions based merely on changing economic conditions) could create excessive caution in the post-merger management of newly consolidated enterprises.  Indeed, potential merger parties might decide in close cases that this sort of oversight is not worth accepting, and therefore call off potentially efficient transactions that would have enhanced economic welfare.  (The reality of enforcement error cost, and the possibility of misdiagnosis of post-merger competitive conditions, is not acknowledged by the BDAP.)

Third, a newly created “competition advocate” independent of the existing federal antitrust enforcers would be empowered to publicly recommend investigations, with the enforcers required to justify publicly why they chose not to pursue a particular recommended investigation.  The advocate would ensure that antitrust enforcers are held “accountable,” assure that complaints about “market exploitation and anticompetitive conduct” are heard, and publish data on “concentration and abuses of economic power” with demographic breakdowns.

This third proposal is particularly egregious.  It is at odds with the long tradition of prosecutorial discretion enjoyed by the federal antitrust enforcers (and law enforcers in general).  It would also empower a special-interest intervenor to promote the complaints of interest groups that object to efficiency-seeking business conduct, thereby undermining the careful economic and legal analysis consistently employed by the expert antitrust agencies.  The references to “concentration” and “economic power” make clear that the “advocate” would have an untrammeled ability to highlight non-economic objections to transactions raised by inefficient competitors, jealous rivals, or self-styled populists who object to excessive “bigness.”  This would strike at the heart of our competitive process, which presumes that private parties will be allowed to fulfill their own goals, free from government micromanagement, absent indications of a clear and well-defined violation of law.  In sum, the “competition advocate” is better viewed as a “special interest” advocate empowered to ignore normal legal constraints and unjustifiably interfere in business transactions.  If empowered to operate freely, such an advocate (better viewed as an albatross) would undoubtedly chill a wide variety of business arrangements, to the detriment of consumers and economic innovation.

Finally, the BDAP refers to a variety of ills said to afflict specific named industries, in particular airlines, cable/telecom, beer, food, and eyeglasses.  Airlines are subject to a variety of capacity limitations (limitations on landing slots and the size/number of airports) and regulatory constraints (prohibitions on foreign entry or investment) that may affect competitive conditions, but airline mergers are closely reviewed by the Justice Department.  Cable and telecom companies face a variety of federal, state, and local regulations, and their mergers also are closely scrutinized.  The BDAP’s reference to the proposed AT&T/Time Warner merger ignores the potential efficiencies of this “vertical” arrangement involving complementary assets (see my coauthored commentary here), and resorts to unsupported claims about wrongful “discrimination” by “behemoths” – issues that in any event are examined in antitrust merger reviews.  Similarly unsupported claims of harm to competition and consumer choice accompany the discussion of beer and agrochemical mergers, which also receive close, economically focused scrutiny under existing law.  Concerns raised about the price of eyeglasses ignore the role of potentially anticompetitive regulation – that is, bad government – in harming consumer welfare in this sector.  In short, the alleged competitive “problems” the BDAP raises with respect to particular industries are no more compelling than the rest of its analysis.  The Justice Department and Federal Trade Commission are hard at work applying sound economics to these sectors.  They should be left to do their jobs, and the BDAP’s industry-specific commentary (sadly, like the rest of its commentary) should be accorded no weight.

Conclusion

Congressional Democrats would be well-advised to ditch their efforts to resurrect the counterproductive antitrust policy from days of yore, and instead focus on real economic problems, such as excessive and inappropriate government regulation, as well as weak protection for U.S. intellectual property rights, here and abroad (see here, for example).  Such a change in emphasis would redound to the benefit of American consumers and producers.

Last week the editorial board of the Washington Post penned an excellent editorial responding to the European Commission’s announcement of its decision in its Google Shopping investigation. Here’s the key language from the editorial:

Whether the demise of any of [the complaining comparison shopping sites] is specifically traceable to Google, however, is not so clear. Also unclear is the aggregate harm from Google’s practices to consumers, as opposed to the unlucky companies. Birkenstock-seekers may well prefer to see a Google-generated list of vendors first, instead of clicking around to other sites…. Those who aren’t happy anyway have other options. Indeed, the rise of comparison shopping on giants such as Amazon and eBay makes concerns that Google might exercise untrammeled power over e-commerce seem, well, a bit dated…. Who knows? In a few years we might be talking about how Facebook leveraged its 2 billion users to disrupt the whole space.

That’s actually a pretty thorough, if succinct, summary of the basic problems with the Commission’s case (based on its press release (PR) and Factsheet, at least; the Commission hasn’t yet released the full decision).

I’ll have more to say on the decision in due course, but for now I want to elaborate on two of the points raised by the WaPo editorial board, both in service of its crucial rejoinder to the Commission that “Also unclear is the aggregate harm from Google’s practices to consumers, as opposed to the unlucky companies.”

First, the WaPo editorial board points out that:

Birkenstock-seekers may well prefer to see a Google-generated list of vendors first, instead of clicking around to other sites.

It is undoubtedly true that users “may well prefer to see a Google-generated list of vendors first.” That preference is also crucial to understanding the changes in Google’s search results page that have given rise to the current raft of complaints.

As I noted in a Wall Street Journal op-ed two years ago:

It’s a mistake to consider “general search” and “comparison shopping” or “product search” to be distinct markets.

From the moment it was technologically feasible to do so, Google has been adapting its traditional search results—that familiar but long since vanished page of 10 blue links—to offer more specialized answers to users’ queries. Product search, which is what is at issue in the EU complaint, is the next iteration in this trend.

Internet users today seek information from myriad sources: Informational sites (Wikipedia and the Internet Movie Database); review sites (Yelp and TripAdvisor); retail sites (Amazon and eBay); and social-media sites (Facebook and Twitter). What do these sites have in common? They prioritize certain types of data over others to improve the relevance of the information they provide.

“Prioritization” of Google’s own shopping results, however, is the core problem for the Commission:

Google has systematically given prominent placement to its own comparison shopping service: when a consumer enters a query into the Google search engine in relation to which Google’s comparison shopping service wants to show results, these are displayed at or near the top of the search results. (Emphasis in original).

But this sort of prioritization is the norm for all search, social media, e-commerce and similar platforms. And this shouldn’t be a surprise: The value of these platforms to the user is dependent upon their ability to sort the wheat from the chaff of the now immense amount of information coursing about the Web.

As my colleagues and I noted in a paper responding to a methodologically questionable report by Tim Wu and Yelp leveling analogous “search bias” charges in the context of local search results:

Google is a vertically integrated company that offers general search, but also a host of other products…. With its well-developed algorithm and wide range of products, it is hardly surprising that Google can provide not only direct answers to factual questions, but also a wide range of its own products and services that meet users’ needs. If consumers choose Google not randomly, but precisely because they seek to take advantage of the direct answers and other options that Google can provide, then removing the sort of “bias” alleged by [complainants] would affirmatively hurt, not help, these users. (Emphasis added).

And as Josh Wright noted in an earlier paper responding to yet another set of such “search bias” charges (in that case leveled in a similarly methodologically questionable report by Benjamin Edelman and Benjamin Lockwood):

[I]t is critical to recognize that bias alone is not evidence of competitive harm and it must be evaluated in the appropriate antitrust economic context of competition and consumers, rather [than] individual competitors and websites. Edelman & Lockwood’s analysis provides a useful starting point for describing how search engines differ in their referrals to their own content. However, it is not useful from an antitrust policy perspective because it erroneously—and contrary to economic theory and evidence—presumes natural and procompetitive product differentiation in search rankings to be inherently harmful. (Emphasis added).

We’ll have to see what kind of analysis the Commission relies upon in its decision to reach its conclusion that prioritization is an antitrust problem, but there is reason to be skeptical that it will turn out to be compelling. The Commission states in its PR that:

The evidence shows that consumers click far more often on results that are more visible, i.e. the results appearing higher up in Google’s search results. Even on a desktop, the ten highest-ranking generic search results on page 1 together generally receive approximately 95% of all clicks on generic search results (with the top result receiving about 35% of all the clicks). The first result on page 2 of Google’s generic search results receives only about 1% of all clicks. This cannot just be explained by the fact that the first result is more relevant, because evidence also shows that moving the first result to the third rank leads to a reduction in the number of clicks by about 50%. The effects on mobile devices are even more pronounced given the much smaller screen size.

This means that by giving prominent placement only to its own comparison shopping service and by demoting competitors, Google has given its own comparison shopping service a significant advantage compared to rivals. (Emphasis added).

Whatever truth there is to the claim that placement matters more than relevance in influencing user behavior, the evidence the Commission cites in support doesn’t seem applicable to what’s happening on Google’s search results page now.

Most crucially, the evidence offered by the Commission refers only to how placement affects clicks on “generic search results,” and glosses over the fact that the “prominent placement” of Google’s “results” reflects a difference not only in position but also in the type of result offered.
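To make the quoted figures concrete, here is a back-of-the-envelope sketch in Python. Only the four percentages are taken from the Commission’s PR; the derived numbers, and the assumption that the 50% demotion figure can be applied directly to the 35% top-result share, are purely my own illustration.

```python
# Back-of-the-envelope arithmetic using only the percentages quoted in the
# Commission's press release. The derived figures below are my own
# illustration, not numbers from the decision.

top10_share = 0.95    # top ten generic results: ~95% of clicks on generic results
rank1_share = 0.35    # top result: ~35% of all clicks
page2_first = 0.01    # first result on page 2: ~1% of all clicks
demotion_loss = 0.50  # moving the first result to rank 3: ~50% fewer clicks

# Implied click share if the top result were demoted to third position
# (assuming, hypothetically, the 50% figure applies to the 35% top result):
rank1_at_rank3 = rank1_share * (1 - demotion_loss)

# Click share left for generic results 2 through 10 combined:
rest_of_top10 = top10_share - rank1_share

print(f"top result demoted to rank 3: ~{rank1_at_rank3:.1%} of clicks")  # ~17.5%
print(f"results 2-10 combined:        ~{rest_of_top10:.0%} of clicks")   # ~60%
print(f"page-2 top result:            ~{page2_first:.0%} of clicks")     # ~1%
```

Note that all of these figures concern clicks among the ten blue links; they say nothing, on their own, about how users respond to the visually distinct universal search units discussed next.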

Google Shopping results (like many of its other “vertical results” and direct answers) are very different than the 10 blue links of old. These “universal search” results are, for one thing, actual answers rather than merely links to other sites. They are also more visually rich and attractively and clearly displayed.

Ironically, Tim Wu and Yelp use the claim that users click less often on Google’s universal search results to support their contention that increased relevance doesn’t explain Google’s prioritization of its own content. Yet, as we note in our response to their study:

[I]f a consumer is using a search engine in order to find a direct answer to a query rather than a link to another site to answer it, click-through would actually represent a decrease in consumer welfare, not an increase.

In fact, the study fails to incorporate this dynamic even though it is precisely what the authors claim the study is measuring.

Further, as the WaPo editorial intimates, these universal search results (including Google Shopping results) are quite plausibly more valuable to users. As even Tim Wu and Yelp note:

No one truly disagrees that universal search, in concept, can be an important innovation that can serve consumers.

Google sees it exactly this way, of course. Here’s Tim Wu and Yelp again:

According to Google, a principal difference between the earlier cases and its current conduct is that universal search represents a pro-competitive, user-serving innovation. By deploying universal search, Google argues, it has made search better. As Eric Schmidt argues, “if we know the answer it is better for us to answer that question so [the user] doesn’t have to click anywhere, and in that sense we… use data sources that are our own because we can’t engineer it any other way.”

Of course, in this case, one would expect fewer clicks to correlate with higher value to users — precisely the opposite of the claim made by Tim Wu and Yelp, which is the surest sign that their study is faulty.

But the Commission, at least according to the evidence cited in its PR, doesn’t even seem to measure the relative value of the very different presentations of information at all, instead resting on assertions rooted in the irrelevant difference in user propensity to click on generic (10 blue links) search results depending on placement.

Add to this Pinar Akman’s important point that Google Shopping “results” aren’t necessarily search results at all, but paid advertising:

[O]nce one appreciates the fact that Google’s shopping results are simply ads for products and Google treats all ads with the same ad-relevant algorithm and all organic results with the same organic-relevant algorithm, the Commission’s order becomes impossible to comprehend. Is the Commission imposing on Google a duty to treat non-sponsored results in the same way that it treats sponsored results? If so, does this not provide an unfair advantage to comparison shopping sites over, for example, Google’s advertising partners as well as over Amazon, eBay, various retailers, etc…?

Randy Picker also picks up on this point:

But those Google shopping boxes are ads, Picker told me. “I can’t imagine what they’re thinking,” he said. “Google is in the advertising business. That’s how it makes its money. It has no obligation to put other people’s ads on its website.”

The bottom line here is that the WaPo editorial board does a better job characterizing the actual, relevant market dynamics in a single sentence than the Commission seems to have done in its lengthy releases summarizing its decision following seven full years of investigation.

The second point made by the WaPo editorial board to which I want to draw attention is equally important:

Those who aren’t happy anyway have other options. Indeed, the rise of comparison shopping on giants such as Amazon and eBay makes concerns that Google might exercise untrammeled power over e-commerce seem, well, a bit dated…. Who knows? In a few years we might be talking about how Facebook leveraged its 2 billion users to disrupt the whole space.

The Commission dismisses this argument in its Factsheet:

The Commission Decision concerns the effect of Google’s practices on comparison shopping markets. These offer a different service to merchant platforms, such as Amazon and eBay. Comparison shopping services offer a tool for consumers to compare products and prices online and find deals from online retailers of all types. By contrast, they do not offer the possibility for products to be bought on their site, which is precisely the aim of merchant platforms. Google’s own commercial behaviour reflects these differences – merchant platforms are eligible to appear in Google Shopping whereas rival comparison shopping services are not.

But the reality is that “comparison shopping,” just like “general search,” is just one technology among many for serving information and ads to consumers online. Defining the relevant market or limiting the definition of competition in terms of the particular mechanism that Google (or Foundem, or Amazon, or Facebook…) happens to use doesn’t reflect the extent of substitutability between these different mechanisms.

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive. The same goes for comparison shopping.

And the fact that Amazon and eBay “offer the possibility for products to be bought on their site” doesn’t take away from the fact that they also “offer a tool for consumers to compare products and prices online and find deals from online retailers of all types.”  Not only do these sites contain enormous amounts of valuable (and well-presented) information about products, including product comparisons and consumer reviews, but they also actually offer comparisons among retailers.  In fact, fifty percent of the items sold through Amazon’s platform, for example, are sold by third-party retailers — the same sort of retailers that might also show up on a comparison shopping site.

More importantly, though, as the WaPo editorial rightly notes, “[t]hose who aren’t happy anyway have other options.” Google just isn’t the indispensable gateway to the Internet (and definitely not to shopping on the Internet) that the Commission seems to think.

Today over half of product searches in the US start on Amazon. The majority of web page referrals come from Facebook. Yelp’s most engaged users now access it via its app (which has seen more than 3x growth in the past five years). And a staggering 40 percent of mobile browsing on both Android and iOS now takes place inside the Facebook app.

Then there are “closed” platforms like the iTunes store and innumerable other apps that handle copious search traffic (including shopping-related traffic) but also don’t figure in the Commission’s analysis, apparently.

In fact, billions of users reach millions of companies every day through direct browser navigation, social media, apps, email links, review sites, blogs, and countless other means — all without once touching Google.com. So-called “dark social” interactions (email, text messages, and IMs) drive huge amounts of some of the most valuable traffic on the Internet, in fact.

All of this, in turn, has led to a competitive scramble to roll out completely new technologies to meet consumers’ informational (and merchants’ advertising) needs. The already-arriving swarm of VR, chatbots, digital assistants, smart-home devices, and more will offer even more interfaces besides Google through which consumers can reach their favorite online destinations.

The point is this: Google’s competitors complaining that the world is evolving around them don’t need to rely on Google. That they may choose to do so does not saddle Google with an obligation to ensure that they can always do so.

Antitrust laws — in Europe, no less than in the US — don’t require Google or any other firm to make life easier for competitors. That’s especially true when doing so would come at the cost of consumer-welfare-enhancing innovations. The Commission doesn’t seem to have grasped this fundamental point, however.

The WaPo editorial board gets it, though:

The immense size and power of all Internet giants are a legitimate focus for the antitrust authorities on both sides of the Atlantic. Brussels vs. Google, however, seems to be a case of punishment without crime.

I recently published a piece in The Hill welcoming the Canadian Supreme Court’s decision in Google v. Equustek. In this post I expand (at length) upon my assessment of the case.

In its decision, the Court upheld injunctive relief against Google, directing the company to de-index websites offering the infringing goods in question, regardless of the location of the sites (and even though Google itself was not a party to the case, nor in any way held liable for the infringement). As a result, the Court’s ruling affects Google’s conduct outside of Canada as well as within it.

The case raises some fascinating and thorny issues, but, in the end, the Court navigated them admirably.

Some others, however, were not so… welcoming of the decision (see, e.g., here and here).

The primary objection to the ruling seems to be, in essence, that it is the top of a slippery slope: “If Canada can do this, what’s to stop Iran or China from doing it? Free expression as we know it on the Internet will cease to exist.”

This is a valid concern, of course — in the abstract. But for reasons I explain below, we should see this case — and, more importantly, the approach adopted by the Canadian Supreme Court — as reassuring, not foreboding.

Some quick background on the exercise of extraterritorial jurisdiction in international law

The salient facts in, and the fundamental issue raised by, the case were neatly summarized by Hugh Stephens:

[The lower Court] issued an interim injunction requiring Google to de-index or delist (i.e. not return search results for) the website of a firm (Datalink Gateways) that was marketing goods online based on the theft of trade secrets from Equustek, a Vancouver, B.C., based hi-tech firm that makes sophisticated industrial equipment. Google wants to quash a decision by the lower courts on several grounds, primarily that the basis of the injunction is extra-territorial in nature and that if Google were to be subject to Canadian law in this case, this could open a Pandora’s box of rulings from other jurisdictions that would require global delisting of websites thus interfering with freedom of expression online, and in effect “break the Internet”.

The question of jurisdiction with regard to cross-border conduct is clearly complicated and evolving. But, in important ways, it isn’t anything new just because the Internet is involved. As Jack Goldsmith and Tim Wu (yes, Tim Wu) wrote (way back in 2006) in Who Controls the Internet?: Illusions of a Borderless World:

A government’s responsibility for redressing local harms caused by a foreign source does not change because the harms are caused by an Internet communication. Cross-border harms that occur via the Internet are not any different than those outside the Net. Both demand a response from governmental authorities charged with protecting public values.

As I have written elsewhere, “[g]lobal businesses have always had to comply with the rules of the territories in which they do business.”

Traditionally, courts have dealt with the extraterritoriality problem by applying a rule of comity. As my colleague Geoffrey Manne (Founder and Executive Director of ICLE) reminds me, the principle of comity largely originated in the work of the 17th-century Dutch legal scholar Ulrich Huber. Huber wrote that comitas gentium (“courtesy of nations”) required the application of foreign law in certain cases:

[Sovereigns will] so act by way of comity that rights acquired within the limits of a government retain their force everywhere so far as they do not cause prejudice to the powers or rights of such government or of their subjects.

And, notably, Huber wrote that:

Although the laws of one nation can have no force directly with another, yet nothing could be more inconvenient to commerce and to international usage than that transactions valid by the law of one place should be rendered of no effect elsewhere on account of a difference in the law.

The basic principle has been recognized and applied in international law for centuries. Of course, the flip side of the principle is that sovereign nations also get to decide for themselves whether to enforce foreign law within their jurisdictions. To summarize Huber (as well as Lord Mansfield, who brought the concept to England, and Justice Story, who brought it to the US):

All three jurists were concerned with deeply polarizing public issues — nationalism, religious factionalism, and slavery. For each, comity empowered courts to decide whether to defer to foreign law out of respect for a foreign sovereign or whether domestic public policy should triumph over mere courtesy. For each, the court was the agent of the sovereign’s own public law.

The Canadian Supreme Court’s well-reasoned and admirably restrained approach in Equustek

Reconciling the potential conflict between the laws of Canada and those of other jurisdictions was, of course, a central subject of consideration for the Canadian Court in Equustek. The Supreme Court, as described below, weighed a variety of factors in determining the appropriateness of the remedy. In analyzing the competing equities, the Supreme Court set out the following framework:

[I]s there a serious issue to be tried; would the person applying for the injunction suffer irreparable harm if the injunction were not granted; and is the balance of convenience in favour of granting the interlocutory injunction or denying it. The fundamental question is whether the granting of an injunction is just and equitable in all of the circumstances of the case. This will necessarily be context-specific. [Here, as throughout this post, bolded text represents my own, added emphasis.]

Applying that standard, the Court held that, because ordering an interlocutory injunction against Google was the only practical way to prevent Datalink from flouting the court’s several orders, and because there were no sufficient countervailing comity or freedom of expression concerns in this case that would counsel against such an order, the interlocutory injunction was appropriate.

I draw particular attention to the following from the Court’s opinion:

Google’s argument that a global injunction violates international comity because it is possible that the order could not have been obtained in a foreign jurisdiction, or that to comply with it would result in Google violating the laws of that jurisdiction is, with respect, theoretical. As Fenlon J. noted, “Google acknowledges that most countries will likely recognize intellectual property rights and view the selling of pirated products as a legal wrong”.

And while it is always important to pay respectful attention to freedom of expression concerns, particularly when dealing with the core values of another country, I do not see freedom of expression issues being engaged in any way that tips the balance of convenience towards Google in this case. As Groberman J.A. concluded:

In the case before us, there is no realistic assertion that the judge’s order will offend the sensibilities of any other nation. It has not been suggested that the order prohibiting the defendants from advertising wares that violate the intellectual property rights of the plaintiffs offends the core values of any nation. The order made against Google is a very limited ancillary order designed to ensure that the plaintiffs’ core rights are respected.

In fact, as Andrew Keane Woods writes at Lawfare:

Under longstanding conflicts of laws principles, a court would need to weigh the conflicting and legitimate governments’ interests at stake. The Canadian court was eager to undertake that comity analysis, but it couldn’t do so because the necessary ingredient was missing: there was no conflict of laws.

In short, the Canadian Supreme Court, while acknowledging the importance of comity and appropriate restraint in matters with extraterritorial effect, carefully weighed the equities in this case and found that they favored the grant of extraterritorial injunctive relief. As the Court explained:

Datalink [the direct infringer] and its representatives have ignored all previous court orders made against them, have left British Columbia, and continue to operate their business from unknown locations outside Canada. Equustek has made efforts to locate Datalink with limited success. Datalink is only able to survive — at the expense of Equustek’s survival — on Google’s search engine which directs potential customers to Datalink’s websites. This makes Google the determinative player in allowing the harm to occur. On balance, since the world‑wide injunction is the only effective way to mitigate the harm to Equustek pending the trial, the only way, in fact, to preserve Equustek itself pending the resolution of the underlying litigation, and since any countervailing harm to Google is minimal to non‑existent, the interlocutory injunction should be upheld.

As I have stressed, key to the Court’s reasoning was its close consideration of possible countervailing concerns and its entirely fact-specific analysis. By the very terms of the decision, the Court made clear that its balancing would not necessarily lead to the same result where sensibilities or core values of other nations would be offended. In this particular case, they were not.

How critics of the decision (and there are many) completely miss the true import of the Court’s reasoning

In other words, the holding in this case was a function of how, given the facts of the case, the ruling would affect the particular core concerns at issue: protection and harmonization of global intellectual property rights on the one hand, and concern for the “sensibilities of other nations,” including their concern for free expression, on the other.

This should be deeply reassuring to those now criticizing the decision. And yet… it’s not.

Whether because they haven’t actually read or properly understood the decision, or because they are merely grandstanding, some commenters are proclaiming that the decision marks the End Of The Internet As We Know It — you know, it’s going to break the Internet. Or something.

Human Rights Watch, an organization I generally admire, issued a statement including the following:

The court presumed no one could object to delisting someone it considered an intellectual property violator. But other countries may soon follow this example, in ways that more obviously force Google to become the world’s censor. If every country tries to enforce its own idea of what is proper to put on the Internet globally, we will soon have a race to the bottom where human rights will be the loser.

The British Columbia Civil Liberties Association added:

Here it was technical details of a product, but you could easily imagine future cases where we might be talking about copyright infringement, or other things where people in private lawsuits are wanting things to be taken down off the internet that are more closely connected to freedom of expression.

From the other side of the traditional (if insufficiently nuanced) “political spectrum,” AEI’s Ariel Rabkin asserted that

[O]nce we concede that Canadian courts can regulate search engine results in Turkey, it is hard to explain why a Turkish court shouldn’t have the reciprocal right. And this is no hypothetical — a Turkish court has indeed ordered Twitter to remove a user (AEI scholar Michael Rubin) within the United States for his criticism of Erdogan. Once the jurisdictional question is decided, it is no use raising free speech as an issue. Other countries do not have our free speech norms, nor Canada’s. Once Canada concedes that foreign courts have the right to regulate Canadian search results, they are on the internet censorship train, and there is no egress before the end of the line.

In this instance, in particular, it is worth noting not only the complete lack of acknowledgment of the Court’s articulated constraints on taking action with extraterritorial effect, but also the fact that Turkey (among others) has hardly been waiting for approval from Canada before taking action.   

And then there’s EFF (of course). EFF, fairly predictably, suggests first — with unrestrained hyperbole — that the Supreme Court held that:

A country has the right to prevent the world’s Internet users from accessing information.

Dramatic hyperbole aside, that’s also a stilted way to characterize the content at issue in the case. But it is important to EFF’s misleading narrative to begin with the assertion that offering infringing products for sale is “information” to which access by the public is crucial. But, of course, the distribution of infringing products is hardly “expression,” as most of us would understand that term. To claim otherwise is to denigrate the truly important forms of expression that EFF claims to want to protect.

And, it must be noted, even if there were expressive elements at issue, infringing “expression” is always subject to restriction under the copyright laws of virtually every country in the world (and free speech laws, where they exist).

Nevertheless, EFF writes that the decision:

[W]ould cut off access to information for U.S. users [and] would set a dangerous precedent for online speech. In essence, it would expand the power of any court in the world to edit the entire Internet, whether or not the targeted material or site is lawful in another country. That, we warned, is likely to result in a race to the bottom, as well-resourced individuals engage in international forum-shopping to impose one country’s restrictive laws regarding free expression on the rest of the world.

Beyond the flaws of the ruling itself, the court’s decision will likely embolden other countries to try to enforce their own speech-restricting laws on the Internet, to the detriment of all users. As others have pointed out, it’s not difficult to see repressive regimes such as China or Iran use the ruling to order Google to de-index sites they object to, creating a worldwide heckler’s veto.

As always with EFF missives, caveat lector applies: None of this is fair or accurate. EFF (like the other critics quoted above) is looking only at the result — the specific contours of the global order related to the Internet — and not to the reasoning of the decision itself.

Quite tellingly, EFF urges its readers to ignore the case in front of them in favor of a theoretical one. That is unfortunate. Were EFF, et al. to pay closer attention, they would be celebrating this decision as a thoughtful, restrained, respectful, and useful standard to be employed as a foundational decision in the development of global Internet governance.

The Canadian decision is (as I have noted, but perhaps still not with enough repetition…) predicated on achieving equity upon close examination of the facts, and giving due deference to the sensibilities and core values of other nations in making decisions with extraterritorial effect.

Properly understood, the ruling is a shield against intrusions that undermine freedom of expression, and not an attack on expression.

EFF subverts the reasoning of the decision and thus camouflages its true import, all for the sake of furthering its apparently limitless crusade against all forms of intellectual property. The ruling can be read as an attack on expression only if one ascribes to the distribution of infringing products the status of protected expression — so that’s what EFF does. But distribution of infringing products is not protected expression.

Extraterritoriality on the Internet is complicated — but that undermines, rather than justifies, critics’ opposition to the Court’s analysis

There will undoubtedly be other cases that present more difficult challenges than this one in defining the jurisdictional boundaries of courts’ abilities to address Internet-based conduct with multi-territorial effects. But the guideposts employed by the Supreme Court of Canada will be useful in informing such decisions.

Of course, some states don’t (or won’t, when it suits them) adhere to principles of comity. But that was true long before the Equustek decision. And, frankly, the notion that this decision gives nations like China or Iran political cover for global censorship is ridiculous. Nations that wish to censor the Internet will do so regardless. If anything, reference to this decision (which, let me spell it out again, highlights the importance of avoiding relief that would interfere with core values or sensibilities of other nations) would undermine their efforts.

Rather, the decision will be far more helpful in combating censorship and advancing global freedom of expression. Indeed, as noted by Hugh Stephens in a recent blog post:

While the EFF, echoed by its Canadian proxy OpenMedia, went into hyperventilation mode with the headline, “Top Canadian Court permits Worldwide Internet Censorship”, respected organizations like the Canadian Civil Liberties Association (CCLA) welcomed the decision as having achieved the dual objectives of recognizing the importance of freedom of expression and limiting any order that might violate that fundamental right. As the CCLA put it,

While today’s decision upholds the worldwide order against Google, it nevertheless reflects many of the freedom of expression concerns CCLA had voiced in our interventions in this case.

As I noted in my piece in the Hill, this decision doesn’t answer all of the difficult questions related to identifying proper jurisdiction and remedies with respect to conduct that has global reach; indeed, that process will surely be perpetually unfolding. But, as reflected in the comments of the Canadian Civil Liberties Association, it is a deliberate and well-considered step toward a fair and balanced way of addressing Internet harms.

With apologies for quoting myself, I noted the following in an earlier piece:

I’m not unsympathetic to Google’s concerns. As a player with a global footprint, Google is legitimately concerned that it could be forced to comply with the sometimes-oppressive and often contradictory laws of countries around the world. But that doesn’t make it — or any other Internet company — unique. Global businesses have always had to comply with the rules of the territories in which they do business… There will be (and have been) cases in which taking action to comply with the laws of one country would place a company in violation of the laws of another. But principles of comity exist to address the problem of competing demands from sovereign governments.

And as Andrew Keane Woods noted:

Global takedown orders with no limiting principle are indeed scary. But Canada’s order has a limiting principle. As long as there is room for Google to say to Canada (or France), “Your order will put us in direct and significant violation of U.S. law,” the order is not a limitless assertion of extraterritorial jurisdiction. In the instance that a service provider identifies a conflict of laws, the state should listen.

That is precisely what the Canadian Supreme Court’s decision contemplates.

No one wants an Internet based on the lowest common denominator of acceptable speech. Yet some appear to want an Internet based on the lowest common denominator for the protection of original expression. These advocates thus endorse theories of jurisdiction that would deny societies the ability to enforce their own laws, just because sometimes those laws protect intellectual property.

And yet that reflects little more than an arbitrary prioritization of those critics’ personal preferences. In the real world (including the real online world), protection of property is an important value, deserving reciprocity and courtesy (comity) as much as does speech. Indeed, the G20 Digital Economy Ministerial Declaration adopted in April of this year recognizes the importance to the digital economy of promoting security and trust, including through the provision of adequate and effective intellectual property protection. Thus the Declaration expresses the recognition of the G20 that:

[A]pplicable frameworks for privacy and personal data protection, as well as intellectual property rights, have to be respected as they are essential to strengthening confidence and trust in the digital economy.

Moving forward in an interconnected digital universe will require societies to make a series of difficult choices balancing both competing values and competing claims from different jurisdictions. Just as it does in the offline world, navigating this path will require flexibility and skepticism (if not rejection) of absolutism — including with respect to the application of fundamental values. Even things like freedom of expression, which naturally require a balancing of competing interests, will need to be reexamined. We should endeavor to find that fine line between allowing individual countries to enforce their own national judgments and a tolerance for those countries that have made different choices. This will not be easy, as is well illustrated by something Alice Marwick wrote earlier this year:

But a commitment to freedom of speech above all else presumes an idealistic version of the internet that no longer exists. And as long as we consider any content moderation to be censorship, minority voices will continue to be drowned out by their aggressive majority counterparts.

* * *

We need to move beyond this simplistic binary of free speech/censorship online. That is just as true for libertarian-leaning technologists as it is neo-Nazi provocateurs…. Aggressive online speech, whether practiced in the profanity and pornography-laced environment of 4Chan or the loftier venues of newspaper comments sections, positions sexism, racism, and anti-Semitism (and so forth) as issues of freedom of expression rather than structural oppression.

Perhaps we might want to look at countries like Canada and the United Kingdom, which take a different approach to free speech than does the United States. These countries recognize that unlimited free speech can lead to aggression and other tactics which end up silencing the speech of minorities — in other words, the tyranny of the majority. Creating online communities where all groups can speak may mean scaling back on some of the idealism of the early internet in favor of pragmatism. But recognizing this complexity is an absolutely necessary first step.

While I (and the Canadian Supreme Court, for that matter) share EFF’s unease over the scope of extraterritorial judgments, I fundamentally disagree with EFF that the Equustek decision “largely sidesteps the question of whether such a global order would violate foreign law or intrude on Internet users’ free speech rights.”

In fact, it is EFF’s position that comes much closer to indifference to the laws and values of other countries; EFF’s position would essentially always prioritize the particular speech values adopted in the US, regardless of whether they had been adopted by the countries affected in a dispute. It is therefore inconsistent with the true nature of comity.

Absolutism and exceptionalism will not be a sound foundation for achieving global consensus and the effective operation of law. As stated by the Canadian Supreme Court in Equustek, courts should enforce the law — whatever the law is — to the extent that such enforcement does not substantially undermine the core sensitivities or values of nations where the order will have effect.

EFF ignores the process in which the Court engaged precisely because EFF — not another country, but EFF — doesn’t find the enforcement of intellectual property rights to be compelling. But that unprincipled approach would naturally lead in a different direction where the court sought to protect a value that EFF does care about. Such a position arbitrarily elevates EFF’s idiosyncratic preferences. That is simply not a viable basis for constructing good global Internet governance.

If the Internet is both everywhere and nowhere, our responses must reflect that reality, and be based on the technology-neutral application of laws, not the abdication of responsibility premised upon an outdated theory of tech exceptionalism under which cyberspace is free from the application of the laws of sovereign nations. That is not the path to either freedom or prosperity.

To realize the economic and social potential of the Internet, we must be guided by both a determination to meaningfully address harms, and a sober reservation about interfering in the affairs of other states. The Supreme Court of Canada’s decision in Google v. Equustek has planted a flag in this space. It serves no one to pretend that the Court decided that a country has the unfettered right to censor the Internet. That’s not what it held — and we should be grateful for that. To suggest otherwise may indeed be self-fulfilling.

Regardless of the merits and soundness (or lack thereof) of this week’s European Commission Decision in the Google Shopping case — one cannot assess this until we have the text of the decision — two comments really struck me during the press conference.

First, it was said that Google’s conduct had essentially reduced innovation. If I heard correctly, this is a formidable statement. In 2016, another official EU service published statistics describing Alphabet as having increased its R&D spending by 22% and ranking it as the world’s fourth-largest R&D investor. Sure, it can always be better. And sure, this does not excuse everything. But still. The press conference language on incentives to innovate was a bit of an oversell, to say the least.

Second, the Commission views this decision as a “precedent” or as a “framework” that will inform the way dominant Internet platforms should display, intermediate and market their services and those of their competitors. This may fuel additional complaints by other vertical search rivals, not only against (i) Google in relation to other product lines, but also against (ii) other large platform players.

Beyond this, the Commission’s approach raises a gazillion questions of law and economics. Pending the disclosure of the economic evidence in the published decision, let me share some thoughts on a few (arbitrarily) selected legal issues.

First, the Commission has drawn a lesson from the Microsoft remedy quagmire: it refrains from using a trustee to ensure compliance with the decision. This had been a bone of contention in the 2007 Microsoft appeal. Readers will recall that the Commission had required Microsoft to appoint a monitoring trustee, who was supposed to advise on possible infringements in the implementation of the decision. On appeal, the Court eventually held that the Commission was solely responsible for monitoring compliance, and could not delegate those powers. Sure, the Commission could “retai[n] its own external expert to provide advice when it investigates the implementation of the remedies.” But no more than that.

Second, we learn that the Commission is no longer in the business of software design. Recall the failed untying of WMP and Windows — Windows Naked sold only 11,787 copies, likely bought by tech bootleggers willing to acquire the first piece of software ever designed by antitrust officials — or the browser “Choice Screen” compliance saga, which eventually culminated in a €561 million fine. None of that is found here. The Commission leaves remedial design to the abstract concept of “equal treatment”.[1] This, certainly, is a (relatively) commendable approach, and one that could inspire remedies in other unilateral conduct cases, in particular exploitative conduct cases, where pricing remedies are costly, impractical, and ultimately inefficient.

On the other hand, readers will also not fail to see the corollary implication of “equal treatment”: search neutrality could actually cut both ways, and lead to a lawful degradation in consumer welfare if Google were ever to decide to abandon rich format displays for both its own shopping services and those of rivals.

Third, neither big data nor algorithmic design is directly vilified in the case (“The Commission Decision does not object to the design of Google’s generic search algorithms or to demotions as such, nor to the way that Google displays or organises its search results pages”). Rather, the Commission objects to the selective application of Google’s generic search algorithms to its own products. This is an interesting, and subtle, clarification given all the coverage that this topic has attracted in recent antitrust literature. We are in fact very close to a run-of-the-mill claim of disguised market manipulation, not one causally related to data or algorithmic technology.

Fourth, Google said it contemplated a possible appeal of the decision. Now, here’s a challenging question: can an antitrust defendant effectively exercise its right to judicial review of an administrative agency (and more generally its rights of defense) when it operates under the threat of antitrust sanctions in ongoing parallel cases investigated by the same agency (i.e., the antitrust inquiries related to Android and Ads)? This question cuts deeper than the Google Shopping case. Say firm A contemplates a merger with firm B in market X while it is at the same time subject to antitrust investigations in market Z. And assume that X and Z are neither substitutes nor complements, so there is little competitive relationship between the two products. Can the Commission leverage ongoing antitrust investigations in market Z to extract merger concessions in market X? Perhaps more to the point, can the firm interact with the Commission as if the investigations are completely distinct, or does it have to play a more nuanced game and consider the ramifications of its interactions with the Commission in both markets?

Fifth, as to the odds of a possible appeal, I don’t believe that arguments on the economic evidence or the legal theory of liability will be successful before the General Court of the EU. The law and doctrine in unilateral conduct cases are disturbingly — and almost irrationally — severe. As I have noted elsewhere, the bottom line in the EU case-law on unilateral conduct is to treat the genuine requirement of “harm to competition” as a rhetorical question, not an empirical one. In EU unilateral conduct law, exclusion of any and every firm is a per se concern, regardless of evidence of efficiency, entry or rivalry.

In turn, I tend to think that Google has a stronger game from a procedural standpoint, having been left with (i) the expectation of a settlement (it played ball three times by making proposals); (ii) a corollary expectation of the absence of a fine (settlement discussions are not appropriate for cases that could end with fines); and (iii) seven long years under an investigatory cloud. We know from the past that EU judges like procedural issues, but are comparatively less fond of debating the substance of the law in unilateral conduct cases. This case could thus be a test case for setting boundaries on how freely the Commission can U-turn in a case (the Commissioner said she would “take the case forward in a different way”).

It’s fitting that FCC Chairman Ajit Pai recently compared his predecessor’s jettisoning of the FCC’s light touch framework for Internet access regulation without hard evidence to the Oklahoma City Thunder’s James Harden trade. That infamous deal broke up a young nucleus of three of the best players in the NBA in 2012 because keeping all three might someday create salary cap concerns. What few saw coming was a new TV deal in 2015 that sent the salary cap soaring.

If it’s hard to predict how the market will evolve in the closed world of professional basketball, predictions about the path of Internet innovation are an order of magnitude harder — especially for those making crucial decisions with a lot of money at stake.

The FCC’s answer for what it considered to be the dangerous unpredictability of Internet innovation was to write itself a blank check of authority to regulate ISPs in the 2015 Open Internet Order (OIO), embodied in what is referred to as the “Internet conduct standard.” This standard expanded the scope of Internet access regulation well beyond the core principle of preserving openness (i.e., ensuring that any legal content can be accessed by all users) by granting the FCC the unbounded, discretionary authority to define and address “new and novel threats to the Internet.”

When asked what the standard meant (not long after writing it), former Chairman Tom Wheeler replied,

We don’t really know. We don’t know where things will go next. We have created a playing field where there are known rules, and the FCC will sit there as a referee and will throw the flag.

Somehow, former Chairman Wheeler would have us believe that an amorphous standard that means whatever the agency (or its Enforcement Bureau) says it means created a playing field with “known rules.” But claiming such broad authority is hardly the light-touch approach marketed to the public. Instead, this ill-conceived standard allows the FCC to wade as deeply as it chooses into how an ISP organizes its business and how it manages its network traffic.

Such an approach is destined to undermine, rather than further, the objectives of Internet openness, as embodied in Chairman Powell’s 2005 Internet Policy Statement:

To foster creation, adoption and use of Internet broadband content, applications, services and attachments, and to ensure consumers benefit from the innovation that comes from competition.

Instead, the Internet conduct standard is emblematic of how an off-the-rails quest to heavily regulate one specific component of the complex Internet ecosystem results in arbitrary regulatory imbalances — e.g., between ISPs and over-the-top (OTT) or edge providers that offer similar services such as video streaming or voice calling.

As Boston College law professor Dan Lyons puts it:

While many might assume that, in theory, what’s good for Netflix is good for consumers, the reality is more complex. To protect innovation at the edge of the Internet ecosystem, the Commission’s sweeping rules reduce the opportunity for consumer-friendly innovation elsewhere, namely by facilities-based broadband providers.

This is no recipe for innovation, nor does it coherently distinguish between practices that might impede competition and innovation on the Internet and those that are merely politically disfavored, for any reason or no reason at all.

Free data madness

The Internet conduct standard’s unholy combination of unfettered discretion and the impulse to micromanage can (and will) be deployed without credible justification to the detriment of consumers and innovation. Nowhere has this been more evident than in the confusion surrounding the regulation of “free data.”

Free data, like T-Mobile’s Binge On program, is data consumed by a user that has been subsidized by a mobile operator or a content provider. The vertical arrangements between operators and content providers creating the free data offerings provide many benefits to consumers, including enabling subscribers to consume more data (or, for low-income users, to consume data in the first place), facilitating product differentiation by mobile operators that offer a variety of free data plans (including allowing smaller operators the chance to get a leg up on competitors by assembling a market-share-winning plan), increasing the overall consumption of content, and reducing users’ cost of obtaining information. It’s also fundamentally about experimentation. As the International Center for Law & Economics (ICLE) recently explained:

Offering some services at subsidized or zero prices frees up resources (and, where applicable, data under a user’s data cap) enabling users to experiment with new, less-familiar alternatives. Where a user might not find it worthwhile to spend his marginal dollar on an unfamiliar or less-preferred service, differentiated pricing loosens the user’s budget constraint, and may make him more, not less, likely to use alternative services.

In December 2015 then-Chairman Tom Wheeler used his newfound discretion to launch a 13-month “inquiry” into free data practices before preliminarily finding some to be in violation of the standard. Without identifying any actual harm, Wheeler concluded that free data plans “may raise” economic and public policy issues that “may harm consumers and competition.”

After assuming the reins at the FCC, Chairman Pai swiftly put an end to that nonsense, saying that the Commission had better things to do (like removing barriers to broadband deployment) than taking away free data plans that expand Internet access and are immensely popular, especially among low-income Americans.

The global morass of free data regulation

But as long as the Internet conduct standard remains on the books, it implicitly grants the US’s imprimatur to harmful policies and regulatory capriciousness in other countries that look to the US for persuasive authority. While Chairman Pai’s decisive intervention resolved the free data debate in the US (at least for now), other countries are still grappling with whether to prohibit the practice, allow it, or allow it with various restrictions.

In Europe, the 2016 EC guidelines left the decision of whether to allow the practice in the hands of national regulators. Consequently, some regulators — in Hungary, Sweden, and the Netherlands (although there the ban was recently overturned in court) — have banned free data practices, while others — in Denmark, Germany, Spain, Poland, the United Kingdom, and Ukraine — have not. And whether or not they allow the practice, regulators (e.g., Norway’s Nkom and the UK’s Ofcom) have lamented the lack of regulatory certainty surrounding free data programs, a state of affairs that is compounded by a lack of data on the consequences of various approaches to their regulation.

In Canada this year, the CRTC issued a decision adopting restrictive criteria under which to evaluate free data plans. The criteria include assessing the degree to which the treatment of data is agnostic, whether the free data offer is exclusive to certain customers or certain content providers, the impact on Internet openness and innovation, and whether there is financial compensation involved. The standard is open-ended, and free data plans as they are offered in the US would “likely raise concerns.”

Other regulators are contributing to the confusion through ambiguously framed rules, such as those of the Chilean regulator, Subtel. In a 2014 decision, it found that a free data offer of specific social network apps was in breach of Chile’s Internet rules. In contrast to what is commonly reported, however, Subtel did not ban free data. Instead, it required mobile operators to change how they promote such services, requiring them to state that access to Facebook, Twitter and WhatsApp was offered “without discounting the user’s balance” instead of “at no cost.” It also required them to disclose the amount of time the offer would be available, but imposed no mandatory limit.

In addition to this confusing regulatory make-work governing how operators market free data plans, the Chilean measures also require that mobile operators offer free data to subscribers who pay for a data plan, in order to ensure free data isn’t the only option users have to access the Internet.

The result is that in Chile today free data plans are widely offered by Movistar, Claro, and Entel and include access to apps such as Facebook, WhatsApp, Twitter, Instagram, Pokemon Go, Waze, Snapchat, Apple Music, Spotify, Netflix or YouTube — even though Subtel has nominally declared such plans to be in violation of Chile’s net neutrality rules.

Other regulators are searching for palatable alternatives that let them flex their regulatory muscle over Internet access while still making free data work. The Indian regulator, TRAI, famously banned free data in February 2016. But the story doesn’t end there. After seeing the potential value of free data in unserved and underserved, low-income areas, TRAI proposed implementing government-sanctioned free data. The proposed scheme would provide rural subscribers with 100 MB of free data per month, funded through the country’s universal service fund. To ensure that there would be no vertical agreements between content providers and mobile operators, TRAI recommended introducing third parties, referred to as “aggregators,” that would facilitate mobile-operator-agnostic arrangements.

The result is a nonsensical, if vaguely well-intentioned, threading of the needle between the perceived need to (over-)regulate access providers and the determination to expand access. Notwithstanding the Indian government’s awareness that free data will help to close the digital divide and enhance Internet access, in other words, it nonetheless banned private markets from employing private capital to achieve that very result, preferring instead non-market processes which are unlikely to be nearly as nimble or as effective — and yet still ultimately offer “non-neutral” options for consumers.

Thinking globally, acting locally (by ditching the Internet conduct standard)

Where it is permitted, free data is undergoing explosive adoption among mobile operators. Currently in the US, for example, all major mobile operators offer some form of free data or unlimited plan to subscribers. And, as a result, free data is proving itself as a business model for users’ early stage experimentation and adoption of augmented reality, virtual reality and other cutting-edge technologies that represent the Internet’s next wave — but that also use vast amounts of data. Were the US to cut free data off at the knees under the OIO absent hard evidence of harm, it would substantially undermine this innovation.

The application of the nebulous Internet conduct standard to free data is a microcosm of the current incoherence: the rule is rife with uncertainty and aimed at merely theoretical problems, needlessly saddling companies with enforcement risk, all in the name of preserving and promoting innovation and openness. As even some of the staunchest proponents of net neutrality have recognized, only companies that can afford years of litigation can be expected to thrive in such an environment.

In the face of confusion and uncertainty globally, the US is now poised to provide leadership grounded in sound policy that promotes innovation. As ICLE noted last month, Chairman Pai took a crucial step toward re-imposing economic rigor and the rule of law at the FCC by questioning the unprecedented and ill-supported expansion of FCC authority that undergirds the OIO in general and the Internet conduct standard in particular. Today the agency will take the next step by voting on Chairman Pai’s proposed rulemaking. Wherever the new proceeding leads, it’s a welcome opportunity to analyze the issues with a degree of rigor that has thus far been appallingly absent.

And we should not forget that there’s a direct solution to these ambiguities that would avoid the undulations of subsequent FCC policy fights: Congress could (and should) pass legislation implementing a regulatory framework grounded in sound economics and empirical evidence, one that allows consumers to benefit from the vast number of procompetitive vertical agreements (such as free data plans) while still providing a means for policing conduct that may actually harm consumers.

The Golden State Warriors are the heavy odds-on favorite to win another NBA Championship this summer, led by former OKC player Kevin Durant. And James Harden is a contender for league MVP. We can’t always turn back the clock on a terrible decision, hastily made before enough evidence has been gathered, but Chairman Pai’s efforts present a rare opportunity to do so.

Today the International Center for Law & Economics (ICLE) Antitrust and Consumer Protection Research Program released a new white paper by Geoffrey A. Manne and Allen Gibby entitled:

A Brief Assessment of the Procompetitive Effects of Organizational Restructuring in the Ag-Biotech Industry

Over the past two decades, rapid technological innovation has transformed the industrial organization of the ag-biotech industry. These developments have contributed to an impressive increase in crop yields, a dramatic reduction in chemical pesticide use, and a substantial increase in farm profitability.

One of the most striking characteristics of this organizational shift has been a steady increase in consolidation. The recent announcements of mergers between Dow and DuPont, ChemChina and Syngenta, and Bayer and Monsanto suggest that these trends are continuing in response to new market conditions and a marked uptick in scientific and technological advances.

Regulators and industry watchers are often concerned that increased consolidation will lead to reduced innovation, and a greater incentive and ability for the largest firms to foreclose competition and raise prices. But ICLE’s examination of the underlying competitive dynamics in the ag-biotech industry suggests that such concerns are likely unfounded.

In fact, R&D spending within the seeds and traits industry increased nearly 773% between 1995 and 2015 (from roughly $507 million to $4.4 billion), while the combined market share of the six largest companies in the segment increased by more than 550% (from about 10% to over 65%) during the same period.

Firms today are consolidating in order to innovate and remain competitive in an industry replete with new entrants and rapidly evolving technological and scientific developments.

According to ICLE’s analysis, critics have unduly focused on the potential harms from increased integration, without properly accounting for the potential procompetitive effects. Our brief white paper highlights these benefits and suggests that a more nuanced and restrained approach to enforcement is warranted.

Our analysis suggests that, as in past periods of consolidation, the industry is well positioned to see an increase in innovation as these new firms unite complementary expertise to pursue more efficient and effective research and development. They should also be better able to help finance, integrate, and coordinate development of the latest scientific and technological advances — particularly in rapidly growing, data-driven “digital farming” — throughout the industry.

Download the paper here.

And for more on the topic, revisit TOTM’s recent blog symposium, “Agricultural and Biotech Mergers: Implications for Antitrust Law and Economics in Innovative Industries,” here.

According to Cory Doctorow over at Boing Boing, Tim Wu has written an open letter to W3C Chairman Sir Timothy Berners-Lee, expressing concern about a proposal to include Encrypted Media Extensions (EME) as part of the W3C standards. W3C has a helpful description of EME:

Encrypted Media Extensions (EME) is currently a draft specification… [for] an Application Programming Interface (API) that enables Web applications to interact with content protection systems to allow playback of encrypted audio and video on the Web. The EME specification enables communication between Web browsers and digital rights management (DRM) agent software to allow HTML5 video playback of DRM-wrapped content such as streaming video services without third-party media plugins. This specification does not create nor impose a content protection or Digital Rights Management system. Rather, it defines a common API that may be used to discover, select and interact with such systems as well as with simpler content encryption systems.

Wu’s letter expresses his concern about hardwiring DRM into the technical standards supporting an open internet. He writes:

I wanted to write to you and respectfully ask you to seriously consider extending a protective covenant to legitimate circumventers who have cause to bypass EME, should it emerge as a W3C standard.

Wu asserts that this “protective covenant” is needed because, without it, EME will confer too much power on internet “chokepoints”:

The question is whether the W3C standard with an embedded DRM standard, EME, becomes a tool for suppressing competition in ways not expected…. Control of chokepoints has always and will always be a fundamental challenge facing the Internet as we both know… It is not hard to recall how close Microsoft came, in the late 1990s and early 2000s, to gaining de facto control over the future of the web (and, frankly, the future) in its effort to gain an unsupervised monopoly over the browser market.

But conflating the Microsoft case with a relatively simple browser feature meant to enable all content providers to use any third-party DRM to secure their content — in other words, to enhance interoperability — is beyond the pale. If we take the Microsoft case as Wu would like, it was about one firm controlling, far and away, the largest share of desktop computing installations, a position that Wu and his fellow travelers believed gave Microsoft an unreasonable leg up in forcing usage of Internet Explorer to the exclusion of Netscape. With EME, the W3C is not maneuvering the standard so that a single DRM provider comes to protect all content on the web, or could even hope to do so. EME enables content distributors to stream content through browsers using their own DRM backend. There is simply nothing in that standard that enables a firm to dominate content distribution or control huge swaths of the Internet to the exclusion of competitors.

Unless, of course, you just don’t like DRM and you think that any technology that enables content producers to impose restrictions on consumption of media creates a “chokepoint.” But, again, this position is borderline nonsense. Such a “chokepoint” is no more restrictive than just going to Netflix’s app (or Hulu’s, or HBO’s, or Xfinity’s, or…) and relying on its technology. And while it is no more onerous than visiting Netflix’s app, it creates greater security on the open web, such that copyright owners don’t need to resort to proprietary technologies and apps for distribution. And, more fundamentally, Wu’s position ignores the role that access and usage controls are playing in creating online markets through diversified product offerings.

Wu appears to believe, or would have his readers believe, that W3C is considering the adoption of a mandatory standard that would modify core aspects of the network architecture, and that therefore presents novel challenges to the operation of the internet. But this is wrong in two key respects:

  1. Except in the extremely limited manner described below by the W3C, the EME extension does not contain mandates, and is designed only to simplify the user experience in accessing content that would otherwise require plug-ins; and
  2. These extensions are already incorporated into the major browsers. And of course, most importantly for present purposes, the standard in no way defines or harmonizes the use of DRM.

The W3C has clearly and succinctly explained the operation of the proposed extension:

The W3C is not creating DRM policies and it is not requiring that HTML use DRM. Organizations choose whether or not to have DRM on their content. The EME API can facilitate communication between browsers and DRM providers but the only mandate is not DRM but a form of key encryption (Clear Key). EME allows a method of playback of encrypted content on the Web but W3C does not make the DRM technology nor require it. EME is an extension. It is not required for HTML nor HTML5 video.
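To make concrete just how limited the extension is, here is a minimal sketch of the EME flow using the baseline Clear Key system (the only key system the specification obliges browsers to support). The API calls are those defined in the EME draft; the license-building helper and the key values are hypothetical placeholders, not anything the standard prescribes:

```ts
// Hypothetical helper: answers a Clear Key license request with a JSON Web
// Key set. An app using a commercial DRM would instead forward the request
// to that vendor's license server and pass back its response.
function buildClearKeyLicense(_request: ArrayBuffer): Uint8Array {
  const license = {
    // Placeholder base64url-encoded 128-bit key ID and key value.
    keys: [{ kty: 'oct', kid: 'LwVHf8JLtPrv2GUXFW2v_A', k: 'vYmzTy3xjpOCmwlWcAskuw' }],
    type: 'temporary',
  };
  return new TextEncoder().encode(JSON.stringify(license));
}

async function setUpClearKey(video: HTMLVideoElement): Promise<void> {
  // Ask the browser whether it supports Clear Key for this kind of content.
  const access = await navigator.requestMediaKeySystemAccess('org.w3.clearkey', [{
    initDataTypes: ['keyids'],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
  }]);
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  // When the stream signals encrypted media, open a key session; the
  // session's 'message' event carries the license exchange.
  video.addEventListener('encrypted', async (event: MediaEncryptedEvent) => {
    const session = mediaKeys.createSession();
    session.addEventListener('message', (msg: MediaKeyMessageEvent) => {
      void session.update(buildClearKeyLicense(msg.message));
    });
    await session.generateRequest(event.initDataType, event.initData!);
  });
}
```

Note what the sketch makes plain: the page, not the W3C, chooses the key system. Swapping the 'org.w3.clearkey' string for a vendor's key system identifier is, roughly speaking, the only change needed to use a different DRM backend, which is precisely the interoperability point.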

Like many internet commentators, Tim Wu fundamentally doesn’t like DRM, and his position here would appear to reflect his aversion to DRM rather than a response to the specific issues before the W3C. Interestingly, in arguing against DRM nearly a decade ago, Wu wrote:

Finally, a successful locking strategy also requires intense cooperation between many actors – if you protect a song with “superlock,” and my CD player doesn’t understand that, you’ve just created a dead product. (Emphasis added)

In other words, he understood the need for agreements in vertical distribution chains in order to properly implement protection schemes — integration that he opposes here (not to suggest that he supported them then, but only to highlight the disconnect between recognizing the need for coordination and simultaneously trying to prevent it).

Vint Cerf (himself no great fan of DRM — see here, for example) has offered a number of thoughtful responses to those, like Wu, who have objected to the proposed standard. Cerf writes on the ISOC listserv:

EME is plainly very general. It can be used to limit access to virtually any digital content, regardless of IPR status. But, in some sense, anyone wishing to restrict access to some service/content is free to do so (there are other means such as login access control, end/end encryption such as TLS or IPSEC or QUIC). EME is yet another method for doing that. Just because some content is public domain does not mean that every use of it must be unprotected, does it?

And later in the thread he writes:

Just because something is public domain does not mean someone can’t lock it up. Presumably there will be other sources that are not locked. I can lock up my copy of Gulliver’s Travels and deny you access except by some payment, but if it is public domain someone else may have a copy you can get. In any case, you can’t deny others the use of the content IF THEY HAVE IT. You don’t have to share your copy of public domain with anyone if you don’t want to.

Just so. It’s pretty hard to see the competition problems that could arise from facilitating more content providers making content available on the open web.

In short, Wu wants the W3C to develop limitations on rules when there are no relevant rules to modify. His dislike of DRM obscures his view of the limited nature of the EME proposal, which would largely track, rather than lead, the actions already being undertaken by the principal commercial actors on the internet, and which merely creates a structure for facilitating voluntary commercial transactions in ways that enhance the user experience.

The W3C process will not, as Wu intimates, introduce some pernicious, default protection system that would inadvertently lock down content; rather, it would encourage the development of digital markets on the open net rather than (or in addition to) through the proprietary, vertical markets where they are increasingly found today. Wu obscures reality rather than illuminating it through his poorly considered suggestion that EME will somehow lead to a new set of defaults that threaten core freedoms.

Finally, we can’t help but comment on Wu’s observation that

My larger point is that I think the history of the anti-circumvention laws suggests is (sic) hard to predict how [freedom would be affected]– no one quite predicted the inkjet market would be affected. But given the power of those laws, the potential for anti-competitive consequences certainly exists.

Let’s put aside the fact that W3C is not debating the laws surrounding circumvention, nor, as noted, developing usage rules. It remains troubling that Wu’s belief that there are sometimes unintended consequences of actions (and therefore a potential for harm) would be sufficient to lead him to oppose a change to the status quo — as if any future, potential risk necessarily outweighs present, known harms. This is the Precautionary Principle on steroids. The EME proposal grew out of a desire to remove impediments to the viability and growth of online markets that could sufficiently ameliorate the non-hypothetical harms of unauthorized uses. The EME proposal is a modest step towards addressing a known universe of problems. A small step, but something to celebrate, not bemoan.

On Thursday, March 30, Friday March 31, and Monday April 3, Truth on the Market and the International Center for Law and Economics presented a blog symposium — Agricultural and Biotech Mergers: Implications for Antitrust Law and Economics in Innovative Industries — discussing three proposed agricultural/biotech industry mergers awaiting judgment by antitrust authorities around the globe. These proposed mergers — Bayer/Monsanto, Dow/DuPont and ChemChina/Syngenta — present a host of fascinating issues, many of which go to the core of merger enforcement in innovative industries — and antitrust law and economics more broadly.

The big issue for the symposium participants was innovation (as it was for the European Commission, which cleared the Dow/DuPont merger last week, subject to conditions, one of which related to the firms’ R&D activities).

Critics of the mergers, as currently proposed, asserted that the increased concentration arising from the “Big 6” Ag-biotech firms consolidating into the Big 4 could reduce innovation competition by (1) eliminating parallel paths of research and development (Moss); (2) creating highly integrated technology/traits/seeds/chemicals platforms that erect barriers to entry for new platforms (Moss); (3) exploiting eventual network effects that may result from the shift towards data-driven agriculture to block new entry in input markets (Lianos); or (4) increasing incentives to refuse to license, impose discriminatory restrictions in technology licensing agreements, or tacitly “agree” not to compete (Moss).

Rather than fixating on horizontal market share, proponents of the mergers argued that innovative industries are often marked by disruptions and that investment in innovation is an important signal of competition (Manne). An evaluation of the overall level of innovation should include not only the additional economies of scale and scope of the merged firms, but also advancements made by more nimble, less risk-averse biotech companies and smaller firms, whose innovations the larger firms can incentivize through licensing or M&A (Shepherd). In fact, increased efficiency created by economies of scale and scope can make funds available to source innovation outside of the large firms (Shepherd).

In addition, innovation analysis must also account for the intricately interwoven nature of agricultural technology across seeds and traits, crop protection, and, now, digital farming (Sykuta). Combined product portfolios generate more data to analyze, resulting in increased data-driven value for farmers and more efficiently targeted R&D resources (Sykuta).

While critics voiced concerns over such platforms erecting barriers to entry, markets are contestable to the extent that incumbents are incentivized to compete (Russell). It is worth noting that certain industries with high barriers to entry or exit, significant sunk costs, and significant cost disadvantages for new entrants (including automobiles, wireless service, and cable networks) have seen their prices decrease substantially relative to inflation over the last 20 years — even as concentration has increased (Russell). Not coincidentally, product innovation in these industries, as in ag-biotech, has been high.

Ultimately, assessing the likely effects of each merger using static measures of market structure is arguably unreliable or irrelevant in dynamic markets with high levels of innovation (Manne).

Regarding patents, critics were skeptical that combining the patent portfolios of the merging companies would offer benefits beyond those arising from cross-licensing, and would serve to raise rivals’ costs (Ghosh). While this may be true in some cases, IP rights are probabilistic, especially in dynamic markets, as Nicolas Petit noted:

(i) There is no certainty that R&D investments will lead to commercially successful applications; (ii) no guarantee that IP rights will resist invalidity proceedings in court; (iii) little safety to competition by other product applications which do not practice the IP but provide substitute functionality; and (iv) no inevitability that the environmental, toxicological and regulatory authorization rights that (often) accompany IP rights will not be cancelled when legal requirements change.

In spite of these uncertainties, deals such as the pending ag-biotech mergers provide managers the opportunity to evaluate and reorganize assets to maximize innovation and return on investment in such a way that would not be possible absent a merger (Sykuta). Neither party would fully place its IP and innovation pipeline on the table otherwise.

For a complete rundown of the arguments both for and against, the full archive of symposium posts from our outstanding and diverse group of scholars, practitioners and other experts is available at this link, and individual posts can be easily accessed by clicking on the authors’ names below.

We’d like to thank all of the participants for their excellent contributions!

In a recent long-form article in the New York Times, reporter Noam Scheiber set out to detail some of the ways Uber (and similar companies, but mainly Uber) are engaged in “an extraordinary experiment in behavioral science to subtly entice an independent work force to maximize its growth.”

That characterization seems innocuous enough, but it is apparent early on that Scheiber’s aim is not only to inform but also, if not primarily, to deride these efforts. The title of the piece, in fact, sets the tone:

How Uber Uses Psychological Tricks to Push Its Drivers’ Buttons

Uber and its relationship with its drivers are variously described by Scheiber in the piece as secretive, coercive, manipulative, dominating, and exploitative, among other things. As Scheiber describes his article, it sets out to reveal how

even as Uber talks up its determination to treat drivers more humanely, it is engaged in an extraordinary behind-the-scenes experiment in behavioral science to manipulate them in the service of its corporate growth — an effort whose dimensions became evident in interviews with several dozen current and former Uber officials, drivers and social scientists, as well as a review of behavioral research.

What’s so galling about the piece is that, if you strip away the biased and frequently misguided framing, it presents a truly engaging picture of some of the ways that Uber sets about solving a massively complex optimization problem, abetted by significant agency costs.

So I did. Strip away the detritus, add essential (but omitted) context, and edit the article to fix the anti-Uber bias, the one-sided presentation, the mischaracterizations, and the fundamentally non-economic presentation of what is, at its core, a fascinating illustration of some basic problems (and solutions) from industrial organization economics. (For what it’s worth, Scheiber should know better. After all, “He holds a master’s degree in economics from the University of Oxford, where he was a Rhodes Scholar, and undergraduate degrees in math and economics from Tulane University.”)

In my retelling, the title becomes:

How Uber Uses Innovative Management Tactics to Incentivize Its Drivers

My transformed version of the piece, with critical commentary in the form of tracked changes to the original, is here (pdf).

It’s a long (and, as I said, fundamentally interesting) piece, with cool interactive graphics, well worth the read (well, at least in my retelling, IMHO). Below is just a taste of the edits and commentary I added.

For example, where Scheiber writes:

Uber exists in a kind of legal and ethical purgatory, however. Because its drivers are independent contractors, they lack most of the protections associated with employment. By mastering their workers’ mental circuitry, Uber and the like may be taking the economy back toward a pre-New Deal era when businesses had enormous power over workers and few checks on their ability to exploit it.

With my commentary (here integrated into final form rather than tracked), that paragraph becomes:

Uber operates under a different set of legal constraints, however, also duly enacted and under which millions of workers have profitably worked for decades. Because its drivers are independent contractors, they receive their compensation largely in dollars rather than government-mandated “benefits” that remove some of the voluntariness from employer/worker relationships. And under rules like mandatory overtime pay, for example, the Uber business model, which is built in part on offering flexible incentives to match supply and demand using prices and compensation, would be next to impossible. It is precisely through appealing to drivers’ self-interest that Uber and the like may be moving the economy forward to a new era when businesses and workers have more flexibility, much to the benefit of all.

Elsewhere, Scheiber’s bias is a bit more subtle, but no less real. Thus, he writes:

As he tried to log off at 7:13 a.m. on New Year’s Day last year, Josh Streeter, then an Uber driver in the Tampa, Fla., area, received a message on the company’s driver app with the headline “Make it to $330.” The text then explained: “You’re $10 away from making $330 in net earnings. Are you sure you want to go offline?” Below were two prompts: “Go offline” and “Keep driving.” The latter was already highlighted.

With my edits and commentary, that paragraph becomes:

As he started the process of logging off at 7:13 a.m. on New Year’s Day last year, Josh Streeter, then an Uber driver in the Tampa, Fla., area, received a message on the company’s driver app with the headline “Make it to $330.” The text then explained: “You’re $10 away from making $330 in net earnings. Are you sure you want to go offline?” Below were two prompts: “Go offline” and “Keep driving.” The latter was already highlighted, but the former was listed first. It’s anyone’s guess whether either characteristic — placement or coloring — had any effect on drivers’ likelihood of clicking one button or the other.

And one last example. Scheiber writes:

Consider an algorithm called forward dispatch — Lyft has a similar one — that dispatches a new ride to a driver before the current one ends. Forward dispatch shortens waiting times for passengers, who may no longer have to wait for a driver 10 minutes away when a second driver is dropping off a passenger two minutes away.

Perhaps no less important, forward dispatch causes drivers to stay on the road substantially longer during busy periods — a key goal for both companies.

Uber and Lyft explain this in essentially the same way. “Drivers keep telling us the worst thing is when they’re idle for a long time,” said Kevin Fan, the director of product at Lyft. “If it’s slow, they’re going to go sign off. We want to make sure they’re constantly busy.”

While this is unquestionably true, there is another way to think of the logic of forward dispatch: It overrides self-control.

* * *

Uber officials say the feature initially produced so many rides at times that drivers began to experience a chronic Netflix ailment — the inability to stop for a bathroom break. Amid the uproar, Uber introduced a pause button.

“Drivers were saying: ‘I can never go offline. I’m on just continuous trips. This is a problem.’ So we redesigned it,” said Maya Choksi, a senior Uber official in charge of building products that help drivers. “In the middle of the trip, you can say, ‘Stop giving me requests.’ So you can have more control over when you want to stop driving.”

It is true that drivers can pause the services’ automatic queuing feature if they need to refill their tanks, or empty them, as the case may be. Yet once they log back in and accept their next ride, the feature kicks in again. To disable it, they would have to pause it every time they picked up a new passenger. By contrast, even Netflix allows users to permanently turn off its automatic queuing feature, known as Post-Play.

This pre-emptive hard-wiring can have a huge influence on behavior, said David Laibson, the chairman of the economics department at Harvard and a leading behavioral economist. Perhaps most notably, as Ms. Rosenblat and Luke Stark observed in an influential paper on these practices, Uber’s app does not let drivers see where a passenger is going before accepting the ride, making it hard to judge how profitable a trip will be.

Here’s how I would recast that, and add some much-needed economics:

Consider an algorithm called forward dispatch — Lyft has a similar one — that dispatches a new ride to a driver before the current one ends. Forward dispatch shortens waiting times for passengers, who may no longer have to wait for a driver 10 minutes away when a second driver is dropping off a passenger two minutes away.

Perhaps no less important, forward dispatch causes drivers to stay on the road substantially longer during busy periods — a key goal for both companies — by giving them more income-earning opportunities.

Uber and Lyft explain this in essentially the same way. “Drivers keep telling us the worst thing is when they’re idle for a long time,” said Kevin Fan, the director of product at Lyft. “If it’s slow, they’re going to go sign off. We want to make sure they’re constantly busy.”

While this is unquestionably true, and seems like another win-win, some critics have tried to paint even this means of satisfying both driver and consumer preferences in a negative light by claiming that the forward dispatch algorithm overrides self-control.

* * *

Uber officials say the feature initially produced so many rides at times that drivers began to experience a chronic Netflix ailment — the inability to stop for a bathroom break. Amid the uproar, Uber introduced a pause button.

“Drivers were saying: ‘I can never go offline. I’m on just continuous trips. This is a problem.’ So we redesigned it,” said Maya Choksi, a senior Uber official in charge of building products that help drivers. “In the middle of the trip, you can say, ‘Stop giving me requests.’ So you can have more control over when you want to stop driving.”

Tweaks like these put paid to the arguments that Uber is simply trying to abuse its drivers. And yet, critics continue to make such claims:

It is true that drivers can pause the services’ automatic queuing feature if they need to refill their tanks, or empty them, as the case may be. Yet once they log back in and accept their next ride, the feature kicks in again. To disable it, they would have to pause it every time they picked up a new passenger. By contrast, even Netflix allows users to permanently turn off its automatic queuing feature, known as Post-Play.

It’s difficult to take seriously claims that Uber “abuses” drivers by setting a default that drivers almost certainly prefer; surely drivers seek out another fare following the last fare more often than they seek out another bathroom break. In any case, the difference between one default and the other is a small change in the number of times drivers might have to push a single button; hardly a huge impediment.

But such claims persist, nevertheless. Setting a trivially different default can have a huge influence on behavior, claims David Laibson, the chairman of the economics department at Harvard and a leading behavioral economist. Perhaps most notably — and to change the subject — as Ms. Rosenblat and Luke Stark observed in an influential paper on these practices, Uber’s app does not let drivers see where a passenger is going before accepting the ride, making it hard to judge how profitable a trip will be. But there are any number of defenses of this practice, from both a driver- and consumer-welfare standpoint. Not least, such disclosure could well create isolated scarcity for a huge range of individual ride requests (as opposed to the general scarcity during a “surge”), leading to longer wait times, the need to adjust prices for consumers on the basis of individual rides, and more intense competition among drivers for the most profitable rides. Given these and other explanations, it is extremely unlikely that the practice is actually aimed at “abusing” drivers.
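(An aside for readers who prefer mechanics to rhetoric: here is a minimal sketch of the kind of matching logic a forward-dispatch feature might use. Everything in it, from the names to the scoring rule, is an assumption for illustration, not Uber’s or Lyft’s actual algorithm.)

```ts
// Forward dispatch, sketched: when a ride request arrives, consider not only
// idle drivers but also drivers about to finish their current trip, and pick
// whoever can reach the rider soonest. Purely illustrative.

interface Driver {
  id: string;
  minutesToRider: number;    // estimated travel time to the new rider
  minutesLeftOnTrip: number; // 0 if the driver is idle
  hasQueuedRide: boolean;    // forward dispatch queues at most one ride ahead
}

function assignRide(drivers: Driver[]): Driver | undefined {
  return drivers
    .filter(d => !d.hasQueuedRide)
    // Effective pickup time = time to finish the current trip (if any)
    // plus travel time from there to the new rider.
    .map(d => ({ d, eta: d.minutesLeftOnTrip + d.minutesToRider }))
    .sort((a, b) => a.eta - b.eta)[0]?.d;
}

// The example from the article: an idle driver 10 minutes away loses to a
// busy driver who will drop off a passenger in 1 minute just 2 minutes from
// the new rider, cutting the passenger's wait from 10 minutes to 3.
const pick = assignRide([
  { id: 'idle-far', minutesToRider: 10, minutesLeftOnTrip: 0, hasQueuedRide: false },
  { id: 'busy-near', minutesToRider: 2, minutesLeftOnTrip: 1, hasQueuedRide: false },
]);
console.log(pick?.id); // "busy-near"
```

Nothing in that logic is sinister; it is a straightforward way to minimize passenger wait times while keeping willing drivers earning.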

As they say, read the whole thing!

John E. Lopatka is A. Robert Noll Distinguished Professor of Law at Penn State Law School

People need to eat. All else equal, the more food that can be produced from an acre of land, the better off they’ll be. Of course, people want to pay as little as possible for their food to boot. At heart, the antitrust analysis of the pending agribusiness mergers requires a simple assessment of their effects on food production and price. But making that assessment raises difficult questions about institutional competence.

Each of the three mergers – Dow/DuPont, ChemChina/Syngenta, and Bayer/Monsanto – involves agricultural products, such as different kinds of seeds, pesticides, and fertilizers. All of these products are inputs in the production of food – the better and cheaper these products are, the more food is produced. The array of products these firms produce invites potentially controversial market definition determinations, but these determinations are standard fare in antitrust law and economics, and conventional analysis handles them tolerably well. Each merger appears to pose overlaps in some product markets, though they seem to be relatively small parts of the firms’ businesses. Traditional merger analysis would examine these markets in properly defined geographic markets, some of which are likely international. The concern in these markets seems to be coordinated interaction, and the analysis of potential anticompetitive coordination would thus focus on concentration and entry barriers. Much could be said about the assumption that product markets perform less competitively as concentration increases, but that is an issue for others or at least another day.

More importantly for my purposes here, to the extent that any of these mergers creates concentration in a market that is competitively problematic and not likely to be cured by new entry, a fix is fairly easy. These are mergers in which asset divestiture is feasible, in which the parties seem willing to divest assets, and in which interested and qualified asset buyers are emerging. To be sure, firms may be willing to divest assets at substantial cost to appease regulators even when competitive problems are illusory, and the cost of a cure in search of an illness is a real social cost. But my concern lies elsewhere.

The parties in each of these mergers have touted innovation as a beneficial byproduct of the deal if not its raison d’être. Innovation effects have made their way into merger analysis, but not smoothly. Innovation can be a kind of efficiency, distinguished from most other efficiencies by its dynamic nature. The benefits of using a plant to its capacity are immediate: costs and prices decrease now. Any benefits of innovation will necessarily be experienced in the future, and the passage of time makes benefits both less certain and less valuable, as people prefer consumption now rather than later. The parties to these mergers in their public statements, to the extent they intend to address antitrust concerns, are implicitly asserting innovation as a defense, a kind of efficiency defense. They do not concede, of course, that their deals will be anticompetitive in any product market. But for antitrust purposes, an accelerated pace of innovation is irrelevant unless the merger appears to threaten competition.

Recognizing increased innovation as a merger defense raises all of the issues that any efficiencies defense raises, and then some. First, can efficiencies be identified?  For instance, patent portfolios can be combined, and the integration of patent rights can lower transaction costs relative to a contractual allocation of rights just as any integration can. In theory, avenues of productive research may not even be recognized until the firms’ intellectual property is combined. A merger may eliminate redundant research efforts, but identifying that which is truly duplicative is often not easy. In all, identifying efficiencies related to research and development is likely to be more difficult than identifying many other kinds of efficiencies. Second, are the efficiencies merger-specific?  The less clearly research and development efficiencies can be identified, the weaker is the claim that they cannot be achieved absent the merger. But in this respect, innovation efficiencies can be more important than most other kinds of efficiencies, because intellectual property sometimes cannot be duplicated as easily as physical property can. Third, can innovation efficiencies be quantified?  If innovation is expected to take the form of an entirely new product, such as a new pesticide, estimating its value is inherently speculative. Fourth, when will efficiencies save a merger that would otherwise be condemned?  An efficiencies defense implies a comparison between the expected harm a merger will cause and the expected benefits it will produce. Arguably those benefits have to be realized by consumers to count at all, but, in any event, a comparison between expected immediate losses of customers in an input market and expected future gains from innovation may be nearly impossible to make. The Merger Guidelines acknowledge that innovation efficiencies can be considered and note many of the concerns just listed. The takeaway is a healthy skepticism of an innovation defense. The defense should generally fail unless the model of anticompetitive harm in product (or service) markets is dubious or the efficiency claim is unusually specific and the likely benefits substantial.

Innovation can enter merger analysis in an even more troublesome way, however: as a club rather than a shield. The Merger Guidelines contemplate that a merger may have unilateral anticompetitive effects if it results in a “reduced incentive to continue with an existing product-development effort or reduced incentive to initiate development of new products.”  The stark case is one in which a merger poses no competitive problem in a product market but would allegedly reduce innovation competition. The best evidence that the elimination of innovation competition might be a reason to oppose one or more of the agribusiness mergers is the recent decision of the European Commission approving the Dow/DuPont merger, subject to various asset divestitures. The Commission, echoing the Guidelines, concluded that the merger would significantly reduce “innovation competition for pesticides” by “[r]emoving the parties’ incentives to continue to pursue ongoing parallel innovation efforts” and by “[r]emoving the parties’ incentives to develop and bring to market new pesticides.”  The agreed upon fix requires DuPont to divest most of its research and development organization.

Enforcement claims that a merger will restrict innovation competition should be met with every bit the skepticism due defense claims that innovation efficiencies save a merger. There is nothing inconsistent in this symmetry. The benefits of innovation, though potentially immense – large enough to dwarf the immediate allocative harm from a lessening of competition in product markets – are speculative. In discounted utility terms, the expected harm will usually exceed the expected benefits, given our limited ability to predict the future. But the potential gains from innovation are immense, and unless we are confident that a merger will reduce innovation, antitrust law should not intervene. We rarely are; at least, we rarely should be.

As Geoffrey Manne points out, we still do not know a great deal about the optimal market structure for innovation. Evidence suggests that moderate concentration is most conducive to innovation, but it is not overwhelming, and more importantly no one is suggesting a merger policy that single-mindedly pursues a particular market structure. An examination of incentives to continue existing product development projects or to initiate projects to develop new products is superficially appealing, but its practical utility is elusive. Any firm has an incentive to develop products that increase demand. The Merger Guidelines suggest that a merger will reduce incentives to innovate if the introduction of a new product by one merging firm will capture substantial revenues from the other. The E.C. likely had this effect in mind in concluding that the merged entity would have “lower incentives . . . to innovate than Dow and DuPont separately.”  The Commission also observed that the merged firm would have “a lower ability to innovate” than the two firms separately, but just how a combination of research assets could reduce capability is utterly obscure.

In any event, whether a merger reduces incentives depends not only on the welfare of the merging parties but also on the development activities of actual and would-be competitors. A merged firm cannot afford to have its revenue captured by a new product introduced by a competitor. Of course, innovation by competitors will not spur a firm to develop new products if those competitors do not have the resources needed to innovate. One can imagine circumstances in which resources necessary to innovate in a product market are highly specialized; more realistically, the lack of specialized resources will decrease the pace of innovation. But the concept of specialized resources cannot mean resources a firm has developed that are conducive to innovation and that could be, but have not yet been, developed by other firms. It cannot simply mean a head start, unless it is very long indeed. If the first two firms in an industry build a plant, the fact that a new entrant would have to build a plant is not a sufficient reason to prevent the first two from merging. In any event, what resources are essential to innovation in an area can be difficult to determine.

Assuming essential resources can be identified, how many firms need to have them to create a competitive environment? The Guidelines place the number at “very small” plus one. Elsewhere, the federal antitrust agencies suggest that four firms other than the merged firm are sufficient to maintain innovation competition. We have models, whatever their limitations, that predict price effects in oligopolies. The Guidelines are based on them. But determining the number of firms necessary for competitive innovation is another matter. Maybe two is enough. We know for sure that innovation competition is non-existent if only one firm has the capacity to innovate, but not much else. We know that duplicative research efforts can be wasteful. If two firms would each spend $1 million to arrive at the same place, a merged firm might be able to invest $2 million and go twice as far, or reach the same place at half the total cost. This is only to say that a merger can increase innovation efficiency, a possibility that is not likely to justify an otherwise anticompetitive merger but should usually protect from condemnation a merger that is not otherwise anticompetitive.

In the Dow/DuPont merger, the Commission found “specific evidence that the merged entity would have cut back on the amount they spent on developing innovative products.”  Executives of the two firms stated that they expected to reduce research and development spending by around $300 million. But a reduction in spending does not tell us whether innovation will suffer. The issue is innovation efficiency. If the two firms spent, say, $1 billion each on research, $300 million of which was duplicative of the other firm’s research, the merged firm could invest $1.7 billion without reducing productive effort. The Commission complained that the merger would reduce from five to four the number of firms that are “globally active throughout the entire R&D process.”  As noted above, maybe four firms competing are enough. We don’t know. But the Commission also discounts firms with “more limited R&D capabilities,” and the importance to successful innovation of multi-level integration in this industry is not clear.

When a merger is challenged because of an adverse effect on innovation competition, a fix can be difficult. Forced licensing might work, but that assumes that the relevant resource necessary to carry on research and development is intellectual property. More may be required. If tangible assets related to research and development are required, a divestiture might cripple the merged firm. The Commission remedy was to require the merged firm to divest “DuPont’s global R&D organization” that is related to the product operations that must be divested. The firm is permitted to retain “a few limited [R&D] assets that support the part of DuPont’s pesticide business” that is not being divested. In this case, such a divestiture may or may not hobble the merged firm, depending on whether the divested assets would have contributed to the research and development efforts that it will continue to pursue. That the merged firm was willing to accept the research and development divestiture to secure Commission approval does not mean that the divestiture will do no harm to the firm’s continuing research and development activities. Moreover, some product markets at issue in this merger are geographically limited, whereas the likely benefits of innovation are largely international. The implication is that increased concentration in product markets can be avoided by divesting assets to other large agribusinesses that do not operate in the relevant geographic market. But if the Commission insists on preserving five integrated firms active in global research and development activities, DuPont’s research and development activities cannot be divested to one of the other major players, which the Commission identifies as BASF, Bayer, and Syngenta, or firms with which any of them are attempting to merge, namely Monsanto and ChemChina. These are the five firms, of course, that are particularly likely to be interested buyers.

Innovation is important. No one disagrees. But the role of competition in stimulating innovation is not well understood. Except in unusual cases, antitrust institutions are ill-equipped either to recognize innovation efficiencies that save a merger threatening competition in product markets or to condemn mergers that threaten only innovation competition. Indeed, despite maintaining their prerogative to challenge mergers solely on the ground of a reduction in innovation competition, the federal agencies have in fact complained about an adverse effect on innovation in cases that also raise competitive issues in product markets. Innovation is at the heart of the pending agribusiness mergers. How regulators and courts analyze innovation in these cases will say something about whether they perceive their limitations.