
I remain deeply skeptical of any antitrust challenge to the AT&T/Time Warner merger.  Vertical mergers like this one between a content producer and a distributor are usually efficiency-enhancing.  The theories of anticompetitive harm here rely on a number of implausible assumptions — e.g., that the combined company would raise content prices (currently set at profit-maximizing levels so that any price increase would reduce profits on content) in order to impair rivals in the distribution market and enhance profits there.  So I’m troubled that DOJ seems poised to challenge the merger.

I am, however, heartened — I think — by a speech Assistant Attorney General Makan Delrahim recently delivered at the ABA’s Antitrust Fall Forum. The crux of the speech, which is worth reading in its entirety, was that behavioral remedies — effectively having the government regulate a merged company’s day-to-day business decisions — are almost always inappropriate in merger challenges.

That used to be DOJ’s official position.  The Antitrust Division’s 2004 Remedies Guide proclaimed that “[s]tructural remedies are preferred to conduct remedies in merger cases because they are relatively clean and certain, and generally avoid costly government entanglement in the market.”

During the Obama administration, DOJ changed its tune.  Its 2011 Remedies Guide removed the statement quoted above as well as an assertion that behavioral remedies would be appropriate only in limited circumstances.  The 2011 Guide instead remained neutral on the choice between structural and conduct remedies, explaining that “[i]n certain factual circumstances, structural relief may be the best choice to preserve competition.  In a different set of circumstances, behavioral relief may be the best choice.”  The 2011 Guide also deleted the older Guide’s discussion of the limitations of conduct remedies.

Not surprisingly in light of the altered guidance, several of the Obama DOJ’s merger challenges—Ticketmaster/Live Nation, Comcast/NBC Universal, and Google/ITA Software, for example—resulted in settlements involving detailed and significant regulation of the combined firm’s conduct.  The settlements included mandatory licensing requirements, price regulation, compulsory arbitration of pricing disputes with recipients of mandated licenses, obligations to continue to develop and support certain products, the establishment of informational firewalls between divisions of the merged companies, prohibitions on price and service discrimination among customers, and various reporting requirements.

Settlements of this sort move antitrust a long way from the state of affairs described by then-professor Stephen Breyer, who wrote in his classic book Regulation and Its Reform:

[I]n principle the antitrust laws differ from classical regulation both in their aims and in their methods.  The antitrust laws seek to create or maintain the conditions of a competitive marketplace rather than replicate the results of competition or correct for the defects of competitive markets.  In doing so, they act negatively, through a few highly general provisions prohibiting certain forms of private conduct.  They do not affirmatively order firms to behave in specified ways; for the most part, they tell private firms what not to do . . . .  Only rarely do the antitrust enforcement agencies create the detailed web of affirmative legal obligations that characterizes classical regulation.

I am pleased to see Delrahim signaling a move away from behavioral remedies.  As Alden Abbott and I explained in our article, Recognizing the Limits of Antitrust: The Roberts Court Versus the Enforcement Agencies,

[C]onduct remedies present at least four difficulties from a limits of antitrust perspective.  First, they may thwart procompetitive conduct by the regulated firm.  When it comes to regulating how a firm interacts with its customers and rivals, it is extremely difficult to craft rules that will ban the bad without also precluding the good.  For example, requiring a merged firm to charge all customers the same price, a commonly imposed conduct remedy, may make it hard for the firm to serve clients who impose higher costs and may thwart price discrimination that actually enhances overall market output.  Second, conduct remedies entail significant direct implementation costs.  They divert enforcers’ attention away from ferreting out anticompetitive conduct elsewhere in the economy and require managers of regulated firms to focus on appeasing regulators rather than on meeting their customers’ desires.  Third, conduct remedies tend to grow stale.  Because competitive conditions are constantly changing, a conduct remedy that seems sensible when initially crafted may soon turn out to preclude beneficial business behavior.  Finally, by transforming antitrust enforcers into regulatory agencies, conduct remedies invite wasteful lobbying and, ultimately, destructive agency capture.

The first three of these difficulties are really aspects of F.A. Hayek’s famous knowledge problem.  I was thus particularly heartened by this part of Delrahim’s speech:

The economic liberty approach to industrial organization is also good economic policy.  F. A. Hayek won the 1974 Nobel Prize in economics for his work on the problems of central planning and the benefits of a decentralized free market system.  The price system of the free market, he explained, operates as a mechanism for communicating disaggregated information.  “[T]he ultimate decisions must be left to the people who are familiar with the[] circumstances.”  Regulation, I humbly submit in contrast, involves an arbiter unfamiliar with the circumstances that cannot possibly account for the wealth of information and dynamism that the free market incorporates.

So why the reservation in my enthusiasm?  Because eschewing conduct remedies may result in barring procompetitive mergers that might have been allowed with behavioral restraints.  If antitrust enforcers are going to avoid conduct remedies on Hayekian and Public Choice grounds, then they should challenge a merger only if they are pretty darn sure it presents a substantial threat to competition.

Delrahim appears to understand the high stakes of a “no behavioral remedies” approach to merger review:  “To be crystal clear, [having a strong presumption against conduct remedies] cuts both ways—if a merger is illegal, we should only accept a clean and complete solution, but if the merger is legal we should not impose behavioral conditions just because we can do so to expand our power and because the merging parties are willing to agree to get their merger through.”

The big question is whether the Trump DOJ will refrain from challenging mergers that do not pose a clear and significant threat to competition and consumer welfare.  On that matter, the jury is out.

Last week the editorial board of the Washington Post penned an excellent editorial responding to the European Commission’s announcement of its decision in its Google Shopping investigation. Here’s the key language from the editorial:

Whether the demise of any of [the complaining comparison shopping sites] is specifically traceable to Google, however, is not so clear. Also unclear is the aggregate harm from Google’s practices to consumers, as opposed to the unlucky companies. Birkenstock-seekers may well prefer to see a Google-generated list of vendors first, instead of clicking around to other sites…. Those who aren’t happy anyway have other options. Indeed, the rise of comparison shopping on giants such as Amazon and eBay makes concerns that Google might exercise untrammeled power over e-commerce seem, well, a bit dated…. Who knows? In a few years we might be talking about how Facebook leveraged its 2 billion users to disrupt the whole space.

That’s actually a pretty thorough, if succinct, summary of the basic problems with the Commission’s case (based on its PR and Factsheet, at least; it hasn’t released the full decision yet).

I’ll have more to say on the decision in due course, but for now I want to elaborate on two of the points raised by the WaPo editorial board, both in service of its crucial rejoinder to the Commission that “Also unclear is the aggregate harm from Google’s practices to consumers, as opposed to the unlucky companies.”

First, the WaPo editorial board points out that:

Birkenstock-seekers may well prefer to see a Google-generated list of vendors first, instead of clicking around to other sites.

It is undoubtedly true that users “may well prefer to see a Google-generated list of vendors first.” That preference is also crucial to understanding the changes in Google’s search results page that have given rise to the current raft of complaints.

As I noted in a Wall Street Journal op-ed two years ago:

It’s a mistake to consider “general search” and “comparison shopping” or “product search” to be distinct markets.

From the moment it was technologically feasible to do so, Google has been adapting its traditional search results—that familiar but long since vanished page of 10 blue links—to offer more specialized answers to users’ queries. Product search, which is what is at issue in the EU complaint, is the next iteration in this trend.

Internet users today seek information from myriad sources: Informational sites (Wikipedia and the Internet Movie Database); review sites (Yelp and TripAdvisor); retail sites (Amazon and eBay); and social-media sites (Facebook and Twitter). What do these sites have in common? They prioritize certain types of data over others to improve the relevance of the information they provide.

“Prioritization” of Google’s own shopping results, however, is the core problem for the Commission:

Google has systematically given prominent placement to its own comparison shopping service: when a consumer enters a query into the Google search engine in relation to which Google’s comparison shopping service wants to show results, these are displayed at or near the top of the search results. (Emphasis in original).

But this sort of prioritization is the norm for all search, social media, e-commerce and similar platforms. And this shouldn’t be a surprise: The value of these platforms to the user is dependent upon their ability to sort the wheat from the chaff of the now immense amount of information coursing about the Web.

As my colleagues and I noted in a paper responding to a methodologically questionable report by Tim Wu and Yelp leveling analogous “search bias” charges in the context of local search results:

Google is a vertically integrated company that offers general search, but also a host of other products…. With its well-developed algorithm and wide range of products, it is hardly surprising that Google can provide not only direct answers to factual questions, but also a wide range of its own products and services that meet users’ needs. If consumers choose Google not randomly, but precisely because they seek to take advantage of the direct answers and other options that Google can provide, then removing the sort of “bias” alleged by [complainants] would affirmatively hurt, not help, these users. (Emphasis added).

And as Josh Wright noted in an earlier paper responding to yet another set of such “search bias” charges (in that case leveled in a similarly methodologically questionable report by Benjamin Edelman and Benjamin Lockwood):

[I]t is critical to recognize that bias alone is not evidence of competitive harm and it must be evaluated in the appropriate antitrust economic context of competition and consumers, rather [than] individual competitors and websites. Edelman & Lockwood’s analysis provides a useful starting point for describing how search engines differ in their referrals to their own content. However, it is not useful from an antitrust policy perspective because it erroneously—and contrary to economic theory and evidence—presumes natural and procompetitive product differentiation in search rankings to be inherently harmful. (Emphasis added).

We’ll have to see what kind of analysis the Commission relies upon in its decision to reach its conclusion that prioritization is an antitrust problem, but there is reason to be skeptical that it will turn out to be compelling. The Commission states in its PR that:

The evidence shows that consumers click far more often on results that are more visible, i.e. the results appearing higher up in Google’s search results. Even on a desktop, the ten highest-ranking generic search results on page 1 together generally receive approximately 95% of all clicks on generic search results (with the top result receiving about 35% of all the clicks). The first result on page 2 of Google’s generic search results receives only about 1% of all clicks. This cannot just be explained by the fact that the first result is more relevant, because evidence also shows that moving the first result to the third rank leads to a reduction in the number of clicks by about 50%. The effects on mobile devices are even more pronounced given the much smaller screen size.

This means that by giving prominent placement only to its own comparison shopping service and by demoting competitors, Google has given its own comparison shopping service a significant advantage compared to rivals. (Emphasis added).

Whatever truth there is in the claim that placement matters more than relevance in influencing user behavior, the evidence the Commission cites to demonstrate it doesn’t seem applicable to what’s happening on Google’s search results page now.

Most crucially, the evidence offered by the Commission refers only to how placement affects clicks on “generic search results” and glosses over the fact that the “prominent placement” of Google’s “results” is not only a difference in position but also in the type of result offered.

Google Shopping results (like many of its other “vertical results” and direct answers) are very different from the 10 blue links of old. These “universal search” results are, for one thing, actual answers rather than merely links to other sites. They are also more visually rich and more attractively and clearly displayed.

Ironically, Tim Wu and Yelp use the claim that users click less often on Google’s universal search results to support their contention that increased relevance doesn’t explain Google’s prioritization of its own content. Yet, as we note in our response to their study:

[I]f a consumer is using a search engine in order to find a direct answer to a query rather than a link to another site to answer it, click-through would actually represent a decrease in consumer welfare, not an increase.

In fact, the study fails to incorporate this dynamic even though it is precisely what the authors claim the study is measuring.

Further, as the WaPo editorial intimates, these universal search results (including Google Shopping results) are quite plausibly more valuable to users. As even Tim Wu and Yelp note:

No one truly disagrees that universal search, in concept, can be an important innovation that can serve consumers.

Google sees it exactly this way, of course. Here’s Tim Wu and Yelp again:

According to Google, a principal difference between the earlier cases and its current conduct is that universal search represents a pro-competitive, user-serving innovation. By deploying universal search, Google argues, it has made search better. As Eric Schmidt argues, “if we know the answer it is better for us to answer that question so [the user] doesn’t have to click anywhere, and in that sense we… use data sources that are our own because we can’t engineer it any other way.”

Of course, in this case, one would expect fewer clicks to correlate with higher value to users — precisely the opposite of the claim made by Tim Wu and Yelp, which is the surest sign that their study is faulty.

But the Commission, at least according to the evidence cited in its PR, doesn’t even seem to measure the relative value of the very different presentations of information at all, instead resting on assertions rooted in the irrelevant difference in user propensity to click on generic (10 blue links) search results depending on placement.

Add to this Pinar Akman’s important point that Google Shopping “results” aren’t necessarily search results at all, but paid advertising:

[O]nce one appreciates the fact that Google’s shopping results are simply ads for products and Google treats all ads with the same ad-relevant algorithm and all organic results with the same organic-relevant algorithm, the Commission’s order becomes impossible to comprehend. Is the Commission imposing on Google a duty to treat non-sponsored results in the same way that it treats sponsored results? If so, does this not provide an unfair advantage to comparison shopping sites over, for example, Google’s advertising partners as well as over Amazon, eBay, various retailers, etc…?

Randy Picker also picks up on this point:

But those Google shopping boxes are ads, Picker told me. “I can’t imagine what they’re thinking,” he said. “Google is in the advertising business. That’s how it makes its money. It has no obligation to put other people’s ads on its website.”

The bottom line here is that the WaPo editorial board does a better job characterizing the actual, relevant market dynamics in a single sentence than the Commission seems to have done in its lengthy releases summarizing its decision following seven full years of investigation.

The second point made by the WaPo editorial board to which I want to draw attention is equally important:

Those who aren’t happy anyway have other options. Indeed, the rise of comparison shopping on giants such as Amazon and eBay makes concerns that Google might exercise untrammeled power over e-commerce seem, well, a bit dated…. Who knows? In a few years we might be talking about how Facebook leveraged its 2 billion users to disrupt the whole space.

The Commission dismisses this argument in its Factsheet:

The Commission Decision concerns the effect of Google’s practices on comparison shopping markets. These offer a different service to merchant platforms, such as Amazon and eBay. Comparison shopping services offer a tool for consumers to compare products and prices online and find deals from online retailers of all types. By contrast, they do not offer the possibility for products to be bought on their site, which is precisely the aim of merchant platforms. Google’s own commercial behaviour reflects these differences – merchant platforms are eligible to appear in Google Shopping whereas rival comparison shopping services are not.

But the reality is that “comparison shopping,” just like “general search,” is just one technology among many for serving information and ads to consumers online. Defining the relevant market or limiting the definition of competition in terms of the particular mechanism that Google (or Foundem, or Amazon, or Facebook…) happens to use doesn’t reflect the extent of substitutability between these different mechanisms.

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive. The same goes for comparison shopping.

And the fact that Amazon and eBay “offer the possibility for products to be bought on their site” doesn’t take away from the fact that they also “offer a tool for consumers to compare products and prices online and find deals from online retailers of all types.” Not only do these sites contain enormous amounts of valuable (and well-presented) information about products, including product comparisons and consumer reviews, but they also actually offer comparisons among retailers. In fact, fifty percent of the items sold through Amazon’s platform are sold by third-party retailers — the same sort of retailers that might also show up on a comparison shopping site.

More importantly, though, as the WaPo editorial rightly notes, “[t]hose who aren’t happy anyway have other options.” Google just isn’t the indispensable gateway to the Internet (and definitely not to shopping on the Internet) that the Commission seems to think it is.

Today over half of product searches in the US start on Amazon. The majority of web page referrals come from Facebook. Yelp’s most engaged users now access it via its app (which has seen more than 3x growth in the past five years). And a staggering 40 percent of mobile browsing on both Android and iOS now takes place inside the Facebook app.

Then there are “closed” platforms like the iTunes store and innumerable other apps that handle copious search traffic (including shopping-related traffic) but that, apparently, don’t figure in the Commission’s analysis.

In fact, billions of users reach millions of companies every day through direct browser navigation, social media, apps, email links, review sites, blogs, and countless other means — all without once touching Google.com. So-called “dark social” interactions (email, text messages, and IMs) drive huge amounts of some of the most valuable traffic on the Internet, in fact.

All of this, in turn, has led to a competitive scramble to roll out completely new technologies to meet consumers’ informational (and merchants’ advertising) needs. The already-arriving swarm of VR, chatbots, digital assistants, smart-home devices, and more will offer even more interfaces besides Google through which consumers can reach their favorite online destinations.

The point is this: Google’s competitors, for all their complaints that the world is evolving around them, don’t need to rely on Google. That they may choose to do so does not saddle Google with an obligation to ensure that they can always do so.

Antitrust laws — in Europe, no less than in the US — don’t require Google or any other firm to make life easier for competitors. That’s especially true when doing so would come at the cost of consumer-welfare-enhancing innovations. The Commission doesn’t seem to have grasped this fundamental point, however.

The WaPo editorial board gets it, though:

The immense size and power of all Internet giants are a legitimate focus for the antitrust authorities on both sides of the Atlantic. Brussels vs. Google, however, seems to be a case of punishment without crime.

Regardless of the merits and soundness (or lack thereof) of this week’s European Commission Decision in the Google Shopping case — one cannot assess this until we have the text of the decision — two comments really struck me during the press conference.

First, it was said that Google’s conduct had essentially reduced innovation. If I heard correctly, this is a remarkable statement. In 2016, another official EU service published statistics that described Alphabet as having increased its R&D spending by 22% and ranked it as the world’s 4th-largest R&D investor. Sure, it can always be better. And sure, this does not excuse everything. But still. The press-conference language on incentives to innovate was a bit of an oversell, to say the least.

Second, the Commission views this decision as a “precedent” or as a “framework” that will inform the way dominant Internet platforms should display, intermediate and market their services and those of their competitors. This may fuel additional complaints by other vertical search rivals against (i) Google in relation to other product lines, but also against (ii) other large platform players.

Beyond this, the Commission’s approach raises a gazillion questions of law and economics. Pending the disclosure of the economic evidence in the published decision, let me share some thoughts on a few (arbitrarily) selected legal issues.

First, the Commission has drawn a lesson from the Microsoft remedy quagmire: it refrains from using a trustee to ensure compliance with the decision. This had been a bone of contention in the 2007 Microsoft appeal. Readers will recall that the Commission had required Microsoft to appoint a monitoring trustee, who was supposed to advise on possible infringements in the implementation of the decision. On appeal, the Court eventually held that the Commission was solely responsible for this and could not delegate those powers. Sure, the Commission could “retai[n] its own external expert to provide advice when it investigates the implementation of the remedies.” But no more than that.

Second, we learn that the Commission is no longer in the business of software design. Recall the failed untying of WMP and Windows — Windows Naked sold only 11,787 copies, likely bought by tech bootleggers eager to own the first piece of software ever designed by antitrust officials — or the browser “Choice Screen” compliance saga, which eventually culminated in a €561 million fine. None of that is found here. The Commission leaves remedial design to the abstract concept of “equal treatment”.[1] This, certainly, is a (relatively) commendable approach, and one that could inspire remedies in other unilateral conduct cases, in particular exploitative conduct cases, where pricing remedies are costly, impractical, and consequently inefficient.

On the other hand, readers will also not fail to see the corollary implication of “equal treatment”: search neutrality could actually cut both ways, and lead to a lawful degradation in consumer welfare if Google were ever to decide to abandon rich format displays for both its own shopping services and those of rivals.

Third, neither big data nor algorithmic design is directly vilified in the case (“The Commission Decision does not object to the design of Google’s generic search algorithms or to demotions as such, nor to the way that Google displays or organises its search results pages”). Instead, the Commission objects to the selective application of Google’s generic search algorithms to its own products. This is an interesting, and subtle, clarification given all the coverage this topic has attracted in the recent antitrust literature. We are in fact very close to a run-of-the-mill claim of disguised market manipulation, not one causally related to data or algorithmic technology.

Fourth, Google said it contemplated a possible appeal of the decision. Now, here’s a challenging question: can an antitrust defendant effectively exercise its right to judicial review of an administrative agency (and more generally its rights of defense), when it operates under the threat of antitrust sanctions in ongoing parallel cases investigated by the same agency (i.e., the antitrust inquiries related to Android and Ads)? This question cuts further than the Google Shopping case. Say firm A contemplates a merger with firm B in market X, while it is at the same time subject to antitrust investigations in market Z. And assume that X and Z are neither substitutes nor complements so there is little competitive relationship between both products. Can the Commission leverage ongoing antitrust investigations in market Z to extract merger concessions in market X? Perhaps more to the point, can the firm interact with the Commission as if the investigations are completely distinct, or does it have to play a more nuanced game and consider the ramifications of its interactions with the Commission in both markets?

Fifth, as to the odds of a possible appeal, I don’t believe that arguments on the economic evidence or the legal theory of liability will ever be successful before the General Court of the EU. The law and doctrine in unilateral conduct cases are disturbingly — and almost irrationally — severe. As I have noted elsewhere, the bottom line in the EU case law on unilateral conduct is to treat the genuine requirement of “harm to competition” as a rhetorical question, not an empirical one. In EU unilateral conduct law, exclusion of any and every firm is a per se concern, regardless of evidence of efficiency, entry, or rivalry.

In turn, I tend to think that Google has a stronger case on procedural grounds, having been left with (i) the expectation of a settlement (it played ball three times by making proposals); (ii) a corollary expectation of the absence of a fine (settlement discussions are not appropriate for cases that could end with fines); and (iii) seven long years under an investigatory cloud. We know from the past that EU judges like procedural issues but are comparatively less keen to debate the substance of the law in unilateral conduct cases. This case could thus become a test case for setting boundaries on how freely the Commission can U-turn on a case (the Commissioner said the Commission would “take the case forward in a different way”).

Last October 26, Heritage scholar James Gattuso and I published an essay in The Daily Signal, explaining that the proposed vertical merger (a merger between firms at different stages of the distribution chain) of AT&T and Time Warner (currently undergoing Justice Department antitrust review) may have the potential to bestow substantial benefits on consumers – and that congressional calls to block it, uninformed by fact-based economic analysis, could prove detrimental to consumer welfare.  We explained:

[E]ven though the proposed union of AT&T and Time Warner is not guaranteed to benefit shareholders or consumers, that is no reason for the government to block it. Absent a strong showing of likely harm to the competitive process (which does not appear to be the case here), the government has no business interfering in corporate acquisitions.  Market forces should be allowed to sort out the welfare-enhancing transactional sheep from the unprofitable goats.  Shareholders are in a position to “vote with their feet” and reward or punish a merged company, based on information generated in the marketplace. 

[M]arket transactors are better placed and better incentivized than bureaucrats to uncover and apply the information needed to yield an efficient allocation of resources.

In short, government meddling in mergers in the absence of likely market failure (and of reason to believe that the government’s actions will yield results superior to those of an imperfect market) is a recipe for a diminution in—not an improvement in—consumer welfare.

Furthermore, by arbitrarily intervening in proposed mergers that are not anti-competitive, government disincentivizes firms from acting boldly to seek out new opportunities to create wealth and enhance the welfare of consumers.

What’s worse, the knowledge that government may intervene in mergers without regard to their likely competitive effects will prompt wasteful expenditures by special interests opposing particular transactions, causing a further diminution in economic welfare.

Unfortunately, the congressional critics of this deal are still out there, louder than ever, and once again need to be reminded of the dangers of unwarranted antitrust interventions – and of the problem with “big is bad” rhetoric.  Scalia Law School Professor (and former Federal Trade Commissioner) Joshua Wright ably deconstructs the problems with the latest Capitol Hill criticisms of this proposed merger, set forth in a June 21 letter to the Justice Department from eleven U.S. Senators (including Elizabeth Warren, Al Franken, and Bernie Sanders).  As Professor Wright explains in a June 26 article published by The Hill:

Over the past several decades, there has been resounding and bipartisan agreement — amongst mainstream antitrust economists, practitioners, enforcement agencies, and even politicians — that while mergers between vertically aligned companies, like AT&T and Time Warner, can in rare circumstances harm competition, they usually make consumers better off. The opposition letter is a call to disrupt that consensus with a “new” view that vertical mergers are presumptively a bad deal for consumers and violate the antitrust laws.

The call for an antitrust revolution with respect to vertical mergers should not go unanswered. Revolution actually overstates things. The “new” antitrust is really a thinly veiled attempt to return to the antitrust approach of the 1960s where everything “big” was bad and virtually all deals, vertical ones included, violated the antitrust laws. That approach gained traction in part because it is easy to develop supporting rhetoric that is inflammatory and easily digestible. . . .

[However,] [a]s a matter of fact, the overwhelming weight of economic analysis and empirical evidence serves as a much-needed dose of cold water for the fiery rhetoric in the opposition letter and the commonly held intuition that all mergers between big firms make consumers worse off. . . .

[C]onsider the conclusion of a widely cited summary of dozens of studies authored by Francine LaFontaine and Margaret Slade, two well-respected industrial organization economists (one of whom served as director of the U.S. Federal Trade Commission’s bureau of economics during the Obama administration). It found that “consumers are often worse off when governments require vertical separation in markets where firms would have chosen otherwise.” Or consider the conclusion of four former enforcement agency economists reviewing the same body of evidence that “there is a paucity of support for the proposition that vertical restraints [or] vertical integration are likely to harm consumers.”

This evidence by no means suggests vertical mergers are incapable of harming consumers or violating the antitrust laws. The data do suggest [that] an evidence-based antitrust enforcement approach aimed at protecting consumers will not presume that they are harmful without careful, rigorous, and objective analysis. Antitrust analysis is — or at least should be — a fact-specific exercise. Weighing concrete economic evidence is critical when assessing mergers, particularly when assessing vertical mergers where procompetitive virtues are almost always present. . . .

The economic and legal framework for analyzing vertical mergers is well understood by the U.S. Department of Justice’s antitrust division and its staff of expert lawyers and economists. The antitrust division has not hesitated to determine an appropriate remedy in the rare instance where a vertical merger has been found likely to harm competition. The [Senators’] opposition letter is correct that a careful and rigorous analysis of the proposed acquisition is called for — as is the case with all mergers. That review process should, however, be guided by careful and objective analysis and not the fiery political rhetoric [of the Senators’ letter].

Under the leadership of soon-to-be U.S. Assistant Attorney General Makan Delrahim, an experienced antitrust lawyer and antitrust enforcement agency veteran, the Justice Department antitrust division staff will be empowered to conduct precisely that type of analysis and reach a decision that best protects competition and consumers.

Professor Wright’s excellent essay merits being read in full.

Today the International Center for Law & Economics (ICLE) Antitrust and Consumer Protection Research Program released a new white paper by Geoffrey A. Manne and Allen Gibby entitled:

A Brief Assessment of the Procompetitive Effects of Organizational Restructuring in the Ag-Biotech Industry

Over the past two decades, rapid technological innovation has transformed the industrial organization of the ag-biotech industry. These developments have contributed to an impressive increase in crop yields, a dramatic reduction in chemical pesticide use, and a substantial increase in farm profitability.

One of the most striking characteristics of this organizational shift has been a steady increase in consolidation. The recent announcements of mergers between Dow and DuPont, ChemChina and Syngenta, and Bayer and Monsanto suggest that these trends are continuing in response to new market conditions and a marked uptick in scientific and technological advances.

Regulators and industry watchers are often concerned that increased consolidation will lead to reduced innovation, and a greater incentive and ability for the largest firms to foreclose competition and raise prices. But ICLE’s examination of the underlying competitive dynamics in the ag-biotech industry suggests that such concerns are likely unfounded.

In fact, R&D spending within the seeds and traits industry increased nearly 773% between 1995 and 2015 (from roughly $507 million to $4.4 billion), while the combined market share of the six largest companies in the segment increased by more than 550% (from about 10% to over 65%) during the same period.

Firms today are consolidating in order to innovate and remain competitive in an industry replete with new entrants and rapidly evolving technological and scientific developments.

According to ICLE’s analysis, critics have unduly focused on the potential harms from increased integration, without properly accounting for the potential procompetitive effects. Our brief white paper highlights these benefits and suggests that a more nuanced and restrained approach to enforcement is warranted.

Our analysis suggests that, as in past periods of consolidation, the industry is well positioned to see an increase in innovation as these new firms unite complementary expertise to pursue more efficient and effective research and development. They should also be better able to help finance, integrate, and coordinate development of the latest scientific and technological developments — particularly in rapidly growing, data-driven “digital farming” —  throughout the industry.

Download the paper here.

And for more on the topic, revisit TOTM’s recent blog symposium, “Agricultural and Biotech Mergers: Implications for Antitrust Law and Economics in Innovative Industries,” here.

According to Cory Doctorow over at Boing Boing, Tim Wu has written an open letter to W3C Director Sir Timothy Berners-Lee, expressing concern about a proposal to include Encrypted Media Extensions (EME) as part of the W3C standards. W3C has a helpful description of EME:

Encrypted Media Extensions (EME) is currently a draft specification… [for] an Application Programming Interface (API) that enables Web applications to interact with content protection systems to allow playback of encrypted audio and video on the Web. The EME specification enables communication between Web browsers and digital rights management (DRM) agent software to allow HTML5 video playback of DRM-wrapped content such as streaming video services without third-party media plugins. This specification does not create nor impose a content protection or Digital Rights Management system. Rather, it defines a common API that may be used to discover, select and interact with such systems as well as with simpler content encryption systems.

Wu’s letter expresses his concern about hardwiring DRM into the technical standards supporting an open internet. He writes:

I wanted to write to you and respectfully ask you to seriously consider extending a protective covenant to legitimate circumventers who have cause to bypass EME, should it emerge as a W3C standard.

Wu asserts that this “protective covenant” is needed because, without it, EME will confer too much power on internet “chokepoints”:

The question is whether the W3C standard with an embedded DRM standard, EME, becomes a tool for suppressing competition in ways not expected…. Control of chokepoints has always and will always be a fundamental challenge facing the Internet as we both know… It is not hard to recall how close Microsoft came, in the late 1990s and early 2000s, to gaining de facto control over the future of the web (and, frankly, the future) in its effort to gain an unsupervised monopoly over the browser market.

But conflating the Microsoft case with a relatively simple browser feature meant to enable all content providers to use any third-party DRM to secure their content — in other words, to enhance interoperability — is beyond the pale. If we take the Microsoft case as Wu would like, it was about one firm controlling, far and away, the largest share of desktop computing installations, a position that Wu and his fellow travelers believed gave Microsoft an unreasonable leg up in forcing usage of Internet Explorer to the exclusion of Netscape. With EME, the W3C is not maneuvering the standard so that a single DRM provider comes to protect all content on the web, or could even hope to do so. EME enables content distributors to stream content through browsers using their own DRM backend. There is simply nothing in that standard that enables a firm to dominate content distribution or control huge swaths of the Internet to the exclusion of competitors.

Unless, of course, you just don’t like DRM and you think that any technology that enables content producers to impose restrictions on consumption of media creates a “chokepoint.” But, again, this position is borderline nonsense. Such a “chokepoint” is no more restrictive than just going to Netflix’s app (or Hulu’s, or HBO’s, or Xfinity’s, or…) and relying on its technology. And while it is no more onerous than visiting Netflix’s app, it creates greater security on the open web such that copyright owners don’t need to resort to proprietary technologies and apps for distribution. And, more fundamentally, Wu’s position ignores the role that access and usage controls are playing in creating online markets through diversified product offerings.

Wu appears to believe, or would have his readers believe, that W3C is considering the adoption of a mandatory standard that would modify core aspects of the network architecture, and that would therefore present novel challenges to the operation of the internet. But this is wrong in two key respects:

  1. Except in the extremely limited manner described below by the W3C, the EME extension does not contain mandates, and is designed only to simplify the user experience in accessing content that would otherwise require plug-ins; and
  2. These extensions are already incorporated into the major browsers. And of course, most importantly for present purposes, the standard in no way defines or harmonizes the use of DRM.

The W3C has clearly and succinctly explained the operation of the proposed extension:

The W3C is not creating DRM policies and it is not requiring that HTML use DRM. Organizations choose whether or not to have DRM on their content. The EME API can facilitate communication between browsers and DRM providers but the only mandate is not DRM but a form of key encryption (Clear Key). EME allows a method of playback of encrypted content on the Web but W3C does not make the DRM technology nor require it. EME is an extension. It is not required for HTML nor HTML5 video.
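
Taken at face value, the “only mandate” in that passage is strikingly modest. Under the EME specification, a Clear Key license is nothing more than a small JSON document: a JSON Web Key set with base64url-encoded key IDs and key values that any server can produce. The sketch below (in Python, with made-up key values, offered purely as an illustration of the format rather than anything from the W3C’s tooling) shows how little machinery is involved:

```python
import base64
import json

def b64url(data: bytes) -> str:
    # Clear Key uses base64url encoding with the trailing '=' padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def clearkey_license(keys: dict) -> str:
    # Build the JSON Web Key set a Clear Key license server returns:
    # each 16-byte key ID ("kid") is paired with a 16-byte AES key ("k")
    jwk_set = {
        "keys": [
            {"kty": "oct", "kid": b64url(kid), "k": b64url(key)}
            for kid, key in keys.items()
        ],
        "type": "temporary",
    }
    return json.dumps(jwk_set)

# Hypothetical key ID and key, for illustration only
print(clearkey_license({b"0123456789abcdef": b"fedcba9876543210"}))
```

Nothing in that format ties a distributor to any particular vendor: a browser implementing EME can decrypt with keys delivered this way, while distributors that prefer a proprietary DRM backend remain free to use one instead.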

Like many internet commentators, Tim Wu fundamentally doesn’t like DRM, and his position here would appear to reflect his aversion to DRM rather than a response to the specific issues before the W3C. Interestingly, in arguing against DRM nearly a decade ago, Wu wrote:

Finally, a successful locking strategy also requires intense cooperation between many actors – if you protect a song with “superlock,” and my CD player doesn’t understand that, you’ve just created a dead product. (Emphasis added)

In other words, he understood the need for agreements in vertical distribution chains in order to properly implement protection schemes — integration that he opposes here (not to suggest that he supported them then, but only to highlight the disconnect between recognizing the need for coordination and simultaneously trying to prevent it).

Vint Cerf (himself no great fan of DRM — see here, for example) has offered a number of thoughtful responses to those, like Wu, who have objected to the proposed standard. Cerf writes on the ISOC listserv:

EME is plainly very general. It can be used to limit access to virtually any digital content, regardless of IPR status. But, in some sense, anyone wishing to restrict access to some service/content is free to do so (there are other means such as login access control, end/end encryption such as TLS or IPSEC or QUIC). EME is yet another method for doing that. Just because some content is public domain does not mean that every use of it must be unprotected, does it?

And later in the thread he writes:

Just because something is public domain does not mean someone can’t lock it up. Presumably there will be other sources that are not locked. I can lock up my copy of Gulliver’s Travels and deny you access except by some payment, but if it is public domain someone else may have a copy you can get. In any case, you can’t deny others the use of the content IF THEY HAVE IT. You don’t have to share your copy of public domain with anyone if you don’t want to.

Just so. It’s pretty hard to see the competition problems that could arise from facilitating more content providers making content available on the open web.

In short, Wu wants the W3C to develop limitations on rules when there are no relevant rules to modify. His dislike of DRM obscures his vision of the limited nature of the EME proposal, which would largely track, rather than lead, the actions already being undertaken by the principal commercial actors on the internet, and which merely creates a structure for facilitating voluntary commercial transactions in ways that enhance the user experience.

The W3C process will not, as Wu intimates, introduce some pernicious, default protection system that would inadvertently lock down content; rather, it would encourage the development of digital markets on the open net rather than (or in addition to) through the proprietary, vertical markets where they are increasingly found today. Wu obscures reality rather than illuminating it through his poorly considered suggestion that EME will somehow lead to a new set of defaults that threaten core freedoms.

Finally, we can’t help but comment on Wu’s observation that

My larger point is that I think the history of the anti-circumvention laws suggests is (sic) hard to predict how [freedom would be affected]– no one quite predicted the inkjet market would be affected. But given the power of those laws, the potential for anti-competitive consequences certainly exists.

Let’s put aside the fact that W3C is not debating the laws surrounding circumvention, nor, as noted, developing usage rules. It remains troubling that Wu’s belief that there are sometimes unintended consequences of actions (and therefore a potential for harm) would be sufficient to lead him to oppose a change to the status quo — as if any future, potential risk necessarily outweighs present, known harms. This is the Precautionary Principle on steroids. The EME proposal grew out of a desire to address impediments that prevent the viability and growth of online markets that sufficiently ameliorate the non-hypothetical harms of unauthorized uses. The EME proposal is a modest step towards addressing a known universe. A small step, but something to celebrate, not bemoan.

Geoffrey A. Manne is Executive Director of the International Center for Law & Economics

Dynamic versus static competition

Ever since David Teece and coauthors began writing about antitrust and innovation in high-tech industries in the 1980s, we’ve understood that traditional, price-based antitrust analysis is not intrinsically well-suited for assessing merger policy in these markets.

For high-tech industries, performance, not price, is paramount — which means that innovation is key:

Competition in some markets may take the form of Schumpeterian rivalry in which a succession of temporary monopolists displace one another through innovation. At any one time, there is little or no head-to-head price competition but there is significant ongoing innovation competition.

Innovative industries are often marked by frequent disruptions or “paradigm shifts” rather than horizontal market share contests, and investment in innovation is an important signal of competition. And competition comes from the continual threat of new entry down the road — often from competitors who, though they may start with relatively small market shares, or may arise in different markets entirely, can rapidly and unexpectedly overtake incumbents.

Which, of course, doesn’t mean that current competition and ease of entry are irrelevant. Rather, because, as Joanna Shepherd noted, innovation should be assessed across the entire industry and not solely within merging firms, conduct that might impede new, disruptive, innovative entry is indeed relevant.

But it is also important to remember that innovation comes from within incumbent firms as well, and that the overall level of innovation in an industry may often be increased by the presence of large firms with economies of scope and scale.

In sum, and to paraphrase Olympia Dukakis’ character in Moonstruck: “what [we] don’t know about [the relationship between innovation and market structure] is a lot.”

What we do know, however, is that superficial, concentration-based approaches to antitrust analysis will likely overweight presumed foreclosure effects and underweight innovation effects.

We shouldn’t fetishize entry, or access, or head-to-head competition over innovation, especially where consumer welfare may be significantly improved by a reduction in the former in order to get more of the latter.

As Katz and Shelanski note:

To assess fully the impact of a merger on market performance, merger authorities and courts must examine how a proposed transaction changes market participants’ incentives and abilities to undertake investments in innovation.

At the same time, they point out that

Innovation can dramatically affect the relationship between the pre-merger marketplace and what is likely to happen if the proposed merger is consummated…. [This requires consideration of] how innovation will affect the evolution of market structure and competition. Innovation is a force that could make static measures of market structure unreliable or irrelevant, and the effects of innovation may be highly relevant to whether a merger should be challenged and to the kind of remedy antitrust authorities choose to adopt. (Emphasis added).

Dynamic competition in the ag-biotech industry

These dynamics seem to be playing out in the ag-biotech industry. (For a detailed look at how the specific characteristics of innovation in the ag-biotech industry have shaped industry structure, see, e.g., here (pdf)).  

One inconvenient truth for the “concentration reduces innovation” crowd is that, as the industry has experienced more consolidation, it has also become more, not less, productive and innovative. Between 1995 and 2015, for example, the market share of the largest seed producers and crop protection firms increased substantially. And yet, over the same period, annual industry R&D spending went up nearly 750 percent. Meanwhile, the resulting innovations have increased crop yields by 22%, reduced chemical pesticide use by 37%, and increased farmer profits by 68%.

In her discussion of the importance of considering the “innovation ecosystem” in assessing the innovation effects of mergers in R&D-intensive industries, Joanna Shepherd noted that

In many consolidated firms, increases in efficiency and streamlining of operations free up money and resources to source external innovation. To improve their future revenue streams and market share, consolidated firms can be expected to use at least some of the extra resources to acquire external innovation. This increase in demand for externally-sourced innovation increases the prices paid for external assets, which, in turn, incentivizes more early-stage innovation in small firms and biotech companies. Aggregate innovation increases in the process!

The same dynamic seems to play out in the ag-biotech industry, as well:

The seed-biotechnology industry has been reliant on small and medium-sized enterprises (SMEs) as sources of new innovation. New SME startups (often spinoffs from university research) tend to specialize in commercial development of a new research tool, genetic trait, or both. Significant entry by SMEs into the seed-biotechnology sector began in the late 1970s and early 1980s, with a second wave of new entrants in the late 1990s and early 2000s. In recent years, exits have outnumbered entrants, and by 2008 just over 30 SMEs specializing in crop biotechnology were still active. The majority of the exits from the industry were the result of acquisition by larger firms. Of 27 crop biotechnology SMEs that were acquired between 1985 and 2009, 20 were acquired either directly by one of the Big 6 or by a company that itself was eventually acquired by a Big 6 company.

While there is more than one way to interpret these statistics (and they are often used by merger opponents, in fact, to lament increasing concentration), they are actually at least as consistent with an increase in innovation through collaboration (and acquisition) as with a decrease.

For what it’s worth, this is exactly how the startup community views the innovation ecosystem in the ag-biotech industry, as well. As the latest AgFunder AgTech Investing Report states:

The large agribusinesses understand that new innovation is key to their future, but the lack of M&A [by the largest agribusiness firms in 2016] highlighted their uncertainty about how to approach it. They will need to make more acquisitions to ensure entrepreneurs keep innovating and VCs keep investing.

It’s also true, as Diana Moss notes, that

Competition maximizes the potential for numerous collaborations. It also minimizes incentives to refuse to license, to impose discriminatory restrictions in technology licensing agreements, or to tacitly “agree” not to compete…. All of this points to the importance of maintaining multiple, parallel R&D pipelines, a notion that was central to the EU’s decision in Dow-DuPont.

And yet collaboration and licensing have long been prevalent in this industry. Examples are legion, but here are just a few significant ones:

  • Monsanto’s “global licensing agreement for the use of the CRISPR-Cas genome-editing technology in agriculture with the Broad Institute of MIT and Harvard.”
  • Dow and Arcadia Biosciences’ “strategic collaboration to develop and commercialize new breakthrough yield traits and trait stacks in corn.”
  • Monsanto and the University of Nebraska-Lincoln’s “licensing agreement to develop crops tolerant to the broadleaf herbicide dicamba. This agreement is based on discoveries by UNL plant scientists.”

Both large and small firms in the ag-biotech industry continually enter into new agreements like these. See, e.g., here and here for a (surely incomplete) list of deals in 2016 alone.

At the same time, across the industry, new entry has been rampant despite increased M&A activity among the largest firms. Recent years have seen venture financing in AgTech skyrocket — from $400 million in 2010 to almost $5 billion in 2015 — and hundreds of startups now enter the industry annually.

The pending mergers

Today’s pending mergers are consistent with this characterization of a dynamic market in which structure is being driven by incentives to innovate, rather than monopolize. As Michael Sykuta points out,

The US agriculture sector has been experiencing consolidation at all levels for decades, even as the global ag economy has been growing and becoming more diverse. Much of this consolidation has been driven by technological changes that created economies of scale, both at the farm level and beyond.

These deals aren’t fundamentally about growing production capacity, expanding geographic reach, or otherwise enhancing market share; rather, each is a fundamental restructuring of the way the companies do business, reflecting today’s shifting agricultural markets, and the advanced technology needed to respond to them.

Technological innovation is unpredictable, often serendipitous, and frequently transformative of the ways firms organize and conduct their businesses. A company formed to grow and sell hybrid seeds in the 1920s, for example, would either have had to evolve or fold by the end of the century. Firms today will need to develop (or purchase) new capabilities and adapt to changing technology, scientific knowledge, consumer demand, and socio-political forces. The pending mergers seemingly fit exactly this mold.

As Allen Gibby notes, these mergers are essentially vertical combinations of disparate, specialized pieces of an integrated whole. Take the proposed Bayer/Monsanto merger, for example. Bayer is primarily a chemicals company, developing advanced chemicals to protect crops and enhance crop growth. Monsanto, on the other hand, primarily develops seeds and “seed traits” — advanced characteristics that ensure the hardiness of the seeds, give them resistance to herbicides and pests, and speed their fertilization and growth. In order to translate the individual advances of each into higher yields, it is important that these two functions work successfully together. Doing so enhances crop growth and protection far beyond what, say, spreading manure can accomplish — or either firm could accomplish working on its own.

The key is that integrated knowledge is essential to making this process function. Developing seed traits to work well with (i.e., to withstand) certain pesticides requires deep knowledge of the pesticide’s chemical characteristics, and vice-versa. Processing huge amounts of data to determine when to apply chemical treatments or to predict a disease requires not only that the right information is collected, at the right time, but also that it is analyzed in light of the unique characteristics of the seeds and chemicals. Increased communications and data-sharing between manufacturers increases the likelihood that farmers will use the best products available in the right quantity and at the right time in each field.

Vertical integration solves bargaining and long-term planning problems by unifying the interests (and the management) of these functions. Instead of arm’s length negotiation, a merged Bayer/Monsanto, for example, may better maximize R&D of complicated Ag/chem products through fully integrated departments and merged areas of expertise. A merged company can also coordinate investment decisions (instead of waiting up to 10 years to see what the other company produces), avoid duplication of research, adapt to changing conditions (and the unanticipated course of research), pool intellectual property, and bolster internal scientific capability more efficiently. All told, the merged company projects spending about $16 billion on R&D over the next six years. Such coordinated investment will likely garner far more than either company could from separately spending even the same amount to develop new products. 

Controlling an entire R&D process and pipeline of traits for resistance, chemical treatments, seeds, and digital complements would enable the merged firm to better ensure that each of these products works together to maximize crop yields, at the lowest cost, and at greater speed. Consider the advantages that Apple’s tightly-knit ecosystem of software and hardware provides to computer and device users. Such tight integration isn’t the only way to compete (think Android), but it has frequently proven to be a successful model, facilitating some functions (e.g., handoff between Macs and iPhones) that are difficult if not impossible in less-integrated systems. And, it bears noting, important elements of Apple’s innovation have come through acquisition….

Conclusion

As LaFontaine and Slade have made clear, theoretical concerns about the anticompetitive consequences of vertical integration are belied by the virtual absence of empirical support:

Under most circumstances, profit-maximizing vertical-integration and merger decisions are efficient, not just from the firms’ but also from the consumers’ points of view.

Other antitrust scholars are skeptical of vertical-integration fears because firms normally have strong incentives to deal with providers of complementary products. Bayer and Monsanto, for example, might benefit enormously from integration, but if competing seed producers seek out Bayer’s chemicals to develop competing products, there’s little reason for the merged firm to withhold them: Even if the new seeds out-compete Monsanto’s, Bayer/Monsanto can still profit from providing the crucial input. Its incentive doesn’t necessarily change if the merger goes through, and whatever “power” Bayer has as an input supplier is a function of its scientific know-how, not its merger with Monsanto.

In other words, while some competitors could find a less hospitable business environment, consumers will likely suffer no apparent ill effects, and continue to receive the benefits of enhanced product development and increased productivity.

That’s what we’d expect from innovation-driven integration, and antitrust enforcers should be extremely careful before thwarting or circumscribing these mergers lest they end up thwarting, rather than promoting, consumer welfare.

Thanks to Truth on the Market for the opportunity to guest blog, and to ICLE for inviting me to join as a Senior Scholar! I’m honoured to be involved with both of these august organizations.

In Brussels, the talk of the town is that the European Commission (“Commission”) is casting a new eye on the old antitrust conjecture that prophesies a negative relationship between industry concentration and innovation. This issue arises in the context of the review of several mega-mergers in the pharmaceutical and AgTech (i.e., seed genomics, biochemicals, “precision farming,” etc.) industries.

The antitrust press reports that the Commission has shown signs of interest in the introduction of a new theory of harm: the Significant Impediment to Industry Innovation (“SIII”) theory, which would permit the remediation of mergers on the sole ground that a transaction significantly impedes innovation incentives at the industry level. In a recent ICLE White Paper, I discuss the desirability and feasibility of the introduction of this doctrine for the assessment of mergers in R&D-driven industries.

The introduction of SIII analysis in EU merger policy would no doubt be a sea change, as compared to past decisional practice. In previous cases, the Commission has paid heed to the effects of a merger on incentives to innovate, but the assessment has been limited to the effect on the innovation incentives of the merging parties in relation to specific current or future products. The application of the SIII theory, however, would entail an assessment of a possible reduction of innovation in a given industry as a whole, rather than in relation to specific product applications.

The SIII theory would also be distinct from the “innovation markets” framework occasionally applied in past US merger policy and now marginalized. This framework considers the effect of a merger on separate upstream “innovation markets,” i.e., on the R&D process itself, not directly linked to a downstream current or future product market. Like SIII, innovation markets analysis is interesting in that the identification of separate upstream innovation markets implicitly recognises that the players active in those markets are not necessarily the same as those that compete with the merging parties in downstream product markets.

SIII is way more intrusive, however, because R&D incentives are considered in the abstract, without further obligation on the agency to identify structured R&D channels, pipeline products, and research trajectories.

Given this, any case for an expansion of the Commission’s power to intervene against mergers in certain R&D-driven industries should rest on sound theoretical and empirical foundations. Yet, despite efforts by some of the most celebrated Nobel Prize-winning economists of recent decades, the economics underpinning the relationship between industry concentration and innovation incentives remains an unfathomable mystery. As Geoffrey Manne and Joshua Wright have summarized in detail, the existing literature is indeterminate, at best. As they note, quoting Rich Gilbert,

[a] careful examination of the empirical record concludes that the existing body of theoretical and empirical literature on the relationship between competition and innovation “fails to provide general support for the Schumpeterian hypothesis that monopoly promotes either investment in research and development or the output of innovation” and that “the theoretical and empirical evidence also does not support a strong conclusion that competition is uniformly a stimulus to innovation.”

Available theoretical research also fails to establish a directional relationship between mergers and innovation incentives. True, soundbites from antitrust conferences suggest that the Commission’s Chief Economist Team has developed a deterministic model that could be brought to bear on novel merger policy initiatives. Yet, given the height of the intellectual Everest under discussion, we remain dubious (yet curious).

And, as noted, the available empirical data appear inconclusive. Consider a relatively concentrated industry like the seed and agrochemical sector. Between 2009 and 2016, all of the big six agrochemical firms increased their total R&D expenditure, and their R&D intensity either increased or remained stable. Note that this has taken place in spite of (i) a significant increase in concentration among the largest firms in the industry; (ii) a dramatic drop in global agricultural commodity prices (which has adversely affected several agrochemical businesses); and (iii) the presence of strong appropriability devices, namely patent rights.

This brief industry example (which I discuss more thoroughly in the paper) calls our attention to a more general policy point: prior to poking and prodding with novel theories of harm, one would expect an impartial antitrust examiner to undertake empirical groundwork, and to screen initial intuitions of adverse effects of mergers on innovation through the lens of observable industry characteristics.

At a more operational level, SIII also illustrates the difficulties of using indirect proxies of innovation incentives, such as R&D figures and patent statistics, as a preliminary screening tool for the assessment of the effects of a merger. In my paper, I show how R&D intensity can increase or decrease for a variety of reasons that do not necessarily correlate with an increase or decrease in the intensity of innovation. Similarly, I discuss why patent counts and patent citations are very crude indicators of innovation incentives. Over-reliance on patent counts and citations can paint a misleading picture of the parties’ strength as innovators in terms of market impact: not all patents are translated into commercialised products, and not all are equal in terms of commercial value.

As a result (and unlike the SIII or innovation markets approaches), the use of these proxies as a measure of innovative strength should be limited to instances where the patent clearly has an actual or potential commercial application in the markets being assessed. Such an approach would ensure that patents with little or no impact on innovation competition in a market are excluded from consideration. Moreover, and at the risk of stating the obvious, patents are temporal rights. Incentives to innovate may be stronger as a protected technological application approaches patent expiry. Patent counts and citations, however, do not discount the maturity of patents and, in particular, do not say much about whether a patent is far from or close to its expiry date.

In order to overcome the limitations of crude quantitative proxies, it is in my view imperative to complement empirical analysis with industry-specific qualitative research. Central to the assessment of the qualitative dimension of innovation competition is an understanding of the key drivers of innovation in the investigated industry. In the agrochemical industry, industry structure and market competition may be only one amongst many factors that promote innovation. Economic models built upon Arrow’s replacement effect theory – namely that a pre-invention monopoly acts as a strong disincentive to further innovation – fail to capture that successful agrochemical products create new technology frontiers.

Thus, for example, progress in crop protection products – and, in particular, in pest- and insect-resistant crops – has fuelled research investments in pollinator protection technology. Moreover, the impact of wider industry and regulatory developments on incentives to innovate and on market structure should not be ignored (for example, falling crop commodity prices or regulatory restrictions on the use of certain products). Last, antitrust agencies are well placed to understand that beyond R&D and patent statistics, there is also a degree of qualitative competition in the innovation strategies that are pursued by agrochemical players.

My paper closes with a word of caution. No compelling case has been advanced to support a departure from established merger control practice with the introduction of SIII in pharmaceutical and agrochemical mergers. The current EU merger control framework, which enables the Commission to conduct a prospective analysis of the parties’ R&D incentives in current or future product markets, seems to provide an appropriate safeguard against anticompetitive transactions.

In his 1974 Nobel Prize Lecture, Hayek criticized the “scientific error” of much economic research, which assumes that intangible, correlational laws govern observable and measurable phenomena. Hayek warned that economics is like biology: both fields focus on “structures of essential complexity” which are recalcitrant to stylized modeling. Interestingly, competition was one of the examples expressly mentioned by Hayek in his lecture:

[T]he social sciences, like much of biology but unlike most fields of the physical sciences, have to deal with structures of essential complexity, i.e. with structures whose characteristic properties can be exhibited only by models made up of relatively large numbers of variables. Competition, for instance, is a process which will produce certain results only if it proceeds among a fairly large number of acting persons.

What remains from this lecture is a vibrant call for humility in policy making, at a time when some constituencies within antitrust agencies show signs of interest in revisiting the relationship between concentration and innovation. And if Hayek’s convoluted writing style is not the most accessible, the title says it all: “The Pretense of Knowledge.”

Today ICLE released a white paper entitled, A critical assessment of the latest charge of Google’s anticompetitive bias from Yelp and Tim Wu.

The paper is a comprehensive response to a study by Michael Luca, Timothy Wu, Sebastian Couvidat, Daniel Frank, & William Seltzer, entitled, Is Google degrading search? Consumer harm from Universal Search.

The Wu, et al. paper will be one of the main topics of discussion at today’s Capitol Forum and George Washington Institute of Public Policy event on Dominant Platforms Under the Microscope: Policy Approaches in the US and EU, at which I will be speaking — along with a host of luminaries including, inter alia, Josh Wright, Jonathan Kanter, Allen Grunes, Catherine Tucker, and Michael Luca — one of the authors of the Universal Search study.

Follow the link above to register — the event starts at noon today at the National Press Club.

Meanwhile, here’s a brief description of our paper:

Late last year, Tim Wu of Columbia Law School (and now the New York Attorney General’s Office), Michael Luca of Harvard Business School (and a consultant for Yelp), and a group of Yelp data scientists released a study claiming that Google has been purposefully degrading search results from its more-specialized competitors in the area of local search. The authors’ claim is that Google is leveraging its dominant position in general search to thwart competition from specialized search engines by favoring its own, less-popular, less-relevant results over those of its competitors:

To improve the popularity of its specialized search features, Google has used the power of its dominant general search engine. The primary means for doing so is what is called the “universal search” or the “OneBox.”

This is not a new claim, and researchers have been attempting (and failing) to prove Google’s “bias” for some time. Likewise, these critics have drawn consistent policy conclusions from their claims, asserting that antitrust violations lie at the heart of the perceived bias. But the studies are systematically marred by questionable methodology and bad economics.

This latest study by Tim Wu, along with a cadre of researchers employed by Yelp (one of Google’s competitors and one of its chief antitrust provocateurs), fares no better, employing slightly different but equally questionable methodology, bad economics, and a smattering of new, but weak, social science. (For a thorough criticism of the inherent weaknesses of Wu et al.’s basic social science methodology, see Miguel de la Mano, Stephen Lewis, and Andrew Leyden, Focus on the Evidence: A Brief Rebuttal of Wu, Luca, et al (2016), available here).

The basic thesis of the study is that Google purposefully degrades its local searches (e.g., for restaurants, hotels, services, etc.) to the detriment of its specialized search competitors, local businesses, consumers, and even Google’s bottom line — and that this is an actionable antitrust violation.

But in fact the study shows nothing of the kind. Instead, the study is marred by methodological problems that, in the first instance, make it impossible to draw any reliable conclusions. Nor does the study show that Google’s conduct creates any antitrust-relevant problems. Rather, the construction of the study and the analysis of its results reflect a superficial and inherently biased conception of consumer welfare that completely undermines the study’s purported legal and economic conclusions.

Read the whole thing here.

On Friday the International Center for Law & Economics filed comments with the FCC in response to Chairman Wheeler’s NPRM (proposed rules) to “unlock” the MVPD (i.e., cable and satellite subscription video, essentially) set-top box market. Plenty has been written on the proposed rulemaking—for a few quick hits (among many others) see, e.g., Richard Bennett, Glenn Manishin, Larry Downes, Stuart Brotman, Scott Wallsten, and me—so I’ll dispense with the background and focus on the key points we make in our comments.

Our comments explain that the proposal’s assertion that the MVPD set-top box market isn’t competitive is a product of its failure to appreciate the dynamics of the market (and its disregard for economics). Similarly, the proposal fails to acknowledge the complexity of the markets it intends to regulate, and, in particular, it ignores the harmful effects on content production and distribution the rules would likely bring about.

“Competition, competition, competition!” — Tom Wheeler

“Well, uh… just because I don’t know what it is, it doesn’t mean I’m lying.” — Claude Elsinore

At root, the proposal is aimed at improving competition in a market that is already hyper-competitive. As even Chairman Wheeler has admitted,

American consumers enjoy unprecedented choice in how they view entertainment, news and sports programming. You can pretty much watch what you want, where you want, when you want.

Of course, much of this competition comes from outside the MVPD market, strictly speaking—most notably from OVDs like Netflix. It’s indisputable that the statute directs the FCC to address the MVPD market and the MVPD set-top box market. But addressing competition in those markets doesn’t mean you simply disregard the world outside those markets.

The competitiveness of a market isn’t solely a function of the number of competitors in the market. Even relatively constrained markets like these can be “fully competitive” with only a few competing firms—as is the case in every market in which MVPDs operate (all of which are presumed by the Commission to be subject to “effective competition”).

The truly troubling thing, however, is that the FCC knows that MVPDs compete with OVDs, and thus that the competitiveness of the “MVPD market” (and the “MVPD set-top box market”) isn’t solely a matter of direct, head-to-head MVPD competition.

How do we know that? As I’ve recounted before, in a recent speech FCC General Counsel Jonathan Sallet approvingly explained that Commission staff recommended rejecting the Comcast/Time Warner Cable merger precisely because of the alleged threat it posed to OVD competitors. In essence, Sallet argued that Comcast sought to undertake a $45 billion merger primarily—if not solely—in order to ameliorate the competitive threat to its subscription video services from OVDs:

Simply put, the core concern came down to whether the merged firm would have an increased incentive and ability to safeguard its integrated Pay TV business model and video revenues by limiting the ability of OVDs to compete effectively.…

Thus, at least when it suits it, the Chairman’s office appears not only to believe that this competitive threat is real, but also that Comcast, once the largest MVPD in the country, believes so strongly that the OVD competitive threat is real that it was willing to pay $45 billion for a mere “increased ability” to limit it.

UPDATE 4/26/2016

And now the FCC has approved the Charter/Time Warner Cable merger, imposing conditions that, according to Wheeler,

focus on removing unfair barriers to video competition. First, New Charter will not be permitted to charge usage-based prices or impose data caps. Second, New Charter will be prohibited from charging interconnection fees, including to online video providers, which deliver large volumes of internet traffic to broadband customers. Additionally, the Department of Justice’s settlement with Charter both outlaws video programming terms that could harm OVDs and protects OVDs from retaliation—an outcome fully supported by the order I have circulated today.

If MVPDs and OVDs don’t compete, why would such terms be necessary? And even if the threat is merely potential competition, as we note in our comments (citing to this, among other things),

particularly in markets characterized by the sorts of technological change present in video markets, potential competition can operate as effectively as—or even more effectively than—actual competition to generate competitive market conditions.

/UPDATE

Moreover, the proposal asserts that the “market” for MVPD set-top boxes isn’t competitive because “consumers have few alternatives to leasing set-top boxes from their MVPDs, and the vast majority of MVPD subscribers lease boxes from their MVPD.”

But the MVPD set-top box market is an aftermarket—a secondary market; no one buys set-top boxes without first buying MVPD service—and always or almost always the two are purchased at the same time. As Ben Klein and many others have shown, direct competition in the aftermarket need not be plentiful for the market to nevertheless be competitive.

Whether consumers are fully informed or uninformed, consumers will pay a competitive package price as long as sufficient competition exists among sellers in the [primary] market.

The competitiveness of the MVPD market in which the antecedent choice of provider is made incorporates consumers’ preferences regarding set-top boxes, and makes the secondary market competitive.

The proposal’s superficial and erroneous claim that the set-top box market isn’t competitive thus reflects bad economics, not competitive reality.

But it gets worse. The NPRM doesn’t actually deny the importance of OVDs and app-based competitors wholesale — it only does so when convenient. As we note in our Comments:

The irony is that the NPRM seeks to give a leg up to non-MVPD distribution services in order to promote competition with MVPDs, while simultaneously denying that such competition exists… In order to avoid triggering [Section 629’s sunset provision,] the Commission is forced to pretend that we still live in the world of Blockbuster rentals and analog cable. It must ignore the Netflix behind the curtain—ignore the utter wealth of video choices available to consumers—and focus on the fact that a consumer might have a remote for an Apple TV sitting next to her Xfinity remote.

“Yes, but you’re aware that there’s an invention called television, and on that invention they show shows?” — Jules Winnfield

The NPRM proposes to create a world in which all of the content that MVPDs license from programmers, and all of their own additional services, must be provided to third-party device manufacturers under a zero-rate compulsory license. Apart from the complete absence of statutory authority to mandate such a thing (or, I should say, apart from statutory language specifically prohibiting such a thing), the proposed rules run roughshod over the copyrights and negotiated contract rights of content providers:

The current rulemaking represents an overt assault on the web of contracts that makes content generation and distribution possible… The rules would create a new class of intermediaries lacking contractual privity with content providers (or MVPDs), and would therefore force MVPDs to bear the unpredictable consequences of providing licensed content to third-parties without actual contracts to govern those licenses…

Because such nullification of license terms interferes with content owners’ right “to do and to authorize” their distribution and performance rights, the rules may facially violate copyright law… [Moreover,] the web of contracts that support the creation and distribution of content are complicated, extensively negotiated, and subject to destabilization. Abrogating the parties’ use of the various control points that support the financing, creation, and distribution of content would very likely reduce the incentive to invest in new and better content, thereby rolling back the golden age of television that consumers currently enjoy.

You’ll be hard-pressed to find any serious acknowledgement in the NPRM that its rules could have any effect on content providers, apart from this gem:

We do not currently have evidence that regulations are needed to address concerns raised by MVPDs and content providers that competitive navigation solutions will disrupt elements of service presentation (such as agreed-upon channel lineups and neighborhoods), replace or alter advertising, or improperly manipulate content…. We also seek comment on the extent to which copyright law may protect against these concerns, and note that nothing in our proposal will change or affect content creators’ rights or remedies under copyright law.

The Commission can’t rely on copyright to protect against these concerns, at least not without admitting that the rules require MVPDs to violate copyright law and to breach their contracts. And in fact, although it doesn’t acknowledge it, the NPRM does require the abrogation of content owners’ rights embedded in licenses negotiated with MVPD distributors to the extent that they conflict with the terms of the rule (which many of them must).   

“You keep using that word. I do not think it means what you think it means.” — Inigo Montoya

Finally, the NPRM derives its claimed authority for these rules from an interpretation of the relevant statute (Section 629 of the Communications Act) that is absurdly unreasonable. That provision requires the FCC to enact rules to assure the “commercial availability” of set-top boxes from MVPD-unaffiliated vendors. According to the NPRM,

we cannot assure a commercial market for devices… unless companies unaffiliated with an MVPD are able to offer innovative user interfaces and functionality to consumers wishing to access that multichannel video programming.

This baldly misconstrues a term plainly meant to refer to the manner in which consumers obtain their navigation devices, not how those devices should function. It also contradicts the Commission’s own, prior readings of the statute:

As structured, the rules will place a regulatory thumb on the scale in favor of third-parties and to the detriment of MVPDs and programmers…. [But] Congress explicitly rejected language that would have required unbundling of MVPDs’ content and services in order to promote other distribution services…. Where Congress rejected language that would have favored non-MVPD services, the Commission selectively interprets the language Congress did employ in order to accomplish exactly what Congress rejected.

And despite the above-noted problems (and more), the Commission has failed to do even a cursory economic evaluation of the relative costs of the NPRM, instead focusing narrowly on one single benefit it believes might occur (wider distribution of set-top boxes from third parties) despite the consistent failure of similar FCC efforts in the past.

All of the foregoing leads to a final question: At what point do the costs of these rules finally outweigh the perceived benefits? On the one hand are legal questions of infringement, inducements to violate agreements, and disruptions of complex contractual ecosystems supporting content creation. On the other hand are the presence of more boxes and apps that allow users to choose who gets to draw the UI for their video content…. At some point the Commission needs to take seriously the costs of its actions, and determine whether the public interest is really served by the proposed rules.

Our full comments are available here.