
What happened

Today, following a six-year investigation into Google’s business practices in India, the Competition Commission of India (CCI) issued its ruling.

Two things, in particular, are remarkable about the decision. First, while the CCI’s staff recommended a finding of liability on a litany of claims (the exact number is difficult to infer from the Commission’s decision, but it appears to be somewhere in the double digits), the Commission accepted its staff’s recommendation on only three — and two of those involve conduct no longer employed by Google.

Second, nothing in the Commission’s finding of liability or in the remedy it imposes suggests it approaches the issue as the EU does. To be sure, the CCI employs rhetoric suggesting that “search bias” can be anticompetitive. But its focus remains unwaveringly on the welfare of the consumer, not on the hyperbolic claims of Google’s competitors.

What didn’t happen

In finding liability on only a single claim involving ongoing practices — the claim arising from Google’s “unfair” placement of its specialized flight search (Google Flights) results — the Commission also roundly rejected a host of other claims (more than once with strong words directed at its staff for proposing such woefully unsupported arguments). Among these are several that have been raised (and unanimously rejected) by competition regulators elsewhere in the world. These claims related to a host of Google’s practices, including:

  • Search bias involving the treatment of specialized Google content (like Google Maps, YouTube, Google Reviews, etc.) other than Google Flights
  • Search bias involving the display of Universal Search results (including local search, news search, image search, etc.), except where these results are fixed to a specific position on every results page (as was the case in India before 2010), instead of being inserted wherever most appropriate in context
  • Search bias involving OneBox results (instant answers to certain queries that are placed at the top of search results pages), even where answers are drawn from Google’s own content and specific, licensed sources (rather than from crawling the web)
  • Search bias involving sponsored, vertical search results (e.g., Google Shopping results) other than Google Flights. These results are not determined by the same algorithm that returns organic results, but are instead more like typical paid search advertising results that sometimes appear at the top of search results pages. The Commission did find that Google’s treatment of its Google Flights results (another form of sponsored result) violated India’s competition laws
  • The operation of Google’s advertising platform (AdWords), including the use of a “Quality Score” in its determination of an ad’s relevance (something Josh Wright and I discuss at length here)
  • Google’s practice of allowing advertisers to bid on trademarked keywords
  • Restrictions placed by Google upon the portability of advertising campaign data to other advertising platforms through its AdWords API
  • Distribution agreements that set Google as the default (but not exclusive) search engine on certain browsers
  • Certain restrictions in syndication agreements with publishers (websites) through which Google provides search and/or advertising (Google’s AdSense offering). The Commission found that negotiated search agreements that require Google to be the exclusive search provider on certain sites did violate India’s competition laws. It should be noted, however, that Google has very few of these agreements, and no longer enters into them, so the finding is largely historical. All of the other assertions regarding these agreements (and there were numerous claims involving a number of clauses in a range of different agreements) were rejected by the Commission.

Just like competition authorities in the US, Canada, and Taiwan that have properly focused on consumer welfare in their Google investigations, the CCI found important consumer benefits from these practices that outweigh any inconveniences they may impose on competitors. And, just as in those jurisdictions, the Commission rejected all of these claims.

Still improperly assessing Google’s dominance

The biggest problem with the CCI’s decision is its acceptance — albeit moderated in important ways — of the notion that Google owes a special duty to competitors given its position as an alleged “gateway” to the Internet:

In the present case, since Google is the gateway to the internet for a vast majority of internet users, due to its dominance in the online web search market, it is under an obligation to discharge its special responsibility. As Google has the ability and the incentive to abuse its dominant position, its “special responsibility” is critical in ensuring not only the fairness of the online web search and search advertising markets, but also the fairness of all online markets given that these are primarily accessed through search engines. (para 202)

As I’ve discussed before, a proper analysis of the relevant markets in which Google operates would make clear that Google is beset by actual and potential competitors at every turn. Access to consumers by advertisers, competing search services, other competing services, mobile app developers, and the like is readily available. The lines between markets drawn by the CCI are based on superficial distinctions that are of little importance to the actual relevant market.

Consider, for example: Users seeking product information can get it via search, but also via Amazon and Facebook; advertisers can place ad copy and links in front of millions of people on search results pages, and they can also place them in front of millions of people on Facebook and Twitter. Meanwhile, many specialized search competitors like Yelp receive most of their traffic from direct navigation and from their mobile apps. In short, the assumption of market dominance made by the CCI (and so many others these days) is based on a stilted conception of the relevant market, as Google is far from the only channel through which competitors can reach consumers.

The importance of innovation in the CCI’s decision

Of course, it’s undeniable that Google is an important mechanism by which competitors reach consumers. And, crucially, nowhere did the CCI adopt Google’s critics’ and competitors’ frequently asserted position that Google is, in effect, an “essential facility” requiring extremely demanding limitations on its ability to control its product when doing so might impede its rivals.

So, while the CCI defines the relevant markets and adopts legal conclusions that confer special importance on Google’s operation of its general search results pages, it stops short of demanding that Google treat competitors on equal terms to its own offerings, as would typically be required of essential facilities (or their close cousin, public utilities).

Significantly, the Commission weighs the imposition of even these “special responsibilities” against the effects of such duties on innovation, particularly with respect to product design.

The CCI should be commended for recognizing that any obligation imposed by antitrust law on a dominant company to refrain from impeding its competitors’ access to markets must stop short of requiring the company to stop innovating, even when its product innovations might make life difficult for its competitors.

Of course, some product design choices can be, on net, anticompetitive. But innovation generally benefits consumers, and it should be impeded only where doing so clearly results in net consumer harm. Thus:

[T]he Commission is cognizant of the fact that any intervention in technology markets has to be carefully crafted lest it stifles innovation and denies consumers the benefits that such innovation can offer. This can have a detrimental effect on economic welfare and economic growth, particularly in countries relying on high growth such as India…. [P]roduct design is an important and integral dimension of competition and any undue intervention in designs of SERP [Search Engine Results Pages] may affect legitimate product improvements resulting in consumer harm. (paras 203-04).

As a consequence of this cautious approach, the CCI refused to accede to its staff’s findings of liability based on Google’s treatment of its vertical search results without considering how Google’s incorporation of these specialized results improved its product for consumers. Thus, for example:

The Commission is of opinion that requiring Google to show third-party maps may cause a delay in response time (“latency”) because these maps reside on third-party servers…. Further, requiring Google to show third-party maps may break the connection between Google’s local results and the map…. That being so, the Commission is of the view that no case of contravention of the provisions of the Act is made out in Google showing its own maps along with local search results. The Commission also holds that the same consideration would apply for not showing any other specialised result designs from third parties. (para 224 (emphasis added))

The CCI’s laudable and refreshing focus on consumer welfare

Even where the CCI determined that Google’s current practices violate India’s antitrust laws (essentially only with respect to Google Flights), it imposed a remedy that does not demand alteration of the overall structure of Google’s search results, nor its algorithmic placement of those results. In fact, the most telling indication that India’s treatment of product design innovation embodies a consumer-centric approach markedly different from that pushed by Google’s competitors (and adopted by the EU) is its remedy.

Following its finding that

[p]rominent display and placement of Commercial Flight Unit with link to Google’s specialised search options/ services (Flight) amounts to an unfair imposition upon users of search services as it deprives them of additional choices (para 420),

the CCI determined that the appropriate remedy for this defect was:

So far as the contravention noted by the Commission in respect of Flight Commercial Unit is concerned, the Commission directs Google to display a disclaimer in the commercial flight unit box indicating clearly that the “search flights” link placed at the bottom leads to Google’s Flights page, and not the results aggregated by any other third party service provider, so that users are not misled. (para 422 (emphasis added))

Indeed, what is most notable — and laudable — about the CCI’s decision is that both the alleged problem, as well as the proposed remedy, are laser-focused on the effect on consumers — not the welfare of competitors.

Where the EU’s recent Google Shopping decision considers that this sort of non-neutral presentation of Google search results harms competitors and demands equal treatment by Google of rivals seeking access to Google’s search results page, the CCI sees instead that non-neutral presentation of results could be confusing to consumers. It does not demand that Google open its doors to competitors, but only that it more clearly identify when its product design prioritizes Google’s own content instead of ranking results by its familiar organic search algorithm.

This distinction is significant. For all the language in the decision asserting Google’s dominance and suggesting possible impediments to competition, the CCI does not, in fact, view Google’s design of its search results pages as a contrivance intended to exclude competitors from accessing markets.

The CCI’s remedy suggests that it has no problem with Google maintaining control over its search results pages and determining what results, and in what order, to serve to consumers. Its sole concern, rather, is that Google not get a leg up at the expense of consumers by misleading them into thinking that its product design is something that it is not.

Rather than dictate how Google should innovate or force it to perpetuate an outdated design in the name of preserving access by competitors bent on maintaining the status quo, the Commission embraces the consumer benefits of Google’s evolving products, and seeks to impose only a narrowly targeted tweak aimed directly at the quality of consumers’ interactions with Google’s products.

Conclusion

As some press accounts of the CCI’s decision trumpet, the Commission did impose liability on Google for abuse of a dominant position. But its similarity with the EU’s abuse of dominance finding ends there. The CCI rejected many more claims than it adopted, and it carefully tailored its remedy to the welfare of consumers, not the lamentations of competitors. Unlike the EU, the CCI’s finding of a violation is tempered by its concern for avoiding harmful constraints on innovation and product design, and its remedy makes this clear. Whatever the defects of India’s decision, it offers a welcome return to consumer-centric antitrust.

This week the FCC will vote on Chairman Ajit Pai’s Restoring Internet Freedom Order. Once implemented, the Order will rescind the 2015 Open Internet Order and return antitrust and consumer protection enforcement to primacy in Internet access regulation in the U.S.

In anticipation of that, earlier this week the FCC and FTC entered into a Memorandum of Understanding delineating how the agencies will work together to police ISPs. Under the MOU, the FCC will review informal complaints regarding ISPs’ disclosures about their blocking, throttling, paid prioritization, and congestion management practices. Where an ISP fails to make the proper disclosures, the FCC will take enforcement action. The FTC, for its part, will investigate and, where warranted, take enforcement action against ISPs for unfair, deceptive, or otherwise unlawful acts.

Critics of Chairman Pai’s plan contend (among other things) that the reversion to antitrust-agency oversight of competition and consumer protection in telecom markets (and the Internet access market particularly) would be an aberration — that the US will become the only place in the world to move backward away from net neutrality rules and toward antitrust law.

But this characterization has it exactly wrong. In fact, much of the world has been moving toward an antitrust-based approach to telecom regulation. The aberration was the telecom-specific, common-carrier regulation of the 2015 Open Internet Order.

The longstanding, global transition from telecom regulation to antitrust enforcement

The decade-old discussion around net neutrality has morphed, perhaps inevitably, to join the larger conversation about competition in the telecom sector and the proper role of antitrust law in addressing telecom-related competition issues. Today, with the latest net neutrality rules in the US on the chopping block, the discussion has grown more fervent (and sometimes even inordinately violent).

On the one hand, opponents of the 2015 rules express strong dissatisfaction with traditional, utility-style telecom regulation of innovative services, and view the 2015 rules as a meritless usurpation of antitrust principles in guiding the regulation of the Internet access market. On the other hand, proponents of the 2015 rules voice skepticism that antitrust can actually provide a way to control competitive harms in the tech and telecom sectors, and see the heavy hand of Title II, common-carrier regulation as a necessary corrective.

While the evidence seems clear that an early-20th-century approach to telecom regulation is indeed inappropriate for the modern Internet (see our lengthy discussions on this point, e.g., here and here, as well as Thom Lambert’s recent post), it is perhaps less clear whether antitrust, with its constantly evolving, common-law foundation, is up to the task.

To answer that question, it is important to understand that for decades, the arc of telecom regulation globally has been sweeping in the direction of ex post competition enforcement, and away from ex ante, sector-specific regulation.

Howard Shelanski, who served as President Obama’s OIRA Administrator from 2013-17, Director of the Bureau of Economics at the FTC from 2012-2013, and Chief Economist at the FCC from 1999-2000, noted in 2002, for instance, that

[i]n many countries, the first transition has been from a government monopoly to a privatizing entity controlled by an independent regulator. The next transformation on the horizon is away from the independent regulator and towards regulation through general competition law.

Globally, nowhere perhaps has this transition been more clearly stated than in the EU’s telecom regulatory framework which asserts:

The aim is to reduce ex ante sector-specific regulation progressively as competition in markets develops and, ultimately, for electronic communications [i.e., telecommunications] to be governed by competition law only. (Emphasis added.)

To facilitate the transition and quash regulatory inconsistencies among member states, the EC identified certain markets in which national regulators were to decide, consistent with EC guidelines on market analysis, whether ex ante obligations were necessary due to an operator holding “significant market power.” In 2003 the EC identified 18 such markets. After observing technological and market changes over the next four years, the EC reduced that number to seven in 2007 and, in 2014, further reduced it to four markets (all of them wholesale) that could potentially require ex ante regulation.

It is important to highlight that this framework is not uniquely achievable in Europe because of some special trait in its markets, regulatory structure, or antitrust framework. Determining the right balance of regulatory rules and competition law, whether enforced by a telecom regulator, antitrust regulator, or multi-purpose authority (i.e., with authority over both competition and telecom) means choosing from a menu of options that should be periodically assessed to move toward better performance and practice. There is nothing jurisdiction-specific about this; it is simply a matter of good governance.

And since the early 2000s, scholars have highlighted that the US is in an intriguing position to transition to a merged regulator because, for example, it has both a “highly liberalized telecommunications sector and a well-established body of antitrust law.” For Shelanski, among others, the US has been ready to make the transition since 2007.

Far from being an aberrant move away from sound telecom regulation, the FCC’s Restoring Internet Freedom Order is actually a step in the direction of sensible, antitrust-based telecom regulation — one that many parts of the world have long since undertaken.

How antitrust oversight of telecom markets has been implemented around the globe

In implementing the EU’s shift toward antitrust oversight of the telecom sector since 2003, agencies have adopted a number of different organizational reforms.

Some telecom regulators assumed new duties over competition — e.g., Ofcom in the UK. Non-European countries, such as Mexico, have followed this model as well.

Other European Member States have eliminated their telecom regulator altogether. In a useful case study, Roslyn Layton and Joe Kane outline Denmark’s approach, which includes disbanding its telecom regulator and passing the regulation of the sector to various executive agencies.

Meanwhile, the Netherlands and Spain each elected to merge its telecom regulator into its competition authority. New Zealand has similarly adopted this framework.

A few brief case studies will illuminate these and other reforms:

The Netherlands

In 2013, the Netherlands merged its telecom, consumer protection, and competition regulators to form the Netherlands Authority for Consumers and Markets (ACM). The ACM’s structure streamlines decision-making on pending industry mergers and acquisitions at the managerial level, eliminating the challenges arising from overlapping agency reviews and cross-agency coordination. The reform also unified key regulatory methodologies, such as creating a consistent calculation method for the weighted average cost of capital (WACC).

The Netherlands also claims that the ACM’s ex post approach is better able to adapt to “technological developments, dynamic markets, and market trends”:

The combination of strength and flexibility allows for a problem-based approach where the authority first engages in a dialogue with a particular market player in order to discuss market behaviour and ensure the well-functioning of the market.

The Netherlands also cited a significant reduction in the risk of regulatory capture as staff no longer remain in positions for long tenures but rather rotate on a project-by-project basis from a regulatory to a competition department or vice versa. Moving staff from team to team has also added value in terms of knowledge transfer among the staff. Finally, while combining the cultures of each regulator was less difficult than expected, the government reported that the largest cause of consternation in the process was agreeing on a single IT system for the ACM.

Spain

In 2013, Spain created the National Authority for Markets and Competition (CNMC), merging the National Competition Authority with several sectoral regulators, including the telecom regulator, to “guarantee cohesion between competition rulings and sectoral regulation.” In a report to the OECD, Spain stated that moving to the new model was necessary because of increasing competition and technological convergence in the sector (i.e., the ability of different technologies to offer substitute services, like fixed and wireless Internet access). It added that integrating its telecom regulator with its competition regulator ensures

a predictable business environment and legal certainty [i.e., removing “any threat of arbitrariness”] for the firms. These two conditions are indispensable for network industries — where huge investments are required — but also for the rest of the business community if investment and innovation are to be promoted.

As in the Netherlands, additional benefits include a significantly lower risk of regulatory capture, achieved by “preventing the alignment of the authority’s performance with sectoral interests.”

Denmark

In 2011, the Danish government unexpectedly dismantled the National IT and Telecom Agency and split its duties between four regulators. While the move came as a surprise, it did not engender national debate — vitriolic or otherwise — nor did it receive much attention in the press.

Since the dismantlement, scholars have observed less politicization of telecom regulation. And even though the competition authority didn’t take over telecom regulatory duties, the Ministry of Business and Growth implemented a light-touch regime, which, as Layton and Kane note, has helped to turn Denmark into one of the “top digital nations” according to the International Telecommunication Union’s Measuring the Information Society Report.

New Zealand

The New Zealand Commerce Commission (NZCC) is responsible for antitrust enforcement, economic regulation, consumer protection, and certain sectoral regulations, including telecommunications. By combining functions into a single regulator New Zealand asserts that it can more cost-effectively administer government operations. Combining regulatory functions also created spillover benefits as, for example, competition analysis is a prerequisite for sectoral regulation, and merger analysis in regulated sectors (like telecom) can leverage staff with detailed and valuable knowledge. Similar to the other countries, New Zealand also noted that the possibility of regulatory capture “by the industries they regulate is reduced in an agency that regulates multiple sectors or also has competition and consumer law functions.”

Advantages identified by other organizations

The GSMA, a mobile industry association, notes in its 2016 report, Resetting Competition Policy Frameworks for the Digital Ecosystem, that merging the sector regulator into the competition regulator also mitigates regulatory creep by eliminating the prodding required to induce a sector regulator to roll back regulation as technological evolution requires it, as well as by curbing the sector regulator’s temptation to expand its authority. After all, regulators exist to regulate.

At the same time, it’s worth noting that eliminating the telecom regulator has not gone off without a hitch in every case (most notably, in Spain). It’s important to understand, however, that the difficulties that have arisen in specific contexts aren’t endemic to the nature of competition versus telecom regulation. Nothing about these cases suggests that sector-specific economic regulation of telecom is inherently essential, or that replacing sector-specific oversight with antitrust oversight can’t work.

Contrasting approaches to net neutrality in the EU and New Zealand

Unfortunately, adopting a proper framework and implementing sweeping organizational reform is no guarantee of consistent decisionmaking in its implementation. Thus, in 2015, the European Parliament and Council of the EU went against two decades of telecommunications best practices by implementing ex ante net neutrality regulations without hard evidence of widespread harm and absent any competition analysis to justify the decision. The EU placed net neutrality under the universal service and users’ rights prong of the regulatory framework, and the resulting rules lack coherence and economic rigor.

BEREC’s net neutrality guidelines, meant to clarify the EU regulations, offered an ambiguous, multi-factored standard to evaluate ISP practices like free data programs. And, as mentioned in a previous TOTM post, whether or not they allow the practice, regulators (e.g., Norway’s Nkom and the UK’s Ofcom) have lamented the lack of regulatory certainty surrounding free data programs.

Notably, while BEREC has not provided clear guidance, a 2017 report commissioned by the EU’s Directorate-General for Competition weighing competitive benefits and harms of zero rating concluded “there appears to be little reason to believe that zero-rating gives rise to competition concerns.”

The report also provides an ex post framework for analyzing such deals in the context of a two-sided market by assessing a deal’s impact on competition between ISPs and between content and application providers.

The EU example demonstrates that where a telecom regulator perceives a novel problem but offers only ambiguous guidance, competition analysis, grounded in economic principles, brings a clear framework to bear.

In New Zealand, if a net neutrality issue were to arise, the ISP’s behavior would be examined in the context of existing antitrust law, including a determination of whether the ISP is exercising market power, and by the Telecommunications Commissioner, who monitors competition and the development of telecom markets for the NZCC.

Currently, there is broad consensus among stakeholders, including local content providers and networking equipment manufacturers, that there is no need for ex ante regulation of net neutrality. The wholesale ISP Chorus states, for example, that “in any event, the United States’ transparency and non-interference requirements [from the 2015 OIO] are arguably covered by the TCF Code disclosure rules and the provisions of the Commerce Act.”

The TCF Code is a mandatory code of practice establishing requirements concerning the information ISPs are required to disclose to consumers about their services. For example, ISPs must disclose any arrangements that prioritize certain traffic. Regarding traffic management, complaints of unfair contract terms — when not resolved by a process administered by an independent industry group — may be referred to the NZCC for an investigation in accordance with the Fair Trading Act. Under the Commerce Act, the NZCC can prohibit anticompetitive mergers, or practices that substantially lessen competition or that constitute price fixing or abuse of market power.

In addition, the NZCC has been active in patrolling vertical agreements between ISPs and content providers — precisely the types of agreements bemoaned by Title II net neutrality proponents.

In February 2017, the NZCC blocked Vodafone New Zealand’s proposed merger with Sky Network (combining Sky’s content and pay TV business with Vodafone’s broadband and mobile services) because the Commission concluded that the deal would substantially lessen competition in relevant broadband and mobile services markets. The NZCC was

unable to exclude the real chance that the merged entity would use its market power over premium live sports rights to effectively foreclose a substantial share of telecommunications customers from rival telecommunications services providers (TSPs), resulting in a substantial lessening of competition in broadband and mobile services markets.

Such foreclosure would result, the NZCC argued, from exclusive content and integrated bundles with features such as “zero rated Sky Sport viewing over mobile.” In addition, Vodafone would have the ability to prevent rivals from creating bundles using Sky Sport.

The substance of the Vodafone/Sky decision notwithstanding, the NZCC’s intervention is further evidence that antitrust isn’t a mere smokescreen for regulators to do nothing, and that regulators don’t need to design novel tools (such as the Internet conduct rule in the 2015 OIO) to regulate something neither they nor anyone else knows very much about: “not just the sprawling Internet of today, but also the unknowable Internet of tomorrow.” Instead, with ex post competition enforcement, regulators can allow dynamic innovation and competition to develop, and are perfectly capable of intervening — when and if identifiable harm emerges.

Conclusion

Unfortunately for Title II proponents — who have spent a decade at the FCC lobbying for net neutrality rules despite a lack of actionable evidence — the FCC is not acting without precedent by enabling the FTC’s antitrust and consumer protection enforcement to police conduct in Internet access markets. For two decades, the object of telecommunications regulation globally has been to transition away from sector-specific ex ante regulation to ex post competition review and enforcement. It’s high time the U.S. got on board.

As I explain in my new book, How to Regulate, sound regulation requires thinking like a doctor.  When addressing some “disease” that reduces social welfare, policymakers should catalog the available “remedies” for the problem, consider the implementation difficulties and “side effects” of each, and select the remedy that offers the greatest net benefit.

If we followed that approach in deciding what to do about the way Internet Service Providers (ISPs) manage traffic on their networks, we would conclude that FCC Chairman Ajit Pai is exactly right:  The FCC should reverse its order classifying ISPs as common carriers (Title II classification) and leave matters of non-neutral network management to antitrust, the residual regulator of practices that may injure competition.

Let’s walk through the analysis.

Diagnose the Disease.  The primary concern of net neutrality advocates is that ISPs will block some Internet content or will slow or degrade transmission from content providers who do not pay for a “fast lane.”  Of course, if an ISP’s non-neutral network management impairs the user experience, it will lose business; the vast majority of Americans have access to multiple ISPs, and competition is growing by the day, particularly as mobile broadband expands.

But an ISP might still play favorites, despite the threat of losing some subscribers, if it has a relationship with content providers.  Comcast, for example, could opt to speed up content from Hulu, which streams programming of Comcast’s NBC subsidiary, or might slow down content from Netflix, whose streaming video competes with Comcast’s own cable programming.  Comcast’s losses in the distribution market (from angry consumers switching ISPs) might be less than its gains in the content market (from reducing competition there).
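The tradeoff in the Comcast example can be summarized as a simple profitability condition (a stylized sketch with hypothetical numbers, not figures from the source):

```latex
% Stylized vertical-foreclosure condition (illustrative only):
% \Delta\pi_C = the ISP's gain in the content market from
%               disadvantaging a rival's content
% \Delta\pi_D = the profit lost in the distribution market as
%               dissatisfied subscribers switch ISPs
% Non-neutral network management is privately profitable when
\[
\Delta\pi_C > \Delta\pi_D
\]
% e.g., with made-up values, a \$100M content-market gain outweighs
% a \$40M loss from subscriber defections, so the practice pays off
% for the ISP even if it reduces output in the content market.
```

Whether such a practice is anticompetitive on net then turns on whether it actually reduces overall output in the content market, which is precisely the question antitrust’s vertical-restraints analysis asks.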

It seems, then, that the “disease” that might warrant a regulatory fix is an anticompetitive vertical restraint of trade: a business practice in one market (distribution) that could restrain trade in another market (content production) and thereby reduce overall output in that market.

Catalog the Available Remedies.  The statutory landscape provides at least three potential remedies for this disease.

The simplest approach would be to leave the matter to antitrust, which applies in the absence of more focused regulation.  In recent decades, courts have revised the standards governing vertical restraints of trade so that antitrust, which used to treat such restraints in a ham-fisted fashion, now does a pretty good job separating pro-consumer restraints from anti-consumer ones.

A second legally available approach would be to craft narrowly tailored rules precluding ISPs from blocking, degrading, or favoring particular Internet content.  The U.S. Court of Appeals for the D.C. Circuit held that Section 706 of the 1996 Telecommunications Act empowered the FCC to adopt targeted net neutrality rules, even if ISPs are not classified as common carriers.  The court insisted that the rules not treat ISPs as common carriers (if they are not officially classified as such), but it provided a road map for tailored net neutrality rules. The FCC pursued this targeted, rules-based approach until President Obama pushed for a third approach.

In November 2014, reeling from a shellacking in the midterm elections and hoping to shore up his base, President Obama posted a video calling on the Commission to assure net neutrality by reclassifying ISPs as common carriers.  Such reclassification would subject ISPs to Title II of the 1934 Communications Act, giving the FCC broad power to assure that their business practices are “just and reasonable.”  Prodded by the President, the nominally independent commissioners abandoned their targeted, rules-based approach and voted to regulate ISPs like utilities.  They then used their enhanced regulatory authority to impose rules forbidding the blocking, throttling, or paid prioritization of Internet content.

Assess the Remedies’ Limitations, Implementation Difficulties, and Side Effects.  The three legally available remedies — antitrust, tailored rules under Section 706, and broad oversight under Title II — offer different pros and cons, as I explained in How to Regulate:

The choice between antitrust and direct regulation generally (under either Section 706 or Title II) involves a tradeoff between flexibility and determinacy. Antitrust is flexible but somewhat indeterminate; it would condemn non-neutral network management practices that are likely to injure consumers, but it would permit such practices if they would lower costs, improve quality, or otherwise enhance consumer welfare. The direct regulatory approaches are rigid but clearer; they declare all instances of non-neutral network management to be illegal per se.

Determinacy and flexibility influence decision and error costs.  Because they are more determinate, ex ante rules should impose lower decision costs than would antitrust. But direct regulation’s inflexibility—automatic condemnation, no questions asked—will generate higher error costs. That’s because non-neutral network management is often good for end users. For example, speeding up the transmission of content for which delivery lags are particularly detrimental to the end-user experience (e.g., an Internet telephone call, streaming video) at the expense of content that is less lag-sensitive (e.g., digital photographs downloaded from a photo-sharing website) can create a net consumer benefit and should probably be allowed. A per se rule against non-neutral network management would therefore err fairly frequently. Antitrust’s flexible approach, informed by a century of economic learning on the output effects of contractual restraints between vertically related firms (like content producers and distributors), would probably generate lower error costs.

Although both antitrust and direct regulation offer advantages vis-à-vis each other, this isn’t simply a wash. The error cost advantage antitrust holds over direct regulation likely swamps direct regulation’s decision cost advantage. Extensive experience with vertical restraints on distribution has shown that they are usually good for consumers. For that reason, antitrust courts in recent decades have discarded their old per se rules against such practices—rules that resemble the FCC’s direct regulatory approach—in favor of structured rules of reason that assess liability based on specific features of the market and restraint at issue. While these rules of reason (standards, really) may be less determinate than the old, error-prone per se rules, they are not indeterminate. By relying on past precedents and the overarching principle that legality turns on consumer welfare effects, business planners and adjudicators ought to be able to determine fairly easily whether a non-neutral network management practice passes muster. Indeed, the fact that the FCC has uncovered only four instances of anticompetitive network management over the commercial Internet’s entire history—a period in which antitrust, but not direct regulation, has governed ISPs—suggests that business planners are capable of determining what behavior is off-limits. Direct regulation’s per se rule against non-neutral network management is thus likely to add error costs that exceed any reduction in decision costs. It is probably not the remedy that would be selected under this book’s recommended approach.

In any event, direct regulation under Title II, the currently prevailing approach, is certainly not the optimal way to address potentially anticompetitive instances of non-neutral network management by ISPs. Whereas any ex ante regulation of network management will confront the familiar knowledge problem, opting for direct regulation under Title II, rather than the more cabined approach under Section 706, adds adverse public choice concerns to the mix.

As explained earlier, reclassifying ISPs to bring them under Title II empowers the FCC to scrutinize the “justice” and “reasonableness” of nearly every aspect of every arrangement between content providers, ISPs, and consumers. Granted, the current commissioners have pledged not to exercise their Title II authority beyond mandating network neutrality, but public choice insights would suggest that this promised forbearance is unlikely to endure. FCC officials, who remain self-interest maximizers even when acting in their official capacities, benefit from expanding their regulatory turf; they gain increased power and prestige, larger budgets to manage, a greater ability to “make or break” businesses, and thus more opportunity to take actions that may enhance their future career opportunities. They will therefore face constant temptation to exercise the Title II authority that they have committed, as of now, to leave fallow. Regulated businesses, knowing that FCC decisions are key to their success, will expend significant resources lobbying for outcomes that benefit them or impair their rivals. If they don’t get what they want because of the commissioners’ voluntary forbearance, they may bring legal challenges asserting that the Commission has failed to assure just and reasonable practices as Title II demands. Many of the decisions at issue will involve the familiar “concentrated benefits/diffused costs” dynamic that tends to result in underrepresentation by those who are adversely affected by a contemplated decision. Taken together, these considerations make it unlikely that the current commissioners’ promised restraint will endure. Reclassification of ISPs so that they are subject to Title II regulation will probably lead to additional constraints on edge providers and ISPs.

It seems, then, that mandating net neutrality under Title II of the 1934 Communications Act is the least desirable of the three statutorily available approaches to addressing anticompetitive network management practices. The Title II approach combines the inflexibility and ensuing error costs of the Section 706 direct regulation approach with the indeterminacy and higher decision costs of an antitrust approach. Indeed, the indeterminacy under Title II is significantly greater than that under antitrust because the “just and reasonable” requirements of the Communications Act, unlike antitrust’s reasonableness requirements (no unreasonable restraint of trade, no unreasonably exclusionary conduct), are not constrained by the consumer welfare principle. Whereas antitrust always protects consumers, not competitors, the FCC may well decide that business practices in the Internet space are unjust or unreasonable solely because they make things harder for the perpetrator’s rivals. Business planners are thus really “at sea” when it comes to assessing the legality of novel practices.

All this implies that Internet businesses regulated by Title II need to court the FCC’s favor, that FCC officials have more ability than ever to manipulate government power to private ends, that organized interest groups are well-poised to secure their preferences when the costs are great but widely dispersed, and that the regulators’ dictated outcomes—immune from market pressures reflecting consumers’ preferences—are less likely to maximize net social welfare. In opting for a Title II solution to what is essentially a market power problem, the powers that be gave short shrift to an antitrust approach, even though there was no natural monopoly justification for direct regulation. They paid little heed to the adverse consequences likely to result from rigid per se rules adopted under a highly discretionary (and politically manipulable) standard. They should have gone back to basics, assessing the disease to be remedied (market power), the full range of available remedies (including antitrust), and the potential side effects of each. In other words, they could’ve used this book.

How to Regulate‘s full discussion of net neutrality and Title II is here:  Net Neutrality Discussion in How to Regulate.

I remain deeply skeptical of any antitrust challenge to the AT&T/Time Warner merger.  Vertical mergers like this one between a content producer and a distributor are usually efficiency-enhancing.  The theories of anticompetitive harm here rely on a number of implausible assumptions — e.g., that the combined company would raise content prices (currently set at profit-maximizing levels so that any price increase would reduce profits on content) in order to impair rivals in the distribution market and enhance profits there.  So I’m troubled that DOJ seems poised to challenge the merger.

I am, however, heartened — I think — by a speech Assistant Attorney General Makan Delrahim recently delivered at the ABA’s Antitrust Fall Forum. The crux of the speech, which is worth reading in its entirety, was that behavioral remedies — effectively having the government regulate a merged company’s day-to-day business decisions — are almost always inappropriate in merger challenges.

That used to be DOJ’s official position.  The Antitrust Division’s 2004 Remedies Guide proclaimed that “[s]tructural remedies are preferred to conduct remedies in merger cases because they are relatively clean and certain, and generally avoid costly government entanglement in the market.”

During the Obama administration, DOJ changed its tune.  Its 2011 Remedies Guide removed the statement quoted above as well as an assertion that behavioral remedies would be appropriate only in limited circumstances.  The 2011 Guide instead remained neutral on the choice between structural and conduct remedies, explaining that “[i]n certain factual circumstances, structural relief may be the best choice to preserve competition.  In a different set of circumstances, behavioral relief may be the best choice.”  The 2011 Guide also deleted the older Guide’s discussion of the limitations of conduct remedies.

Not surprisingly in light of the altered guidance, several of the Obama DOJ’s merger challenges—Ticketmaster/Live Nation, Comcast/NBC Universal, and Google/ITA Software, for example—resulted in settlements involving detailed and significant regulation of the combined firm’s conduct.  The settlements included mandatory licensing requirements, price regulation, compulsory arbitration of pricing disputes with recipients of mandated licenses, obligations to continue to develop and support certain products, the establishment of informational firewalls between divisions of the merged companies, prohibitions on price and service discrimination among customers, and various reporting requirements.

Settlements of this sort move antitrust a long way from the state of affairs described by then-professor Stephen Breyer, who wrote in his classic book Regulation and Its Reform:

[I]n principle the antitrust laws differ from classical regulation both in their aims and in their methods.  The antitrust laws seek to create or maintain the conditions of a competitive marketplace rather than replicate the results of competition or correct for the defects of competitive markets.  In doing so, they act negatively, through a few highly general provisions prohibiting certain forms of private conduct.  They do not affirmatively order firms to behave in specified ways; for the most part, they tell private firms what not to do . . . .  Only rarely do the antitrust enforcement agencies create the detailed web of affirmative legal obligations that characterizes classical regulation.

I am pleased to see Delrahim signaling a move away from behavioral remedies.  As Alden Abbott and I explained in our article, Recognizing the Limits of Antitrust: The Roberts Court Versus the Enforcement Agencies,

[C]onduct remedies present at least four difficulties from a limits of antitrust perspective.  First, they may thwart procompetitive conduct by the regulated firm.  When it comes to regulating how a firm interacts with its customers and rivals, it is extremely difficult to craft rules that will ban the bad without also precluding the good.  For example, requiring a merged firm to charge all customers the same price, a commonly imposed conduct remedy, may make it hard for the firm to serve clients who impose higher costs and may thwart price discrimination that actually enhances overall market output.  Second, conduct remedies entail significant direct implementation costs.  They divert enforcers’ attention away from ferreting out anticompetitive conduct elsewhere in the economy and require managers of regulated firms to focus on appeasing regulators rather than on meeting their customers’ desires.  Third, conduct remedies tend to grow stale.  Because competitive conditions are constantly changing, a conduct remedy that seems sensible when initially crafted may soon turn out to preclude beneficial business behavior.  Finally, by transforming antitrust enforcers into regulatory agencies, conduct remedies invite wasteful lobbying and, ultimately, destructive agency capture.

The first three of these difficulties are really aspects of F.A. Hayek’s famous knowledge problem.  I was thus particularly heartened by this part of Delrahim’s speech:

The economic liberty approach to industrial organization is also good economic policy.  F. A. Hayek won the 1974 Nobel Prize in economics for his work on the problems of central planning and the benefits of a decentralized free market system.  The price system of the free market, he explained, operates as a mechanism for communicating disaggregated information.  “[T]he ultimate decisions must be left to the people who are familiar with the[] circumstances.”  Regulation, I humbly submit in contrast, involves an arbiter unfamiliar with the circumstances that cannot possibly account for the wealth of information and dynamism that the free market incorporates.

So why the reservation in my enthusiasm?  Because eschewing conduct remedies may result in barring procompetitive mergers that might have been allowed with behavioral restraints.  If antitrust enforcers are going to avoid conduct remedies on Hayekian and Public Choice grounds, then they should challenge a merger only if they are pretty darn sure it presents a substantial threat to competition.

Delrahim appears to understand the high stakes of a “no behavioral remedies” approach to merger review:  “To be crystal clear, [having a strong presumption against conduct remedies] cuts both ways—if a merger is illegal, we should only accept a clean and complete solution, but if the merger is legal we should not impose behavioral conditions just because we can do so to expand our power and because the merging parties are willing to agree to get their merger through.”

The big question is whether the Trump DOJ will refrain from challenging mergers that do not pose a clear and significant threat to competition and consumer welfare.  On that matter, the jury is out.

Last week the editorial board of the Washington Post penned an excellent editorial responding to the European Commission’s announcement of its decision in its Google Shopping investigation. Here’s the key language from the editorial:

Whether the demise of any of [the complaining comparison shopping sites] is specifically traceable to Google, however, is not so clear. Also unclear is the aggregate harm from Google’s practices to consumers, as opposed to the unlucky companies. Birkenstock-seekers may well prefer to see a Google-generated list of vendors first, instead of clicking around to other sites…. Those who aren’t happy anyway have other options. Indeed, the rise of comparison shopping on giants such as Amazon and eBay makes concerns that Google might exercise untrammeled power over e-commerce seem, well, a bit dated…. Who knows? In a few years we might be talking about how Facebook leveraged its 2 billion users to disrupt the whole space.

That’s actually a pretty thorough, if succinct, summary of the basic problems with the Commission’s case (based on its PR and Factsheet, at least; it hasn’t released the full decision yet).

I’ll have more to say on the decision in due course, but for now I want to elaborate on two of the points raised by the WaPo editorial board, both in service of its crucial rejoinder to the Commission that “Also unclear is the aggregate harm from Google’s practices to consumers, as opposed to the unlucky companies.”

First, the WaPo editorial board points out that:

Birkenstock-seekers may well prefer to see a Google-generated list of vendors first, instead of clicking around to other sites.

It is undoubtedly true that users “may well prefer to see a Google-generated list of vendors first.” It’s also crucial to understanding the changes in Google’s search results page that have given rise to the current raft of complaints.

As I noted in a Wall Street Journal op-ed two years ago:

It’s a mistake to consider “general search” and “comparison shopping” or “product search” to be distinct markets.

From the moment it was technologically feasible to do so, Google has been adapting its traditional search results—that familiar but long since vanished page of 10 blue links—to offer more specialized answers to users’ queries. Product search, which is what is at issue in the EU complaint, is the next iteration in this trend.

Internet users today seek information from myriad sources: Informational sites (Wikipedia and the Internet Movie Database); review sites (Yelp and TripAdvisor); retail sites (Amazon and eBay); and social-media sites (Facebook and Twitter). What do these sites have in common? They prioritize certain types of data over others to improve the relevance of the information they provide.

“Prioritization” of Google’s own shopping results, however, is the core problem for the Commission:

Google has systematically given prominent placement to its own comparison shopping service: when a consumer enters a query into the Google search engine in relation to which Google’s comparison shopping service wants to show results, these are displayed at or near the top of the search results. (Emphasis in original).

But this sort of prioritization is the norm for all search, social media, e-commerce and similar platforms. And this shouldn’t be a surprise: The value of these platforms to the user is dependent upon their ability to sort the wheat from the chaff of the now immense amount of information coursing about the Web.

As my colleagues and I noted in a paper responding to a methodologically questionable report by Tim Wu and Yelp leveling analogous “search bias” charges in the context of local search results:

Google is a vertically integrated company that offers general search, but also a host of other products…. With its well-developed algorithm and wide range of products, it is hardly surprising that Google can provide not only direct answers to factual questions, but also a wide range of its own products and services that meet users’ needs. If consumers choose Google not randomly, but precisely because they seek to take advantage of the direct answers and other options that Google can provide, then removing the sort of “bias” alleged by [complainants] would affirmatively hurt, not help, these users. (Emphasis added).

And as Josh Wright noted in an earlier paper responding to yet another set of such “search bias” charges (in that case leveled in a similarly methodologically questionable report by Benjamin Edelman and Benjamin Lockwood):

[I]t is critical to recognize that bias alone is not evidence of competitive harm and it must be evaluated in the appropriate antitrust economic context of competition and consumers, rather than individual competitors and websites. Edelman & Lockwood’s analysis provides a useful starting point for describing how search engines differ in their referrals to their own content. However, it is not useful from an antitrust policy perspective because it erroneously—and contrary to economic theory and evidence—presumes natural and procompetitive product differentiation in search rankings to be inherently harmful. (Emphasis added).

We’ll have to see what kind of analysis the Commission relies upon in its decision to reach its conclusion that prioritization is an antitrust problem, but there is reason to be skeptical that it will turn out to be compelling. The Commission states in its PR that:

The evidence shows that consumers click far more often on results that are more visible, i.e. the results appearing higher up in Google’s search results. Even on a desktop, the ten highest-ranking generic search results on page 1 together generally receive approximately 95% of all clicks on generic search results (with the top result receiving about 35% of all the clicks). The first result on page 2 of Google’s generic search results receives only about 1% of all clicks. This cannot just be explained by the fact that the first result is more relevant, because evidence also shows that moving the first result to the third rank leads to a reduction in the number of clicks by about 50%. The effects on mobile devices are even more pronounced given the much smaller screen size.

This means that by giving prominent placement only to its own comparison shopping service and by demoting competitors, Google has given its own comparison shopping service a significant advantage compared to rivals. (Emphasis added).

Whatever truth there is in the characterization that placement is more important than relevance in influencing user behavior, the evidence cited by the Commission to demonstrate that doesn’t seem applicable to what’s happening on Google’s search results page now.

Most crucially, the evidence offered by the Commission refers only to how placement affects clicks on “generic search results” and glosses over the fact that the “prominent placement” of Google’s “results” is not only a difference in position but also in the type of result offered.

Google Shopping results (like many of its other “vertical results” and direct answers) are very different than the 10 blue links of old. These “universal search” results are, for one thing, actual answers rather than merely links to other sites. They are also more visually rich and attractively and clearly displayed.

Ironically, Tim Wu and Yelp use the claim that users click less often on Google’s universal search results to support their contention that increased relevance doesn’t explain Google’s prioritization of its own content. Yet, as we note in our response to their study:

[I]f a consumer is using a search engine in order to find a direct answer to a query rather than a link to another site to answer it, click-through would actually represent a decrease in consumer welfare, not an increase.

In fact, the study fails to incorporate this dynamic even though it is precisely what the authors claim the study is measuring.

Further, as the WaPo editorial intimates, these universal search results (including Google Shopping results) are quite plausibly more valuable to users. As even Tim Wu and Yelp note:

No one truly disagrees that universal search, in concept, can be an important innovation that can serve consumers.

Google sees it exactly this way, of course. Here’s Tim Wu and Yelp again:

According to Google, a principal difference between the earlier cases and its current conduct is that universal search represents a pro-competitive, user-serving innovation. By deploying universal search, Google argues, it has made search better. As Eric Schmidt argues, “if we know the answer it is better for us to answer that question so [the user] doesn’t have to click anywhere, and in that sense we… use data sources that are our own because we can’t engineer it any other way.”

Of course, in this case, one would expect fewer clicks to correlate with higher value to users — precisely the opposite of the claim made by Tim Wu and Yelp, which is the surest sign that their study is faulty.

But the Commission, at least according to the evidence cited in its PR, doesn’t even seem to measure the relative value of the very different presentations of information at all, instead resting on assertions rooted in the irrelevant difference in user propensity to click on generic (10 blue links) search results depending on placement.

Add to this Pinar Akman’s important point that Google Shopping “results” aren’t necessarily search results at all, but paid advertising:

[O]nce one appreciates the fact that Google’s shopping results are simply ads for products and Google treats all ads with the same ad-relevant algorithm and all organic results with the same organic-relevant algorithm, the Commission’s order becomes impossible to comprehend. Is the Commission imposing on Google a duty to treat non-sponsored results in the same way that it treats sponsored results? If so, does this not provide an unfair advantage to comparison shopping sites over, for example, Google’s advertising partners as well as over Amazon, eBay, various retailers, etc…?

Randy Picker also picks up on this point:

But those Google shopping boxes are ads, Picker told me. “I can’t imagine what they’re thinking,” he said. “Google is in the advertising business. That’s how it makes its money. It has no obligation to put other people’s ads on its website.”

The bottom line here is that the WaPo editorial board does a better job characterizing the actual, relevant market dynamics in a single sentence than the Commission seems to have done in its lengthy releases summarizing its decision following seven full years of investigation.

The second point made by the WaPo editorial board to which I want to draw attention is equally important:

Those who aren’t happy anyway have other options. Indeed, the rise of comparison shopping on giants such as Amazon and eBay makes concerns that Google might exercise untrammeled power over e-commerce seem, well, a bit dated…. Who knows? In a few years we might be talking about how Facebook leveraged its 2 billion users to disrupt the whole space.

The Commission dismisses this argument in its Factsheet:

The Commission Decision concerns the effect of Google’s practices on comparison shopping markets. These offer a different service to merchant platforms, such as Amazon and eBay. Comparison shopping services offer a tool for consumers to compare products and prices online and find deals from online retailers of all types. By contrast, they do not offer the possibility for products to be bought on their site, which is precisely the aim of merchant platforms. Google’s own commercial behaviour reflects these differences – merchant platforms are eligible to appear in Google Shopping whereas rival comparison shopping services are not.

But the reality is that “comparison shopping,” just like “general search,” is just one technology among many for serving information and ads to consumers online. Defining the relevant market or limiting the definition of competition in terms of the particular mechanism that Google (or Foundem, or Amazon, or Facebook…) happens to use doesn’t reflect the extent of substitutability between these different mechanisms.

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive. The same goes for comparison shopping.

And the fact that Amazon and eBay “offer the possibility for products to be bought on their site” doesn’t take away from the fact that they also “offer a tool for consumers to compare products and prices online and find deals from online retailers of all types.” Not only do these sites contain enormous amounts of valuable (and well-presented) information about products, including product comparisons and consumer reviews, but they also actually offer comparisons among retailers. In fact, fifty percent of the items sold through Amazon’s platform, for example, are sold by third-party retailers — the same sort of retailers that might also show up on a comparison shopping site.

More importantly, though, as the WaPo editorial rightly notes, “[t]hose who aren’t happy anyway have other options.” Google just isn’t the indispensable gateway to the Internet (and definitely not to shopping on the Internet) that the Commission seems to think.

Today over half of product searches in the US start on Amazon. The majority of web page referrals come from Facebook. Yelp’s most engaged users now access it via its app (which has seen more than 3x growth in the past five years). And a staggering 40 percent of mobile browsing on both Android and iOS now takes place inside the Facebook app.

Then there are “closed” platforms like the iTunes store and innumerable other apps that handle copious search traffic (including shopping-related traffic) but also don’t figure in the Commission’s analysis, apparently.

In fact, billions of users reach millions of companies every day through direct browser navigation, social media, apps, email links, review sites, blogs, and countless other means — all without once touching Google.com. So-called “dark social” interactions (email, text messages, and IMs) drive huge amounts of some of the most valuable traffic on the Internet, in fact.

All of this, in turn, has led to a competitive scramble to roll out completely new technologies to meet consumers’ informational (and merchants’ advertising) needs. The already-arriving swarm of VR, chatbots, digital assistants, smart-home devices, and more will offer even more interfaces besides Google through which consumers can reach their favorite online destinations.

The point is this: Google’s competitors, complaining that the world is evolving around them, don’t need to rely on Google. That they may choose to do so does not saddle Google with an obligation to ensure that they can always do so.

Antitrust laws — in Europe, no less than in the US — don’t require Google or any other firm to make life easier for competitors. That’s especially true when doing so would come at the cost of consumer-welfare-enhancing innovations. The Commission doesn’t seem to have grasped this fundamental point, however.

The WaPo editorial board gets it, though:

The immense size and power of all Internet giants are a legitimate focus for the antitrust authorities on both sides of the Atlantic. Brussels vs. Google, however, seems to be a case of punishment without crime.

Regardless of the merits and soundness (or lack thereof) of this week’s European Commission decision in the Google Shopping case — which cannot be assessed until the text of the decision is available — two comments really struck me during the press conference.

First, it was said that Google’s conduct had essentially reduced innovation. If I heard correctly, this is a formidable statement. In 2016, another official EU service published stats that described Alphabet as increasing its R&D by 22% and ranked it as the world’s 4th top R&D investor. Sure, it can always be better. And sure, this does not excuse everything. But still. The press conference language on incentives to innovate was a bit of an oversell, to say the least.

Second, the Commission views this decision as a “precedent” or a “framework” that will inform the way dominant Internet platforms should display, intermediate and market their services and those of their competitors. This may fuel additional complaints by other vertical search rivals against (i) Google, in relation to other product lines, and (ii) other large platform players.

Beyond this, the Commission’s approach raises a gazillion questions of law and economics. Pending the disclosure of the economic evidence in the published decision, let me share some thoughts on a few (arbitrarily) selected legal issues.

First, the Commission has drawn a lesson from the Microsoft remedy quagmire: it refrains from using a trustee to ensure compliance with the decision. This had been a bone of contention in the 2007 Microsoft appeal. Readers will recall that the Commission had required Microsoft to appoint a monitoring trustee, who was supposed to advise on possible infringements in the implementation of the decision. On appeal, the Court eventually held that the Commission was solely responsible for this, and could not delegate those powers. Sure, the Commission could “retai[n] its own external expert to provide advice when it investigates the implementation of the remedies.” But no more than that.

Second, we learn that the Commission is no longer in the business of software design. Recall the failed untying of WMP and Windows — Windows Naked sold only 11,787 copies, likely bought by tech bootleggers willing to acquire the first piece of software ever designed by antitrust officials — or the browser “Choice Screen” compliance saga, which eventually culminated in a €561 million fine. None of that can be found here. The Commission leaves remedial design to the abstract concept of “equal treatment”.[1] This, certainly, is a (relatively) commendable approach, and one that could inspire remedies in other unilateral conduct cases, in particular exploitative conduct cases, where pricing remedies are at once costly, impractical, and ultimately inefficient.

On the other hand, readers will also not fail to see the corollary implication of “equal treatment”: search neutrality could actually cut both ways, and lead to a lawful degradation in consumer welfare if Google were ever to decide to abandon rich format displays for both its own shopping services and those of rivals.

Third, neither big data nor algorithmic design is directly vilified in the case (“The Commission Decision does not object to the design of Google’s generic search algorithms or to demotions as such, nor to the way that Google displays or organises its search results pages”). Rather, the Commission objects to the selective application of Google’s generic search algorithms to its own products. This is an interesting, and subtle, clarification given all the coverage that this topic has attracted in recent antitrust literature. We are in fact very close to a run-of-the-mill claim of disguised market manipulation, not causally related to data or algorithmic technology.

Fourth, Google said it contemplated a possible appeal of the decision. Now, here’s a challenging question: can an antitrust defendant effectively exercise its right to judicial review of an administrative agency (and more generally its rights of defense), when it operates under the threat of antitrust sanctions in ongoing parallel cases investigated by the same agency (i.e., the antitrust inquiries related to Android and Ads)? This question cuts further than the Google Shopping case. Say firm A contemplates a merger with firm B in market X, while it is at the same time subject to antitrust investigations in market Z. And assume that X and Z are neither substitutes nor complements so there is little competitive relationship between both products. Can the Commission leverage ongoing antitrust investigations in market Z to extract merger concessions in market X? Perhaps more to the point, can the firm interact with the Commission as if the investigations are completely distinct, or does it have to play a more nuanced game and consider the ramifications of its interactions with the Commission in both markets?

Fifth, as to the odds of a possible appeal, I don’t believe that arguments on the economic evidence or legal theory of liability will ever be successful before the General Court of the EU. The law and doctrine in unilateral conduct cases are disturbingly — and almost irrationally — severe. As I have noted elsewhere, the bottom line in the EU case-law on unilateral conduct is to treat the genuine requirement of “harm to competition” as a rhetorical question, not an empirical one. In EU unilateral conduct law, exclusion of any and every firm is a per se concern, regardless of evidence of efficiency, entry or rivalry.

In turn, I tend to opine that Google has a stronger game from a procedural standpoint, having been left with (i) the expectation of a settlement (it played ball three times by making proposals); (ii) a corollary expectation of the absence of a fine (settlement discussions are not appropriate for cases that could end with fines); and (iii) seven long years under an investigatory cloud. We know from the past that EU judges like procedural issues, but are comparatively less fond of debating the substance of the law in unilateral conduct cases. This case could thus be a test case in terms of setting boundaries on how freely the Commission can U-turn a case (the Commissioner said “take the case forward in a different way”).

Last October 26, Heritage scholar James Gattuso and I published an essay in The Daily Signal, explaining that the proposed vertical merger (a merger between firms at different stages of the distribution chain) of AT&T and Time Warner (currently undergoing Justice Department antitrust review) may have the potential to bestow substantial benefits on consumers – and that congressional calls to block it, uninformed by fact-based economic analysis, could prove detrimental to consumer welfare.  We explained:

[E]ven though the proposed union of AT&T and Time Warner is not guaranteed to benefit shareholders or consumers, that is no reason for the government to block it. Absent a strong showing of likely harm to the competitive process (which does not appear to be the case here), the government has no business interfering in corporate acquisitions.  Market forces should be allowed to sort out the welfare-enhancing transactional sheep from the unprofitable goats.  Shareholders are in a position to “vote with their feet” and reward or punish a merged company, based on information generated in the marketplace. 

[M]arket transactors are better placed and better incentivized than bureaucrats to uncover and apply the information needed to yield an efficient allocation of resources.

In short, government meddling in mergers in the absence of likely market failure (and of reason to believe that the government’s actions will yield results superior to those of an imperfect market) is a recipe for a diminution in—not an improvement in—consumer welfare.

Furthermore, by arbitrarily intervening in proposed mergers that are not anti-competitive, government disincentivizes firms from acting boldly to seek out new opportunities to create wealth and enhance the welfare of consumers.

What’s worse, the knowledge that government may intervene in mergers without regard to their likely competitive effects will prompt wasteful expenditures by special interests opposing particular transactions, causing a further diminution in economic welfare.

Unfortunately, the congressional critics of this deal are still out there, louder than ever, and, once again, need to be reminded about the dangers of unwarranted antitrust interventions – and the problem with “big is bad” rhetoric.  Scalia Law School Professor (and former Federal Trade Commissioner) Joshua Wright ably deconstructs the problems with the latest Capitol Hill  criticisms of this proposed merger, set forth in a June 21 letter to the Justice Department from eleven U.S. Senators (including Elizabeth Warren, Al Franken, and Bernie Sanders).  As Professor Wright explains in a June 26 article published by The Hill:

Over the past several decades, there has been resounding and bipartisan agreement — amongst mainstream antitrust economists, practitioners, enforcement agencies, and even politicians — that while mergers between vertically aligned companies, like AT&T and Time Warner, can in rare circumstances harm competition, they usually make consumers better off. The opposition letter is a call to disrupt that consensus with a “new” view that vertical mergers are presumptively a bad deal for consumers and violate the antitrust laws.

The call for an antitrust revolution with respect to vertical mergers should not go unanswered. Revolution actually overstates things. The “new” antitrust is really a thinly veiled attempt to return to the antitrust approach of the 1960s where everything “big” was bad and virtually all deals, vertical ones included, violated the antitrust laws. That approach gained traction in part because it is easy to develop supporting rhetoric that is inflammatory and easily digestible. . . .

[However,] [a]s a matter of fact, the overwhelming weight of economic analysis and empirical evidence serves as a much-needed dose of cold water for the fiery rhetoric in the opposition letter and the commonly held intuition that all mergers between big firms make consumers worse off. . . .

[C]onsider the conclusion of a widely cited summary of dozens of studies authored by Francine LaFontaine and Margaret Slade, two very well respected industrial organization economists (one who served as director of the U.S. Federal Trade Commission’s bureau of economics during the Obama administration). It found that “consumers are often worse off when governments require vertical separation in markets where firms would have chosen otherwise.” Or consider the conclusion of four former enforcement agency economists reviewing the same body of evidence that “there is a paucity of support for the proposition that vertical restraints [or] vertical integration are likely to harm consumers.”

This evidence by no means suggests vertical mergers are incapable of harming consumers or violating the antitrust laws. The data do suggest an evidence-based antitrust enforcement approach aimed at protecting consumers will not presume that they are harmful without careful, rigorous, and objective analysis. Antitrust analysis is — or at least should be — a fact-specific exercise. Weighing concrete economic evidence is critical when assessing mergers, particularly when assessing vertical mergers where procompetitive virtues are almost always present. . . .

The economic and legal framework for analyzing vertical mergers is well understood by the U.S. Department of Justice’s antitrust division and its staff of expert lawyers and economists. The antitrust division has not hesitated to determine an appropriate remedy in the rare instance where a vertical merger has been found likely to harm competition. The [Senators’] opposition letter is correct that a careful and rigorous analysis of the proposed acquisition is called for — as is the case with all mergers. That review process should, however, be guided by careful and objective analysis and not the fiery political rhetoric [of the Senators’ letter].

Under the leadership of soon-to-be U.S. Assistant Attorney General Makan Delrahim, an experienced antitrust lawyer and antitrust enforcement agency veteran, the Justice Department antitrust division staff will be empowered to conduct precisely that type of analysis and reach a decision that best protects competition and consumers.

Professor Wright’s excellent essay merits being read in full.

Today the International Center for Law & Economics (ICLE) Antitrust and Consumer Protection Research Program released a new white paper by Geoffrey A. Manne and Allen Gibby entitled:

A Brief Assessment of the Procompetitive Effects of Organizational Restructuring in the Ag-Biotech Industry

Over the past two decades, rapid technological innovation has transformed the industrial organization of the ag-biotech industry. These developments have contributed to an impressive increase in crop yields, a dramatic reduction in chemical pesticide use, and a substantial increase in farm profitability.

One of the most striking characteristics of this organizational shift has been a steady increase in consolidation. The recent announcements of mergers between Dow and DuPont, ChemChina and Syngenta, and Bayer and Monsanto suggest that these trends are continuing in response to new market conditions and a marked uptick in scientific and technological advances.

Regulators and industry watchers are often concerned that increased consolidation will lead to reduced innovation, and a greater incentive and ability for the largest firms to foreclose competition and raise prices. But ICLE’s examination of the underlying competitive dynamics in the ag-biotech industry suggests that such concerns are likely unfounded.

In fact, R&D spending within the seeds and traits industry increased nearly 773% between 1995 and 2015 (from roughly $507 million to $4.4 billion), while the combined market share of the six largest companies in the segment increased by more than 550% (from about 10% to over 65%) during the same period.

Firms today are consolidating in order to innovate and remain competitive in an industry replete with new entrants and rapidly evolving technological and scientific developments.

According to ICLE’s analysis, critics have unduly focused on the potential harms from increased integration, without properly accounting for the potential procompetitive effects. Our brief white paper highlights these benefits and suggests that a more nuanced and restrained approach to enforcement is warranted.

Our analysis suggests that, as in past periods of consolidation, the industry is well positioned to see an increase in innovation as these new firms unite complementary expertise to pursue more efficient and effective research and development. They should also be better able to help finance, integrate, and coordinate development of the latest scientific and technological advances — particularly in rapidly growing, data-driven “digital farming” — throughout the industry.

Download the paper here.

And for more on the topic, revisit TOTM’s recent blog symposium, “Agricultural and Biotech Mergers: Implications for Antitrust Law and Economics in Innovative Industries,” here.

According to Cory Doctorow over at Boing Boing, Tim Wu has written an open letter to W3C Chairman Sir Timothy Berners-Lee, expressing concern about a proposal to include Encrypted Media Extensions (EME) as part of the W3C standards. W3C has a helpful description of EME:

Encrypted Media Extensions (EME) is currently a draft specification… [for] an Application Programming Interface (API) that enables Web applications to interact with content protection systems to allow playback of encrypted audio and video on the Web. The EME specification enables communication between Web browsers and digital rights management (DRM) agent software to allow HTML5 video play back of DRM-wrapped content such as streaming video services without third-party media plugins. This specification does not create nor impose a content protection or Digital Rights Management system. Rather, it defines a common API that may be used to discover, select and interact with such systems as well as with simpler content encryption systems.

Wu’s letter expresses his concern about hardwiring DRM into the technical standards supporting an open internet. He writes:

I wanted to write to you and respectfully ask you to seriously consider extending a protective covenant to legitimate circumventers who have cause to bypass EME, should it emerge as a W3C standard.

Wu asserts that this “protective covenant” is needed because, without it, EME will confer too much power on internet “chokepoints”:

The question is whether the W3C standard with an embedded DRM standard, EME, becomes a tool for suppressing competition in ways not expected…. Control of chokepoints has always and will always be a fundamental challenge facing the Internet as we both know… It is not hard to recall how close Microsoft came, in the late 1990s and early 2000s, to gaining de facto control over the future of the web (and, frankly, the future) in its effort to gain an unsupervised monopoly over the browser market.

But conflating the Microsoft case with a relatively simple browser feature meant to enable all content providers to use any third-party DRM to secure their content — in other words, to enhance interoperability — is beyond the pale. The Microsoft case, as Wu would have it, was about one firm controlling, far and away, the largest share of desktop computing installations, a position that Wu and his fellow travelers believed gave Microsoft an unreasonable leg up in forcing usage of Internet Explorer to the exclusion of Netscape. With EME, the W3C is not maneuvering the standard so that a single DRM provider comes to protect all content on the web, or could even hope to do so. EME enables content distributors to stream content through browsers using their own DRM backend. There is simply nothing in the standard that enables a firm to dominate content distribution or control huge swaths of the Internet to the exclusion of competitors.

Unless, of course, you just don’t like DRM and you think that any technology that enables content producers to impose restrictions on consumption of media creates a “chokepoint.” But, again, this position is borderline nonsense. Such a “chokepoint” is no more restrictive than just going to Netflix’s app (or Hulu’s, or HBO’s, or Xfinity’s, or…) and relying on its technology. And while it is no more onerous than visiting Netflix’s app, it creates greater security on the open web, such that copyright owners don’t need to resort to proprietary technologies and apps for distribution. And, more fundamentally, Wu’s position ignores the role that access and usage controls are playing in creating online markets through diversified product offerings.

Wu appears to believe, or would have his readers believe, that W3C is considering the adoption of a mandatory standard that would modify core aspects of the network architecture, and that therefore presents novel challenges to the operation of the internet. But this is wrong in two key respects:

  1. Except in the extremely limited manner described below by the W3C, the EME extension does not contain mandates, and is designed only to simplify the user experience in accessing content that would otherwise require plug-ins; and
  2. These extensions are already incorporated into the major browsers. And of course, most importantly for present purposes, the standard in no way defines or harmonizes the use of DRM.

The W3C has clearly and succinctly explained the operation of the proposed extension:

The W3C is not creating DRM policies and it is not requiring that HTML use DRM. Organizations choose whether or not to have DRM on their content. The EME API can facilitate communication between browsers and DRM providers but the only mandate is not DRM but a form of key encryption (Clear Key). EME allows a method of playback of encrypted content on the Web but W3C does not make the DRM technology nor require it. EME is an extension. It is not required for HTML nor HTML5 video.

Like many internet commentators, Tim Wu fundamentally doesn’t like DRM, and his position here would appear to reflect his aversion to DRM rather than a response to the specific issues before the W3C. Interestingly, in arguing against DRM nearly a decade ago, Wu wrote:

Finally, a successful locking strategy also requires intense cooperation between many actors – if you protect a song with “superlock,” and my CD player doesn’t understand that, you’ve just created a dead product. (Emphasis added)

In other words, he understood the need for agreements in vertical distribution chains in order to properly implement protection schemes — integration that he opposes here (not to suggest that he supported them then, but only to highlight the disconnect between recognizing the need for coordination and simultaneously trying to prevent it).

Vint Cerf (himself no great fan of DRM — see here, for example) has offered a number of thoughtful responses to those, like Wu, who have objected to the proposed standard. Cerf writes on the ISOC listserv:

EME is plainly very general. It can be used to limit access to virtually any digital content, regardless of IPR status. But, in some sense, anyone wishing to restrict access to some service/content is free to do so (there are other means such as login access control, end/end encryption such as TLS or IPSEC or QUIC). EME is yet another method for doing that. Just because some content is public domain does not mean that every use of it must be unprotected, does it?

And later in the thread he writes:

Just because something is public domain does not mean someone can’t lock it up. Presumably there will be other sources that are not locked. I can lock up my copy of Gulliver’s Travels and deny you access except by some payment, but if it is public domain someone else may have a copy you can get. In any case, you can’t deny others the use of the content IF THEY HAVE IT. You don’t have to share your copy of public domain with anyone if you don’t want to.

Just so. It’s pretty hard to see the competition problems that could arise from facilitating more content providers making content available on the open web.

In short, Wu wants the W3C to develop limitations on rules when there are no relevant rules to modify. His dislike of DRM obscures his view of the limited nature of the EME proposal, which would largely track, rather than lead, the actions already being undertaken by the principal commercial actors on the internet, and which merely creates a structure for facilitating voluntary commercial transactions in ways that enhance the user experience.

The W3C process will not, as Wu intimates, introduce some pernicious, default protection system that would inadvertently lock down content; rather, it would encourage the development of digital markets on the open net rather than (or in addition to) through the proprietary, vertical markets where they are increasingly found today. Wu obscures reality rather than illuminating it through his poorly considered suggestion that EME will somehow lead to a new set of defaults that threaten core freedoms.

Finally, we can’t help but comment on Wu’s observation that

My larger point is that I think the history of the anti-circumvention laws suggests is (sic) hard to predict how [freedom would be affected]– no one quite predicted the inkjet market would be affected. But given the power of those laws, the potential for anti-competitive consequences certainly exists.

Let’s put aside the fact that W3C is not debating the laws surrounding circumvention, nor, as noted, developing usage rules. It remains troubling that Wu’s belief that there are sometimes unintended consequences of actions (and therefore a potential for harm) would be sufficient to lead him to oppose a change to the status quo — as if any future, potential risk necessarily outweighs present, known harms. This is the Precautionary Principle on steroids. The EME proposal grew out of a desire to address impediments that prevent the viability and growth of online markets that sufficiently ameliorate the non-hypothetical harms of unauthorized uses. It is a modest step towards addressing a known universe of harms. A small step, but something to celebrate, not bemoan.

Geoffrey A. Manne is Executive Director of the International Center for Law & Economics

Dynamic versus static competition

Ever since David Teece and coauthors began writing about antitrust and innovation in high-tech industries in the 1980s, we’ve understood that traditional, price-based antitrust analysis is not intrinsically well-suited for assessing merger policy in these markets.

For high-tech industries, performance, not price, is paramount — which means that innovation is key:

Competition in some markets may take the form of Schumpeterian rivalry in which a succession of temporary monopolists displace one another through innovation. At any one time, there is little or no head-to-head price competition but there is significant ongoing innovation competition.

Innovative industries are often marked by frequent disruptions or “paradigm shifts” rather than horizontal market share contests, and investment in innovation is an important signal of competition. And competition comes from the continual threat of new entry down the road — often from competitors who, though they may start with relatively small market shares, or may arise in different markets entirely, can rapidly and unexpectedly overtake incumbents.

Which, of course, doesn’t mean that current competition and ease of entry are irrelevant. Rather, because, as Joanna Shepherd noted, innovation should be assessed across the entire industry and not solely within merging firms, conduct that might impede new, disruptive, innovative entry is indeed relevant.

But it is also important to remember that innovation comes from within incumbent firms, as well, and, often, that the overall level of innovation in an industry may be increased by the presence of large firms with economies of scope and scale.

In sum, and to paraphrase Olympia Dukakis’ character in Moonstruck: “what [we] don’t know about [the relationship between innovation and market structure] is a lot.”

What we do know, however, is that superficial, concentration-based approaches to antitrust analysis will likely overweight presumed foreclosure effects and underweight innovation effects.

We shouldn’t fetishize entry, or access, or head-to-head competition over innovation, especially where consumer welfare may be significantly improved by a reduction in the former in order to get more of the latter.

As Katz and Shelanski note:

To assess fully the impact of a merger on market performance, merger authorities and courts must examine how a proposed transaction changes market participants’ incentives and abilities to undertake investments in innovation.

At the same time, they point out that

Innovation can dramatically affect the relationship between the pre-merger marketplace and what is likely to happen if the proposed merger is consummated…. [This requires consideration of] how innovation will affect the evolution of market structure and competition. Innovation is a force that could make static measures of market structure unreliable or irrelevant, and the effects of innovation may be highly relevant to whether a merger should be challenged and to the kind of remedy antitrust authorities choose to adopt. (Emphasis added).

Dynamic competition in the ag-biotech industry

These dynamics seem to be playing out in the ag-biotech industry. (For a detailed look at how the specific characteristics of innovation in the ag-biotech industry have shaped industry structure, see, e.g., here (pdf)).  

One inconvenient truth for the “concentration reduces innovation” crowd is that, as the industry has experienced more consolidation, it has also become more, not less, productive and innovative. Between 1995 and 2015, for example, the market share of the largest seed producers and crop protection firms increased substantially. And yet, over the same period, annual industry R&D spending went up nearly 750 percent. Meanwhile, the resulting innovations have increased crop yields by 22%, reduced chemical pesticide use by 37%, and increased farmer profits by 68%.

In her discussion of the importance of considering the “innovation ecosystem” in assessing the innovation effects of mergers in R&D-intensive industries, Joanna Shepherd noted that

In many consolidated firms, increases in efficiency and streamlining of operations free up money and resources to source external innovation. To improve their future revenue streams and market share, consolidated firms can be expected to use at least some of the extra resources to acquire external innovation. This increase in demand for externally-sourced innovation increases the prices paid for external assets, which, in turn, incentivizes more early-stage innovation in small firms and biotech companies. Aggregate innovation increases in the process!

The same dynamic seems to play out in the ag-biotech industry, as well:

The seed-biotechnology industry has been reliant on small and medium-sized enterprises (SMEs) as sources of new innovation. New SME startups (often spinoffs from university research) tend to specialize in commercial development of a new research tool, genetic trait, or both. Significant entry by SMEs into the seed-biotechnology sector began in the late 1970s and early 1980s, with a second wave of new entrants in the late 1990s and early 2000s. In recent years, exits have outnumbered entrants, and by 2008 just over 30 SMEs specializing in crop biotechnology were still active. The majority of the exits from the industry were the result of acquisition by larger firms. Of 27 crop biotechnology SMEs that were acquired between 1985 and 2009, 20 were acquired either directly by one of the Big 6 or by a company that itself was eventually acquired by a Big 6 company.

While there is more than one way to interpret these statistics (and they are often used by merger opponents, in fact, to lament increasing concentration), they are actually at least as consistent with an increase in innovation through collaboration (and acquisition) as with a decrease.

For what it’s worth, this is exactly how the startup community views the innovation ecosystem in the ag-biotech industry, as well. As the latest AgFunder AgTech Investing Report states:

The large agribusinesses understand that new innovation is key to their future, but the lack of M&A [by the largest agribusiness firms in 2016] highlighted their uncertainty about how to approach it. They will need to make more acquisitions to ensure entrepreneurs keep innovating and VCs keep investing.

It’s also true, as Diana Moss notes, that

Competition maximizes the potential for numerous collaborations. It also minimizes incentives to refuse to license, to impose discriminatory restrictions in technology licensing agreements, or to tacitly “agree” not to compete…. All of this points to the importance of maintaining multiple, parallel R&D pipelines, a notion that was central to the EU’s decision in Dow-DuPont.

And yet collaboration and licensing have long been prevalent in this industry. Examples are legion, but here are just a few significant ones:

  • Monsanto’s “global licensing agreement for the use of the CRISPR-Cas genome-editing technology in agriculture with the Broad Institute of MIT and Harvard.”
  • Dow and Arcadia Biosciences’ “strategic collaboration to develop and commercialize new breakthrough yield traits and trait stacks in corn.”
  • Monsanto and the University of Nebraska-Lincoln’s “licensing agreement to develop crops tolerant to the broadleaf herbicide dicamba. This agreement is based on discoveries by UNL plant scientists.”

Both large and small firms in the ag-biotech industry continually enter into new agreements like these. See, e.g., here and here for a (surely incomplete) list of deals in 2016 alone.

At the same time, across the industry, new entry has been rampant despite increased M&A activity among the largest firms. Recent years have seen venture financing in AgTech skyrocket — from $400 million in 2010 to almost $5 billion in 2015 — and hundreds of startups now enter the industry annually.

The pending mergers

Today’s pending mergers are consistent with this characterization of a dynamic market in which structure is being driven by incentives to innovate, rather than monopolize. As Michael Sykuta points out,

The US agriculture sector has been experiencing consolidation at all levels for decades, even as the global ag economy has been growing and becoming more diverse. Much of this consolidation has been driven by technological changes that created economies of scale, both at the farm level and beyond.

These deals aren’t fundamentally about growing production capacity, expanding geographic reach, or otherwise enhancing market share; rather, each represents a restructuring of the way the companies do business, reflecting today’s shifting agricultural markets and the advanced technology needed to respond to them.

Technological innovation is unpredictable, often serendipitous, and frequently transformative of the ways firms organize and conduct their businesses. A company formed to grow and sell hybrid seeds in the 1920s, for example, would either have had to evolve or fold by the end of the century. Firms today will need to develop (or purchase) new capabilities and adapt to changing technology, scientific knowledge, consumer demand, and socio-political forces. The pending mergers appear to fit this mold exactly.

As Allen Gibby notes, these mergers are essentially vertical combinations of disparate, specialized pieces of an integrated whole. Take the proposed Bayer/Monsanto merger, for example. Bayer is primarily a chemicals company, developing advanced chemicals to protect crops and enhance crop growth. Monsanto, on the other hand, primarily develops seeds and “seed traits” — advanced characteristics that ensure the heartiness of the seeds, give them resistance to herbicides and pesticides, and speed their fertilization and growth. In order to translate the individual advances of each into higher yields, it is important that these two functions work successfully together. Doing so enhances crop growth and protection far beyond what, say, spreading manure can accomplish — or either firm could accomplish working on its own.

The key is that integrated knowledge is essential to making this process function. Developing seed traits to work well with (i.e., to withstand) certain pesticides requires deep knowledge of the pesticide’s chemical characteristics, and vice-versa. Processing huge amounts of data to determine when to apply chemical treatments or to predict a disease requires not only that the right information is collected at the right time, but also that it is analyzed in light of the unique characteristics of the seeds and chemicals. Increased communications and data-sharing between manufacturers increases the likelihood that farmers will use the best products available in the right quantity and at the right time in each field.

Vertical integration solves bargaining and long-term planning problems by unifying the interests (and the management) of these functions. Instead of relying on arm’s length negotiation, a merged Bayer/Monsanto, for example, can better direct R&D on complicated ag-chem products through fully integrated departments and merged areas of expertise. A merged company can also coordinate investment decisions (instead of waiting up to 10 years to see what the other company produces), avoid duplication of research, adapt to changing conditions (and the unanticipated course of research), pool intellectual property, and bolster internal scientific capability more efficiently. All told, the merged company projects spending about $16 billion on R&D over the next six years. Such coordinated investment will likely garner far more than either company could from separately spending even the same amount to develop new products.

Controlling an entire R&D process and pipeline of traits for resistance, chemical treatments, seeds, and digital complements would enable the merged firm to better ensure that each of these products works together to maximize crop yields, at the lowest cost, and at greater speed. Consider the advantages that Apple’s tightly-knit ecosystem of software and hardware provides to computer and device users. Such tight integration isn’t the only way to compete (think Android), but it has frequently proven to be a successful model, facilitating some functions (e.g., handoff between Macs and iPhones) that are difficult if not impossible in less-integrated systems. And, it bears noting, important elements of Apple’s innovation have come through acquisition….

Conclusion

As LaFontaine and Slade have made clear, theoretical concerns about the anticompetitive consequences of vertical integration are belied by the virtual absence of empirical support:

Under most circumstances, profit-maximizing vertical-integration and merger decisions are efficient, not just from the firms’ but also from the consumers’ points of view.

Other antitrust scholars are skeptical of vertical-integration fears because firms normally have strong incentives to deal with providers of complementary products. Bayer and Monsanto, for example, might benefit enormously from integration, but if competing seed producers seek out Bayer’s chemicals to develop competing products, there’s little reason for the merged firm to withhold them: Even if the new seeds out-compete Monsanto’s, Bayer/Monsanto can still profit from providing the crucial input. Its incentive doesn’t necessarily change if the merger goes through, and whatever “power” Bayer has as an input supplier is a function of its scientific know-how, not its merger with Monsanto.

In other words, while some competitors could find a less hospitable business environment, consumers will likely suffer no apparent ill effects, and continue to receive the benefits of enhanced product development and increased productivity.

That’s what we’d expect from innovation-driven integration, and antitrust enforcers should be extremely careful before blocking or circumscribing these mergers, lest they end up thwarting, rather than promoting, consumer welfare.